Are you agile? See Steering Product Quality in Agile Teams.
Wouldn’t it be nice to have more insight into the quality of a product while it is being developed, instead of afterward? If you could measure the quality of your product, and act when quality risks falling below what your customer requires?
One way to measure quality is to use defect data to understand flaws in the way you develop and test your software. Defects found during testing or by customers contain a lot of information that you can use to understand and improve the way you work. Using this data you can measure your quality, and as the saying goes, you can’t manage what you can’t measure. So managing (and improving) quality starts with measuring it.
The quality of a product can be measured, for instance, with fault density: the number of defects found relative to the size of the product (e.g. in function points or lines of code). It is relatively easy to measure and report. But fault density has some major drawbacks: you can only measure it after a phase is completed, and it gives no insight into the causes when a product has a quality risk.
Wait a minute, you might say, we’re agile. Can we also steer product quality? Of course you can! I have applied the Fault Slip Through metric in Agile and Scrum projects as well, as described in Steer Product Quality in Agile Teams. It’s quite similar to incremental or waterfall projects, so let’s take a look at those first.
My preference is to measure Defect Slippage, also known as Fault Slip Through. This measurement shows how many defects have been found in reviews, in testing, and by customers. By doing Root Cause Analysis (RCA) on them, you can find ways to detect defects earlier, saving time and money.
You can make a data matrix showing where you have found defects and where they could have been found. The defect slippage, i.e. the percentage of defects that slipped through review or early testing, is an indicator of the quality of your testing and of the quality of the product.
To build a matrix with defect data, you need to collect data from reviews and testing, and classify the defects. A minimum classification records where each defect was found and where it could have been found. Additionally, you can classify where the defect was made: during the definition of the requirements, when the product architecture was set up, or as a coding error. Of course, classifying defects requires professional judgment, and it has to be done by members of the team that developed and tested the software.
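To make the matrix concrete, here is a minimal sketch in Python. The phase names and the sample defect records are hypothetical, purely for illustration; real data would come from your review and test records.

```python
# Sketch of a defect-slippage (Fault Slip Through) matrix.
# Phase names and defect records below are hypothetical examples.

from collections import defaultdict

# Phases in the order defects can be caught, earliest first.
PHASES = ["review", "function test", "system test", "customer"]

# Each defect is classified by where it was found and where it
# could have been found (the earliest phase able to catch it).
defects = [
    {"found": "function test", "could": "review"},
    {"found": "review",        "could": "review"},
    {"found": "customer",      "could": "function test"},
    {"found": "system test",   "could": "review"},
]

# Build the matrix: rows = phase where found, columns = phase it belonged in.
matrix = defaultdict(int)
for d in defects:
    matrix[(d["found"], d["could"])] += 1

# Fault Slip Through: share of defects found later than they could have been.
slipped = sum(
    1 for d in defects
    if PHASES.index(d["found"]) > PHASES.index(d["could"])
)
slippage = slipped / len(defects)
print(f"Defect slippage: {slippage:.0%}")  # 3 of the 4 sample defects slipped: 75%
```

Counting only the cells above the diagonal (found later than possible) gives the slippage percentage; the per-cell counts show which phase pairs to target with Root Cause Analysis.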
Reducing Fault Slip Through
Let me give some examples of how Fault Slip Through can be used. On one project we collected defect data for each increment. After a few increments, we discovered that the number of defects found in function test was increasing, while at the same time reviews were finding fewer defects.
After analyzing some of these defects, we found that the developers had an insufficient understanding of the design rules, and were therefore introducing defects into their code that they didn’t find during reviews. Training them in the design rules helped to lower the number of defects made and to increase the number of defects found during reviews. Function test found fewer defects, since most of them had already been found earlier. Defect slippage was significantly reduced, and the project saved time in function testing, much more time than was invested in the training and reviews.
Another example is using Fault Slip Through analysis to significantly reduce the number of defects found by customers, thus lowering maintenance costs. A defect found by a customer costs much more to solve than one found before release. Where Fault Slip Through has been measured, it proved that investments in finding defects earlier and preventing defects really paid off. Even reducing defect slippage by just 5% or 10%, which can be achieved within months, already produced major savings. And when you also take into account what defects do to the quality reputation of your product, lowering defect slippage becomes even more important.
Project Defect Model
I created an Excel spreadsheet to measure and analyze defect slippage and to estimate the number of latent defects in a product at release. It’s called the Project Defect Model. The model supports analysis of the data with both calculated values and graphs, comparing actuals to estimates in terms of current status and trends.
I developed the Project Defect Model when I was working for Ericsson. We used it in many projects (waterfall, iterative, and agile) to control product quality, and it helped us to meet delivery dates, stay within budget, evaluate risks, and make release and maintenance decisions. We also collected process and product data to improve the performance of the R&D organization.
For more details about the Project Defect Model, the pilot project where we tried it, and application of this model within Ericsson, please see:
- Controlling Project Performance by Using the Project Defect Model presented at PSQT West.
- A Proactive Attitude Toward Quality: The Project Defect Model published in Software Quality Professional.
The 2nd edition of my book What Drives Quality will feature a detailed description of this model and explain how it can be used by agile teams.
As these examples show, measuring and managing Fault Slip Through can help you improve the quality of your products, and it can also be used in incremental projects. In a future article, I will describe how to steer product quality in agile teams.
(This blog was posted May 22, 2011; updated Oct 6, 2011: extended with graphs; updated Jan 27, 2013: added agile; and updated Jun 18, 2018: added the Project Defect Model.)