Measuring and Controlling Product Quality

Are you Agile? See steering product quality in agile teams

Wouldn’t it be nice if you could have more insight into the quality of a product while it is being developed, instead of afterward? If you could measure the quality of your product and act when there is a risk that quality will fall below what your customer requires?

One way to measure quality is by using defect data to understand flaws in the way you develop and test your software. Defects found during testing or by customers contain lots of information that you can use to understand and improve the way you work. By using this data you can measure your quality, and as the saying goes, you can’t manage what you can’t measure. So managing (and improving) quality starts with measuring it.

The quality of a product can be measured, for instance, with fault density. Fault density is the number of defects that have been found relative to the size of the product (e.g. in function points or lines of code). It is relatively easy to measure and report. But fault density has some major drawbacks: you can only measure it after a phase is completed, and it does not give any insight into the causes if a product has a quality risk.
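As a minimal sketch (with made-up numbers, and lines of code as just one possible size measure), fault density could be computed like this:

```python
# Minimal sketch: fault density as defects per thousand lines of code (KLOC).
# The figures are made up for illustration; function points work the same way.

def fault_density(defects_found: int, size_kloc: float) -> float:
    """Number of defects found relative to product size."""
    return defects_found / size_kloc

# Example: 46 defects found in a product of 115 KLOC.
print(f"Fault density: {fault_density(46, 115):.2f} defects per KLOC")
```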

Wait a minute, you might say, we’re agile. Can we also steer product quality? Of course you can! I have applied the Fault Slip Through metric in Agile and Scrum projects too, which is described in steer product quality in agile teams. It’s quite similar to how it works in incremental or waterfall projects, so let’s take a look at that first.

Defect Slippage

My preference is to measure Defect Slippage, or Fault Slip Through. This measurement shows how many defects have been found in reviews, in testing, and by customers. By doing Root Cause Analysis (RCA) you can find ways to detect defects earlier, thus saving time and money.

You can make a data matrix showing where you have found defects and where they could have been found. The defect slippage, i.e. the percentage of defects that slipped through review or early testing, is an indicator of the quality of your testing and of the quality of the product.

To build a matrix with defect data, you need to collect data from reviews and testing, and classify the defects. A minimum classification is where the defect was found and where it could have been found. Additionally, you can classify where the defect was made: during the definition of the requirements, when the product architecture was set up, or as a coding error. Of course, classifying defects requires professional judgment and has to be done by members of the team that developed and tested the software.
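To illustrate such a matrix, here is a minimal sketch in Python; the phase names and the classified defects are made up, not data from an actual project:

```python
from collections import Counter

# Hypothetical phases, ordered from earliest to latest point of detection.
PHASES = ["review", "unit test", "function test", "system test", "customer"]

# Each classified defect: (phase where it was found, phase where it could have been found).
classified_defects = [
    ("review", "review"),
    ("function test", "review"),
    ("function test", "unit test"),
    ("system test", "review"),
    ("customer", "function test"),
]

# The Fault Slip Through matrix: how many defects were found in phase X
# that could already have been found in phase Y.
matrix = Counter(classified_defects)
for (found, expected), count in sorted(matrix.items()):
    print(f"found in {found:<13} | could have been found in {expected:<13} | {count}")

# Defect slippage: the percentage of defects found later than they could have been.
slipped = sum(1 for found, expected in classified_defects
              if PHASES.index(found) > PHASES.index(expected))
print(f"Defect slippage: {100 * slipped / len(classified_defects):.0f}%")
```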

Reducing Fault Slip Through

Let me give some examples of how Fault Slip Through can be used. In one project we collected defect data for each increment. After some increments, we discovered that the number of defects found in function test was increasing, while at the same time reviews were finding fewer defects.

After analyzing some defects, we found that the developers had insufficient understanding of the design rules, hence making defects in their code which they didn’t find during reviews. Training them in the design rules helped to lower the number of defects made and increased the number of defects found during reviews. Function test found fewer defects, since most of them had already been found earlier. Defect slippage was significantly reduced and the project saved time in function testing, much more time than was invested in the training and reviews.

Another example is using Fault Slip Through analysis to significantly reduce the number of defects found by customers, thus lowering maintenance costs. If defects are found by customers, the cost to solve them is much higher than when they are found before release. We measured Fault Slip Through and showed that investments in finding defects earlier and in preventing defects really paid off. Even reducing defect slippage by just 5% or 10%, which can be reached within months, already gave major savings. And when you also take into account what defects do to the quality reputation of your product, it becomes even more important to lower defect slippage.
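To show why even a few percent matters, here is a rough calculation with invented cost figures; these are not Ericsson numbers, and the assumed costs per defect before and after release are just placeholders:

```python
# Rough illustration with invented numbers: the cost effect of reducing
# the slippage of defects to customers by a few percentage points.
defects_per_release = 200
cost_before_release = 1_000   # assumed average cost to fix a defect found before release
cost_after_release = 10_000   # assumed average cost to fix a defect found by a customer

def release_cost(slippage_to_customer: float) -> float:
    slipped = defects_per_release * slippage_to_customer
    return slipped * cost_after_release + (defects_per_release - slipped) * cost_before_release

baseline = release_cost(0.20)          # 20% of the defects reach customers
improved = release_cost(0.20 - 0.05)   # slippage reduced by 5 percentage points
print(f"Savings per release: {baseline - improved:,.0f}")
```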

Project Defect Model

I created an Excel spreadsheet to measure and analyze defect slippage and to estimate the number of latent defects in a product at release. It’s called the Project Defect Model. This model supports analysis of the data with both calculated values and graphs, comparing actuals to estimates in terms of current status and trends.
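The spreadsheet itself is not shown here, but as a generic illustration (not the actual Project Defect Model), a latent-defect estimate can be derived from an assumed total number of defects introduced and the defects found so far per phase:

```python
# Generic illustration, not the actual Project Defect Model: estimating latent
# defects at release from an assumed total and the defects found per phase.
estimated_defects_introduced = 250   # assumption, e.g. based on historical fault density
found_per_phase = {"review": 90, "function test": 70, "system test": 45}

found_total = sum(found_per_phase.values())
latent_at_release = estimated_defects_introduced - found_total
print(f"Found so far: {found_total}, estimated latent defects at release: {latent_at_release}")
```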

I developed the Project Defect Model when I was working for Ericsson. We used this model in many (waterfall, iterative, and agile) projects to control product quality, and it helped us to meet delivery dates, stay within budget, evaluate risks, and make decisions regarding release and maintenance. We also collected process and product data to improve the performance of the R&D organization.

For more details about the Project Defect Model, the pilot project where we tried it, and the application of this model within Ericsson: the 2nd edition of my book What Drives Quality will feature a detailed description of this model and explain how it can be used by agile teams.

Conclusions

As my first examples show, measuring and managing Fault Slip Through can help you improve the quality of your products. And it can also be used in incremental projects. In a future article, I will describe how to steer product quality in agile teams.

(This blog was posted May 22, 2011; updated Oct 6, 2011: extended with graphs; updated Jan 27, 2013: added agile; updated Jun 18, 2018: added the Project Defect Model.)



Ben Linders

I help organizations with effective software development and management practices. Active member of several networks on Agile, Lean and Quality, and a frequent speaker and writer.

This Post Has 6 Comments

  1. Hello Ben,

    I see defect density as one of the important measures, which also helps to estimate the defects that need to be uncovered in the product or project based on the size/complexity of the product/project. This helps to make sure defects are contained in specific phases before they escape.

    Of course, as you said, actual defect density can only be found after we complete the implementation phase, but this measure needs to be available in order to see the release quality to the customer and to estimate the defects for future projects.

    Also, by tracking the actual number of defects phase-wise, we can arrive at a project defect model for future projects, considering the project environment is the same. By knowing the factors of variation we can arrive at a defect prediction model using correlation or regression analysis.

    Overall a good article on FST

  2. Hi Ben, I really like the idea! When this is used in an agile context where no bugs are expected to leave the sprints, it is vital to get feedback from production in order to measure the actual quality. If the bugs that actually did slip through are analyzed, a team can really improve.

    1. Exactly. It’s not about the figures, but about defects that customers find, which should have been found earlier. Or shouldn’t have been made at all; that’s where Root Cause Analysis helps: http://www.benlinders.com/2011/business-reason-for-root-cause-analysis/

  3. Although this post is rather “old” it’s still valid. I am currently trying to set up a way to measure the number of defects found during each “quality check”. However, there is one thing I find hard to do. How do you measure the number of issues found during code reviews? Especially when the team is working on stories created in a tool (Jira) and adds comments to the stories. Unless you want to go over each ticket and manually count the number of defects added as comments. Do you have experience in this matter?

  4. Measuring review defects can be hard indeed. As an alternative, you can measure the number of revisions of a code module (see the sketch after these comments).

    It’s not a direct replacement, but it can be used as a trigger: when a code module changes often, you can investigate the causes. It could be that a developer is doing TDD and that modules are changed often. But it could also be that a code module is error prone and perhaps a quality risk.

    For measuring quality in Agile I use a different approach; see steering product quality in agile teams.
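Following up on the suggestion above to count the number of revisions of a code module: a minimal sketch, assuming the code lives in a git repository (the 90-day window and the threshold of 10 revisions are arbitrary choices):

```python
import subprocess
from collections import Counter

# Count how often each file changed in the last 90 days (assumes a git repository).
log = subprocess.run(
    ["git", "log", "--since=90 days ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

revisions = Counter(line for line in log.splitlines() if line.strip())

# Arbitrary threshold: flag modules that change often as candidates for investigation.
for path, count in revisions.most_common():
    if count >= 10:
        print(f"{path}: {count} revisions - error prone, or just frequent TDD refactoring?")
```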
