Measuring and Controlling Product Quality

Are you Agile? See steering product quality in agile teams

Wouldn’t it be nice if you could have more insight into the quality of a product while it is being developed, instead of afterwards? If you could measure the quality of your product, and act when there is a risk that quality will drop below what your customer requires?

One way to measure quality is by using defect data to understand flaws in the way you develop and test your software. Defects found during testing or by customers contain a lot of information that you can use to understand and improve the way you work. The data lets you measure your quality, and as the saying goes, you can’t manage what you can’t measure. So managing (and improving) quality starts with measuring it.

The quality of a product can be measured, for instance, with fault density: the number of defects found relative to the size of the product (e.g. in function points or lines of code). It is relatively easy to measure and report. But fault density has some major drawbacks: you can only measure it after a phase has been completed, and it gives no insight into the causes when a product has a quality risk.
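To make the metric concrete, here is a minimal sketch of the calculation; the numbers are invented for illustration and not taken from a real project:

    # Fault density: defects found relative to product size (illustrative numbers).
    defects_found = 45      # defects found in a completed phase
    size_kloc = 30.0        # product size in thousands of lines of code (KLOC)

    fault_density = defects_found / size_kloc
    print(f"Fault density: {fault_density:.1f} defects per KLOC")  # -> 1.5 defects per KLOC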

Wait a minute, you might say, we’re agile. Can we also steer product quality? Of course you can! I have applied the Fault Slip Through metric in Agile and Scrum projects as well, which is described in steer product quality in agile teams. It’s quite similar to incremental or waterfall projects, so let’s take a look at that first.

Defect Slippage

My preference is to measure Defect Slippage, or Fault Slip Through. This measurement shows how many defects have been found in reviews, in testing, and by customers. By doing Root Cause Analysis (RCA) you can find ways to detect defects earlier, thus saving time and money.

You can make a data matrix showing where you have found defects and where they could have been found. The defect slippage, i.e. the percentage of defects that have slipped through review or early testing, is an indicator of the quality of your testing and of the quality of the product.

To build a matrix with defect data, you need to collect data from reviews and testing, and classify the defects. A minimum classification is where the defect has been found and where it could have been found. Additionally, you can classify where the defect was made: during the definition of the requirements, when the product architecture was set up, or it could be a coding error. Of course classifying defects needs professional judgment, and it has to be done by members of the team that developed and tested the software.
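As a sketch of how such a matrix and the slippage percentage could be derived from classified defects, here is a small example; the phase names and defect records are invented for illustration:

    from collections import Counter

    # Each classified defect records where it was found and where it could have
    # been found (phase names and data are invented for illustration).
    defects = [
        {"found_in": "function test", "earliest": "review"},
        {"found_in": "function test", "earliest": "function test"},
        {"found_in": "customer",      "earliest": "unit test"},
        {"found_in": "review",        "earliest": "review"},
        {"found_in": "customer",      "earliest": "review"},
    ]

    # Build the matrix: (found in, could have been found in) -> number of defects.
    matrix = Counter((d["found_in"], d["earliest"]) for d in defects)
    for (found, earliest), count in sorted(matrix.items()):
        print(f"found in {found:13} | could have been found in {earliest:13} | {count}")

    # Fault slip through: share of defects found later than they could have been.
    slipped = sum(1 for d in defects if d["found_in"] != d["earliest"])
    print(f"Defect slippage: {100 * slipped / len(defects):.0f}%")  # -> 60%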

Reducing Fault Slip Through

Let me give some examples of how Fault Slip Through can be used. On one project we collected defect data for each increment. After some increments we discovered that the number of defects found in the function test was increasing, while at the same time reviews were finding fewer defects.

After analysis of some defects we found that the developers had insufficient understanding of the design rules, hence making defects in their code which they didn’t find during reviews. Training them in the design rules helped to lower the number of defects made and to increase the number of defects found during reviews. The function test found fewer defects, since most of them had already been found earlier. Defect slippage was significantly reduced, and the project saved time in function testing, much more time than was invested in the training and reviews.

Another example is using Fault Slip Through analysis to significantly reduce the number of defects found by customers, thus lowering maintenance costs. When defects are found by customers, the cost to solve them is much higher than when they are found before release. Fault Slip Through has been measured, and it proved that investments in finding defects earlier and preventing defects really paid off. Even reducing defect slippage by just 5% or 10%, which can be reached within months, already gave major savings. And when you also take into account what defects do to the quality reputation of your product, lowering defect slippage becomes even more important.
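To get a feeling for why even a small reduction pays off, here is a back-of-the-envelope calculation; all defect counts and cost figures are assumptions for illustration, not data from the projects mentioned above:

    # Savings estimate for reducing defect slippage (all numbers are illustrative assumptions).
    defects_per_release = 200      # total defects found per release
    slip_to_customer = 0.20        # 20% of defects currently slip through to customers
    cost_internal = 2.0            # hours to fix a defect found before release
    cost_customer = 20.0           # hours to fix a defect found by a customer

    def total_cost(slip_fraction):
        slipped = defects_per_release * slip_fraction
        internal = defects_per_release - slipped
        return internal * cost_internal + slipped * cost_customer

    baseline = total_cost(slip_to_customer)
    improved = total_cost(slip_to_customer - 0.05)  # slippage reduced by 5 percentage points
    print(f"Savings per release: {baseline - improved:.0f} hours")  # -> 180 hours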

Conclusion

As my first examples show, measuring and managing Fault Slip Through can help you to improve the quality of your products, and it can also be used in incremental projects. In a future article I will describe how to steer product quality in agile teams.

(This blog was posted May 22, 2011, updated October 6, 2011 to extend it with graphs, and updated January 27, 2013 to add agile.)

About BenLinders

I help organizations with effective software development and management practices: continuous improvement, collaboration and communication, and professional development, to deliver business value to customers. Active member of several networks on Agile, Lean and Quality, and a frequent speaker and writer.

6 Responses to Measuring and Controlling Product Quality

  1. Sankaran Natarajan says:

    Hello Ben,

    I see defect density as one of the important measures, which also helps to estimate the defects that still need to be uncovered in the product or project based on its size/complexity. This helps to make sure defects are contained in specific phases before they escape.

    Of course, as you said, actual defect density can only be found after we complete the implementation phase, but this measure needs to be available in order to see the release quality for the customer and to estimate the defects for future projects.

    Also, by tracking the actual number of defects phase-wise, we can arrive at a project defect model for future projects, assuming the project environment is the same. By knowing the factors of variation we can arrive at a defect prediction model using correlation or regression analysis.

    Overall a good article on FST

  2. Cirilo Wortel says:

    Hi Ben, I really like the idea! When this is used in an agile context where no bugs are expected to leave the sprints, it is vital to get feedback from production to measure the actual quality. If the bugs that actually did slip through are analysed, a team can really improve.

    • Ben Linders says:

      Exactly. It’s not about the figures, but about defects that customers find, which should have been found earlier, or shouldn’t have been made at all; that’s where Root Cause Analysis helps: http://www.benlinders.com/2011/business-reason-for-root-cause-analysis/

  3. Beutels Nancy says:

    Although this post is rather “old” it’s still valid. I am currently trying to set up a way to measure the number of defects found during each “quality check”. However, there is one thing I find hard to do: how do you measure the number of issues found during code reviews? Especially when the team is working on stories created in a tool (Jira) and adds comments to the stories. Unless you want to go over each ticket and manually count the number of defects added as comments. Do you have experience in this matter?

  4. BenLinders says:

    Measuring review defects can be hard indeed. As an alternative you can measure the number of revisions of a code module.

    It’s not a direct replacement, but it can be used as a trigger: when a code module changes often, you can investigate the causes. It could be that a developer is doing TDD and that modules are changed often. But it can also be that a code module is error prone and perhaps a quality risk.

    For measuring quality in Agile I use a different approach, see steering product quality in agile teams.
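    As an illustration of the revision-counting idea above, here is a minimal sketch that counts how often each file changed in a Git repository and lists the most frequently changed modules; treat it as a starting point, not a complete churn analysis:

        import subprocess
        from collections import Counter

        # Count how often each file was changed in the repository's history.
        # `git log --name-only --pretty=format:` prints only the changed file names per commit.
        log = subprocess.run(
            ["git", "log", "--name-only", "--pretty=format:"],
            capture_output=True, text=True, check=True,
        )
        changes = Counter(line for line in log.stdout.splitlines() if line.strip())

        # Modules that change most often are candidates for a closer look (TDD churn or a quality risk).
        for path, count in changes.most_common(10):
            print(f"{count:4d}  {path}")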

  5. Pingback: What Drives Quality: Project Management | Ben Linders
