Why Defects/KLOC Doesn't Supply Enough Information about Product Quality

A colleague emailed me a few days ago and asked, “for a code base with a [given size], what can we expect to see for numbers of defects per KLOC (given the actual industry average or given what the industry believes we should expect). We need some way of gauging whether or not our defect rates fall within the industry standard, or if we are better or worse than the industry standard.”

The question “Are we producing code with fewer, about the same, or more defects than the industry standard?” is a reasonable one. Unfortunately, I don't think it's a particularly helpful one.

I object to defects/kloc (defects per thousand lines of code) for these reasons:

  • Defects/kloc treats all defects equally. So if your developers went to great lengths to make all the code solid, but the writers didn't have enough time to bullet-proof the help, the defects/kloc number is misleading. Or, if the developers prevented a whole batch of serious errors but missed a bunch of not-so-serious ones, the numbers look the same as if the developers had missed errors across the whole project.
  • Defects/kloc changes over the course of the project. Depending on the practices the developers use, they will find defects at different rates. If the developers use inspection, peer review, or agile practices, they will find many more defects at the beginning of the project. If they don't use any of these practices, they will find a great number of defects at the end of the project. And if the project stops testing during the hockey stick of finding problems, the total-defects part of the ratio is undercounted, so the number is wrong.
  • Defects/kloc assumes that there is an “average” consequence to each defect. Each defect is unique, and sometimes it's the sum of a bunch of unrelated defects that matters to the customer's experience of the product.

OK, so what do I recommend instead of defects/kloc? If you must measure defects as part of measuring how good the product is, measure the defect escape rate post-release. At three months, six months, nine months, one year, and every three months after that, count the number of defects your customers found that you didn't know about. That's the numerator. The denominator is the total number of defects found, including those new ones. The better your perceived quality, the smaller the defect escape rate; the worse your perceived quality, the higher it is. If the customers don't find the defects, then the defects don't matter to the customers. Those defects may still matter to you, but they don't affect the customer's experience.
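
If it helps to see the arithmetic, here is a minimal sketch in Python. The function name and the sample counts are hypothetical, not from any particular tracking tool:

```python
def defect_escape_rate(escaped: int, known: int) -> float:
    """Escaped defects (found by customers, unknown to us before release)
    divided by the total number of defects found, including the escapes."""
    total = escaped + known
    return escaped / total if total else 0.0

# Hypothetical six-month snapshot: customers found 12 defects we didn't
# know about; we had already found 388 ourselves.
print(f"{defect_escape_rate(12, 388):.1%}")  # -> 3.0%
```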

To me, defects/kloc is something to measure when you want to see if your process is catching defects and dealing with them early. Snapshot code growth and defects/kloc weekly, compare the numbers each week, and you have some useful information you can use during the project to adjust course. But don't use defects/kloc to reward or punish developers.
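
Here's one possible sketch of that weekly snapshot, assuming you can already export lines of code and open defect counts from your own tools (the numbers here are invented):

```python
# Each tuple: (week label, total lines of code, open defects at snapshot).
# The data is made up for illustration.
snapshots = [
    ("W01", 120_000, 85),
    ("W02", 126_000, 92),
    ("W03", 131_000, 90),
]

previous = None
for week, loc, defects in snapshots:
    rate = defects / (loc / 1000)  # defects per KLOC
    trend = "" if previous is None else f" (was {previous:.2f})"
    print(f"{week}: {rate:.2f} defects/KLOC{trend}")
    previous = rate
```

It's the week-over-week trend that tells you whether your process is catching defects early; any single week's number on its own doesn't.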

High or low defects/kloc is not indicative of how good the developers are; it indicates some sort of process problem or success (or cover-up). The process is almost never a developer problem. Management decides where to spend the money. If the developers have a too-high defect rate, it's almost always because management has overconstrained the problem so much that the developers feel they have to take shortcuts. To me, defects/kloc is a way to blame developers for inadequate management. That's why I feel so strongly about it.

If you want to know about product quality, measure all six sides of the product quality equation. That will tell you about product quality more readily than defects/kloc will.

6 Replies to “Why Defects/KLOC Doesn't Supply Enough Information about Product Quality”

  1. Hi Johanna,
    Love the article. The link for “six sides of the product quality equation” is broken; do you know where it is? And do you have a post that details pragmatic and engaging software quality metrics/targets for a Scrum/agile environment?

    1. Hi Giles. Right now, today, I believe all of my image links are broken. Working on that. Might take some time. Sigh.

      A couple of things about metrics for agile:

      1. Scrum is not all of agile. You can be agile without using Scrum. And if you are using lean, you want to think about metrics for lean. Just to be clear!
      2. In Manage It!, I have a number of ideas for metrics, whether you use Scrum or not.
      3. In Manage Your Project Portfolio, I have more ideas for metrics, for agile, lean, and incremental projects.

      Did you see this post for programs: Measurements That Might Mean Something to a Program?

      It all depends on what you need to know. Are you looking for cycle time? For defect escape rate? For cost to fix a defect? For time per feature throughput? See why I say it all depends? I’m not trying to be squirrelly. It really does depend on your needs.

      Here’s some free advice: Reduce the size of your features. Reduce the length of your iterations. Make sure you integrate all the time and make sure you have a cross-functional team. Now, I bet you need to measure something different.

  2. Thank you, Johanna. I appreciate the post, and now I have some extra reading to do 🙂

    Having read this study (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.19.6603&rep=rep1&type=pdf), I’m going to drive some metrics based on customer satisfaction and importance.

    The primary goal is to improve software quality and team performance by reducing the defects the team has to fight with (through refactoring, redesign, and simplification), but to do it in such a way that the focus is driven by how unsatisfied customers are with quality aspects of the software.

    We have 1.4m lines of code. Fewer features? Yes, please.
