Managing Defects by Severity and Frequency

I’m familiar with managing defects by severity (how bad the problem is for a user who encounters it) and by priority (the business value of fixing it), but I had lunch yesterday with some folks who also use frequency of occurrence to manage defects.

They started this because they have a huge customer base: if enough people perceive there’s a problem with the software, there is a problem. They also count downstream customers (i.e., people in the building who may be customers even if they don’t pay for the software) as part of the users.

Here are their definitions:

1 – High: Encountered by many users, including downstream teams, in their normal course of work (> 10% of the user community or > 100 individual users)

2 – Medium: Encountered by some users, including downstream teams, in their normal course of work

3 – Low: Encountered by few users, including downstream teams, in their normal course of work (< 1% of the user community and < 10 individual users)
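The three bands above can be sketched as a small classifier. This is a minimal illustration of the thresholds as stated (High: more than 10% of the community or more than 100 individuals; Low: less than 1% and fewer than 10 individuals; Medium: everything in between) — the function name and inputs are my own, not from the team described.

```python
def frequency_band(affected_users, community_size):
    """Classify a defect by how many users encounter it.

    Illustrative sketch of the definitions above; the Medium band is
    simply whatever falls between the High and Low thresholds.
    """
    share = affected_users / community_size
    if share > 0.10 or affected_users > 100:
        return "1 - High"
    if share < 0.01 and affected_users < 10:
        return "3 - Low"
    return "2 - Medium"

# 150 affected out of 5,000: only 3% of the community, but over
# 100 individuals, so it is still High.
print(frequency_band(150, 5000))  # 1 - High
# 5 affected out of 5,000: under 1% and under 10 individuals, so Low.
print(frequency_band(5, 5000))    # 3 - Low
```

Note that the High test is an "or" while the Low test is an "and", exactly as the definitions are worded: a defect hitting many individuals is High even in a very large community, but a defect must be rare by both measures to be Low.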

These weren’t easy definitions to settle on. Their product has substantial internal computation and a substantial GUI component, and these definitions appear to “punish” the teams responsible for the GUI, while the internal computation teams seem to have more flexibility. What they’ve realized is that they need to modify the way they define and implement the GUI. The piece I particularly like is that the product architecture folks treat everyone (the rest of the developers and testers as well as customers) as part of the customer base.

I don’t think this works for everyone, but I like the idea of saying, “Who’s affected by this problem? If there are lots of them, let’s deal with it.”

4 Replies to “Managing Defects by Severity and Frequency”

  1. In our organization, the priority of a bug is king. This value is set by the change control board (which handles defects and change requests), and the developers fix accordingly. Work planning, in other words.
    Severity is something that we testers really don’t know what to do with. Most bugs are “normal” and a few are marked above or below that, signalling how severe we think they are. BUT – how much information should we use when setting the severity? We know things about business value, and when we use that, we may be making the priority decision instead.
    This frequency factor is interesting and something we use when setting the priority of test cases. Maybe there is a good way to use it for defect management as well, but it is not easy…

  2. The frequency analysis sounds like a simplified version of a Pareto Analysis, also known as the 80-20 rule. This says that 20 percent of your problems will result in 80 percent of the occurrences of problem tickets. This analysis is fairly common and should be included in a prioritization system.

  3. I just found your blog and this is the first post that caught my eye. Frequency is an interesting measurement for the overall importance of a bug, it’s definitely something I take into consideration.
    I see its effectiveness in combination with severity (as you’ve defined it). For example, a lower-severity bug’s frequency may start to raise the overall importance of fixing it. It’s measuring the difference in importance to your business between one customer whose work is interrupted by a bug and fifty customers who are annoyed by a bug that isn’t stopping their work.
    A frequent bug likely means more support calls and emails to handle, and more time and money spent on customers. Suddenly that small, frequently occurring bug is a much more serious issue.
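The Pareto analysis mentioned in the second reply can be sketched as: rank defects by how many problem tickets they generate, then walk down the ranking until some share (classically 80%) of all tickets is covered. The function and defect names below are illustrative assumptions, not from the post.

```python
def pareto_cutoff(ticket_counts, share=0.80):
    """Return the defects that together account for `share` of all tickets.

    ticket_counts: dict mapping a defect id to its number of problem
    tickets. Sketch of a simple Pareto (80-20) analysis.
    """
    total = sum(ticket_counts.values())
    ranked = sorted(ticket_counts.items(), key=lambda kv: kv[1], reverse=True)
    picked, running = [], 0
    for defect, count in ranked:
        picked.append(defect)
        running += count
        if running / total >= share:
            break
    return picked

counts = {"BUG-1": 120, "BUG-2": 45, "BUG-3": 20, "BUG-4": 10, "BUG-5": 5}
# BUG-1 and BUG-2 together cover 165 of 200 tickets (82.5%),
# so 2 of 5 defects account for over 80% of the occurrences.
print(pareto_cutoff(counts))  # ['BUG-1', 'BUG-2']
```

Feeding the resulting short list into a prioritization system is one concrete way to act on frequency, as the reply suggests.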
