I’m familiar with managing defects by severity (how bad the problem is for a user who encounters it) and by priority (the business value of fixing the problem), but I had lunch yesterday with some folks who also use frequency of occurrence to manage defects.
They started this because they have a huge customer base, and if enough people perceive there’s a problem with the software, there is a problem. They also include downstream customers (i.e., people in the building who may be customers even if they don’t pay for the software) as part of the user community.
Here are their definitions (with a quick code sketch of the thresholds after the list):
1 – High: Encountered by many users, including downstream teams, in their normal course of work (> 10% of the user community or > 100 individual users)
2 – Medium: Encountered by some users, including downstream teams, in their normal course of work
3 – Low: Encountered by few users, including downstream teams, in their normal course of work (< 1% of the user community and < 10 individual users)
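To make the thresholds concrete, here’s a minimal sketch of the rating rule in Python. The thresholds are theirs; the function name, the raw-count inputs, and the tie-breaking are my own assumptions about how you might encode it, not their actual implementation.

    def frequency_rating(affected_users: int, community_size: int) -> str:
        """Rate a defect by frequency of occurrence.

        affected_users counts everyone who hits the problem in their
        normal course of work, downstream teams included. Thresholds
        match the definitions above; everything else is a sketch.
        """
        fraction = affected_users / community_size
        # High: > 10% of the community OR > 100 individual users
        if fraction > 0.10 or affected_users > 100:
            return "1 - High"
        # Low: < 1% of the community AND < 10 individual users
        if fraction < 0.01 and affected_users < 10:
            return "3 - Low"
        # Medium: everything between the High and Low thresholds
        return "2 - Medium"

So with a community of 5,000 users, a defect hitting 150 people rates High (more than 100 individuals, even though it’s only 3% of the community), one hitting 5 people rates Low, and anything in between defaults to Medium, which matches their deliberately loose middle definition.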
These weren’t easy definitions to settle on. Their product has substantial internal computation and a substantial GUI component, and the definitions appear to “punish” the teams responsible for the GUI, while the internal computation teams seem to have more flexibility. But what they’ve realized is that they need to modify the way they define and implement the GUI. The piece I particularly like is that the product architecture folks treat everyone (the rest of the developers and testers as well as customers) as part of the customer base.
I don’t think this works for everyone, but I like the idea of saying, “Who’s affected by this problem? If there are lots of them, let’s deal with it.”