In Part 1, I discussed the agile project system. In Part 2, I discussed the tester's job in agile. In Part 3, I discussed expectations about documentation (which is what the original question was on Twitter). In this part, I'll talk about how you “measure” testers.
I see a ton of strange attempts at measuring the value of testers. I've seen these measurements:
- How many bugs did a single tester report?
- How many times did a tester say, “This isn't any good. I'm sending it back.”
- How many test cases did a tester develop?
All of these measures are harmful. They are also surrogate measures.
The first rule of measurement is:
Measure what you want to see.
Anything other than what you want to see is a surrogate measurement.
Do you want to see many bug reports? (Notice I did not say defects. I said bug reports.) If you measure the number of bug reports, you will get that. You might not get working software, but you'll get bug reports. (Rant on: Bug reports might not report unique problems, actual defects, in the product. Rant off)
Do you want testers to pass judgment on the code? Ask how many times they threw something back “over the wall” or rejected the product (or the build).
Do you want to measure test cases? You'll get a large number. You might have terrible code coverage or scenario coverage, but you'll get test cases.
In waterfall or phase-gate, you might have measured those surrogate measures, because you could not see working product until very late in the project.
In agile, we want to see running tested features, working product. Why not measure that?
Running tested features give us the possibility of other measures:
- Cycle time: how long it takes for a feature to get through the team.
- Velocity: how many features we can finish over a time period.
- If we look at a kanban board, we can see the flow through the team. That allows us to see where we have blockers for the team. What's queued for test?
- When can we see working software? If we only have running tested features every week or so, we can see new working software only that often. Is that often enough?
- What is the team happiness? Is the team working together, making progress together?
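Measures such as cycle time and velocity fall out of very simple feature records. Here is a minimal sketch in Python; the feature names, dates, and field names are all hypothetical, purely for illustration:

```python
from datetime import date

# Hypothetical feature records: when the team started a feature and
# when it became a running, tested feature (done).
features = [
    {"name": "login",  "started": date(2024, 3, 1), "done": date(2024, 3, 5)},
    {"name": "search", "started": date(2024, 3, 2), "done": date(2024, 3, 9)},
    {"name": "export", "started": date(2024, 3, 6), "done": date(2024, 3, 12)},
]

# Cycle time: how long each feature takes to get through the team.
cycle_times = [(f["done"] - f["started"]).days for f in features]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Velocity: how many features the team finished in a given period.
period_start, period_end = date(2024, 3, 1), date(2024, 3, 10)
velocity = sum(1 for f in features if period_start <= f["done"] <= period_end)

print(f"average cycle time: {avg_cycle_time:.1f} days")
print(f"features finished in period: {velocity}")
```

Note that both numbers describe the team, not any individual tester or developer: the record tracks when a feature was done, not who touched it.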
You “measure” the team, looking for throughput. If the team doesn't have throughput, do some root cause analysis and impediment removal. That's because we have a team approach to product development. (See Part 1.)
Back in the '80s and early '90s, we learned we had a software “crisis.” Our systems got more complex. We, as developers, could not test what we created. The testing industry was born.
Some people thought testing software was like testing manufacturing. In manufacturing, you duplicate (I apologize to manufacturing people, this is a simplification) the same widget every time. It's possible in manufacturing to separate the widget design and prototyping from widget manufacturing. The SEI and the CMM/CMMI used the metaphors of manufacturing when they described software development. We emphasized process before (remember structured design and CASE tools?), but now—wow—process was everything.
Software product development is nothing like manufacturing. It's a lot more like the design and engineering of the widget, where we learn. We learn as we develop and test code.
That's the point of agile. We can incorporate learning to improve the product as we proceed.
If you measure people as if they are widgets, they will behave like widgets. If you measure people as individuals, they may well behave as individuals, maximizing their own returns.
When a team uses agile, they create working product. Look for and measure that. When a team uses agile, they work as a team. Reward them as a team.
Update: The entire series: