When we think about manufacturing work, we measure labor productivity as the ratio of the output of goods and services to the labor hours devoted to producing that output: output per hour. (See the U.S. Dept. of Labor.)
Remember the discussion of Project Constraints and Requirements? That’s where I said the project requirements were a tradeoff among how much (feature set), when (time to market), and how good (defect levels). That’s the reason we can’t use output per hour as a software (or any knowledge worker) productivity measurement. Here’s why: if you don’t care how good a deliverable is, I can have it for you almost immediately. It won’t work, but my “productivity,” if all you consider is when I say the deliverable is complete, is very high. (Not much time spent, so the ratio is high.)
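Here’s a toy sketch of that ratio problem. All the numbers and the quality-weighting idea are mine, purely for illustration, not any standard productivity formula:

```python
# Toy illustration (hypothetical numbers): raw output-per-hour rewards
# fast, broken delivery when quality is left out of the ratio.

def output_per_hour(units_delivered: float, hours: float) -> float:
    """Naive labor-productivity ratio: output divided by labor hours."""
    return units_delivered / hours

def quality_adjusted_per_hour(units_delivered: float, hours: float,
                              working_fraction: float) -> float:
    """Same ratio, but count only the share of the output that works."""
    return (units_delivered * working_fraction) / hours

# A rushed deliverable: 10 "features" in 2 hours, but only 5% work.
rushed = output_per_hour(10, 2)                        # looks very productive
rushed_real = quality_adjusted_per_hour(10, 2, 0.05)

# A careful deliverable: 10 features in 20 hours, 95% work.
careful = output_per_hour(10, 20)                      # looks slow
careful_real = quality_adjusted_per_hour(10, 20, 0.95)

print(rushed, careful)            # the naive ratio favors the rushed work
print(rushed_real, careful_real)  # the quality-adjusted ratio flips it
```

The naive ratio says the rushed deliverable is ten times as “productive”; weight by how much of it actually works and the careful one wins.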
If we want to start measuring developer or tester productivity, we have to define output per hour (or whatever time period you desire).
For developers, we can measure designs considered, lines of code generated, the number of defects generated along with the code, unit tests generated, unit tests run, how good that output is, and the time it took the developers to generate all that output.
For testers, we can measure test designs considered, number of tests generated, attempted, and run, test logs, defects detected, defects reported, number of measurements provided back to the developers as information, how good that output is, and all the time it took the testers to generate all that output.
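To see how many distinct numbers those two lists contain, here’s a sketch of them as plain records. The field names are my own shorthand for the items above; note there’s still no field that captures “how good the output is,” which is exactly the hard part:

```python
from dataclasses import dataclass

# Hypothetical records for the raw measures listed above.
# Collecting these is the easy part; combining them into one honest
# "productivity" number is what we don't know how to do.

@dataclass
class DeveloperOutput:
    designs_considered: int
    lines_of_code: int
    defects_generated: int   # defects created along with the code
    unit_tests_written: int
    unit_tests_run: int
    hours: float

@dataclass
class TesterOutput:
    test_designs_considered: int
    tests_generated: int
    tests_attempted: int
    tests_run: int
    defects_detected: int
    defects_reported: int
    measurements_to_developers: int  # information fed back to developers
    hours: float

dev = DeveloperOutput(3, 1200, 7, 40, 40, 35.0)
tester = TesterOutput(5, 60, 58, 55, 12, 11, 4, 35.0)
```

Even with every field filled in, there is no agreed-on way to divide these by `hours` and get a single number that means anything across projects.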
Not so simple, eh? If we just knew how to measure how good our work was, we’d have a chance to measure individual productivity. The more I think about productivity, the more I know it’s a project-by-project thing, not a person-by-person thing. Yes, some people have more effective output than others. And I’m not sure that matters.
If I can figure out how to share some data tomorrow, I will. Most of my data is under non-disclosure, so I have to be careful with what and how I talk about data.