Oh dear. I was not sufficiently articulate in my last post. Both Frank and David in their comments asked about capacity, the output of the organization over time. That will teach me to post when I'm tired. (Maybe.) Let me try this again.
In each of these projects, senior management wanted more features than they received. Unsurprisingly (to me at least), the more features requested and the longer the project, the less the development staff (developers, testers, writers) could deliver. (Longer projects have more requirements churn than shorter projects, if only because the world keeps changing.)
| Project | # of feature requests | Requirements/Design elapsed time | # requirements desired | Implementation/Integration/Informal test elapsed time | # requirements implemented | Final test elapsed time | Total duration |
|---|---|---|---|---|---|---|---|
| Project 1 | 125 | 38 weeks | 450 | 31 weeks | 250 | 5 weeks | 74 weeks |
| Project 2 | 50 | 24 weeks | 200 | 12 weeks | 150 | 4 weeks | 40 weeks |
| Project 3 | 3 | 2 weeks | 12 | 1.5 weeks | 9 | 3 days | 4 weeks |
The feature requests aren't real requirements; they're ambiguous placeholders for the real requirements, such as “electronic signature” or “improve speed.” But that's what the organization has available at the start of the project. By the end of the requirements/design phase, they have real requirements, counted in the way the organization counts. That number is the number management expects to get out of the release. But there's one more problem: management has set the release date. In fact, management set the release date back at the beginning of the requirements/design phase, without any estimation input from the development staff. So management has fixed the number of people available to work on the project, the time for the project, and the number of features it wants. The project has only one clear degree of freedom: the number of defects it will deliver along with its features.
But there's an implicit degree of freedom here: the features themselves. Even though management claims it wants all the desired features, history shows it will accept fewer. The development organization counts on that, because the release date is set before the requirements are defined. Without firm requirements, it's impossible to estimate the time necessary. Without estimates, each project essentially rolls the dice to see if there's any way it can meet its desired commitments.
The organization's problem lies in the difference between the requirements desired and the requirements implemented. During the implementation phase, the developers realize they don't have sufficient time to implement some of the requirements as they stand.
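One way to see the size of that gap is to compute each project's delivery ratio (implemented vs. desired) and throughput (requirements delivered per week) straight from the table. This is a minimal sketch; the dictionary layout and names are mine, but the numbers come from the table above.

```python
# Data from the table above: desired vs. implemented requirements,
# and total project duration in weeks.
projects = {
    "Project 1": {"desired": 450, "implemented": 250, "weeks": 74},
    "Project 2": {"desired": 200, "implemented": 150, "weeks": 40},
    "Project 3": {"desired": 12, "implemented": 9, "weeks": 4},
}

for name, p in projects.items():
    ratio = p["implemented"] / p["desired"]      # fraction of desired features delivered
    throughput = p["implemented"] / p["weeks"]   # requirements delivered per week
    print(f"{name}: delivered {ratio:.0%} of desired, {throughput:.2f} req/week")
```

Note what falls out: the longest project (Project 1) delivered the smallest fraction of what was desired, about 56%, while the two shorter projects each delivered 75%, which matches the observation that longer projects deliver proportionally less.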
Management was convinced the testers were preventing them from obtaining all the features they wanted. A senior manager said, “If we shorten the testing time, we can spend more time in development and get the features we want.” As you can tell from the data, they could have done away with testing entirely and still not gotten all the features unless they changed the requirements and design activities. (In Project 1, final test took only 5 weeks out of 74; requirements/design took 38.) The problem with all of these projects is the inability to define requirements quickly and completely enough for the developers to implement them and the testers to verify them.
There are tons of reasons why the requirements definition phase can take a long time. In his comment, Frank mentioned multi-projecting. Another is that maintenance from a previous release can take developer time away from requirements definition work. Sometimes the product managers don't agree with each other on what their feature requests mean. Sometimes the analysts and product managers can't define the requirements unambiguously, so the developers end up making decisions or changing the requirements during implementation.
In this case, optimizing the project so that the testers could finish faster was the wrong approach. The organization has at least these choices: making the projects shorter, so the requirements/design phase is shorter; changing the way they define requirements and design; using a different lifecycle so they can continuously produce mini-releases; or using estimates based on the requirements to set the release date. There are probably more options.
What is clear to me is that, as Frank and David pointed out in their comments, the issue is the organization's capacity, not the capacity of one group. Management can ask for more than the development organization can deliver, but unless the development organization changes how it works, it can't deliver more. Lopping off testing (or, for that matter, requirements or any other phase), or optimizing for one phase, doesn't change the organization's capacity (output over time). The only thing that changes the organization's capacity is changing how people work (their practices and lifecycle).
I appreciate Frank and David for asking me what the heck I was thinking 🙂 If I'm still not making sense, please let me know.