How Long Should the Testing Take?

When I do assessments, invariably someone says to me, “JR, how long should the testing really take? It takes our testers (4/5/6/30, insert your number of choice here) weeks, and we need it to take less time. Why can’t it take less time, and how can we tell what’s going on so we know how much testing we need?”

This is, of course, an “it depends” answer. I’ll try to untangle this.

If you do test-driven development, then there is rarely a lot of extra testing. When I coach teams who are transitioning to test-driven development, I recommend they plan for as much as one iteration at the end for final system test and customer acceptance test. Once the team has more experience with test-driven development, they can plan better. I have found a need for some amount of customer acceptance testing at the end. And that time varies by project and how involved the customers were all along.

If you’re using an agile lifecycle even without test-driven development, I recommend starting with one iteration’s length of final system testing. I am assuming that the developers are fixing problems and refactoring as necessary during an iteration — real agile development, not code-and-fix masquerading as agile.

If you’re using any of:

  • an incremental lifecycle such as staged delivery where you plan for interim releases (even if the releases aren’t to the customers)
  • an iterative lifecycle such as spiral or evolutionary delivery
  • a serial lifecycle such as phase gate or waterfall

I’d expect as much testing time as development time, but it doesn’t have to come all at the end as one long final system test, the way the lifecycle pictures suggest. Proactive work such as reviews, inspections, working in pairs, unit testing, integration testing, nightly builds with a smoke test, and fixing problems as early as they are detected can all decrease the duration of final system test. If you’re the project manager, ask the developers what steps they are taking to reduce the number of defects they create. If you’re the functional manager, work with the project manager and the developers to create a set of proactive defect-prevention practices that make sense for your project.

If you’re the test manager, recognize that final system test includes several distinct activities: testing the product, finding defects, and verifying the fixes. Your first step is to separate these tasks when you estimate the duration of final system test.

One question you should be able to answer is: how long will one cycle of “complete” testing take? We all know we can’t do complete testing, so your version of complete is the tests you planned to run, plus whatever exploratory or other tests you need in one cycle to provide enough information about the product under test. I realize that’s vague; I don’t know how to be more explicit, because this is a project-by-project estimate. If you work with good developers, the cycle time can decrease a bit from the first cycle to the last, because the testers learn how to set up the tests and the product has fewer defects, allowing the testing to proceed faster.

Once you know how long a cycle of testing takes, estimate how long it will take the developers to fix the problems. I use this data: the number of problems found per cycle in the last project, my gut feel for how many more or fewer we should find per cycle in this project, and bug-tracking system data telling me the average cost to fix a defect pre-release. If the first cycle in the last release found 200 problems, and it took the developers half a person-day to fix each one, that’s 100 person-days of fixing; with 10 developers, I estimate 10 working days to fix problems. That’s a long time. And yes, I was on a project where that’s what it took. It was agonizing; we thought we’d never be done fixing problems.
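To make that arithmetic concrete, here is a minimal sketch in Python. The figures (200 problems, half a person-day per fix, 10 developers) come from the example above; treat them as placeholders for your own project’s data.

    # Rough fix-time arithmetic between test cycles.
    # All inputs are placeholders from the example above; substitute your own data.
    problems_found = 200      # defects found in the first test cycle
    fix_effort_days = 0.5     # average person-days to fix one defect (from the bug tracker)
    developers = 10           # developers available to fix problems

    total_fix_effort = problems_found * fix_effort_days  # person-days of fixing
    calendar_days = total_fix_effort / developers        # working days, assuming an even spread

    print(f"Fixing effort: {total_fix_effort:.0f} person-days")
    print(f"Calendar time with {developers} developers: {calendar_days:.0f} working days")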

Now you know how long a cycle takes and how long the fixes after each cycle will take. How many cycles of testing do you plan for? When I set up projects, I tend to use some proactive defect detection and prevention techniques, so I generally plan on three cycles of testing. In a casual conversation a number of years ago, Cem Kaner told me he planned on eight cycles of testing. For one project where I was a contract test manager, we had almost 30 cycles of testing. I can’t tell you how many cycles you’ll need, because that depends on the product’s complexity, how good your tests are, and how many defects the product starts with.

Here’s what I have noticed from my work with multiple organizations. The groups who want to decrease testing time tend to perform the least proactive work to reduce the overall number of defects, and they typically rely on primarily manual black-box testing at the end of the project. I understand their desire, but they’ve set up their lifecycle and processes to produce exactly the wrong result.

The best estimate I can offer is this: estimate the number of cycles you’ll need, the duration of one cycle of testing, and the time it takes developers to fix problems (and testers to verify the fixes) between cycles. Add the testing and fixing time per cycle, multiply by the number of cycles you think you’ll need, and you have an estimate of the testing time needed. BTW, when I do this, I never give a single number; I always give the time per cycle, my estimate of the number of cycles, and my estimate of defect-fixing and verification time. That way I can show the costs of not performing proactive defect prevention in a nice way, and I can show the uncertainty in my estimate.
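Here is the same idea as a minimal sketch in Python, with made-up numbers. It deliberately reports the pieces (time per cycle, number of cycles, fix-and-verify time) rather than a single total, so the uncertainty stays visible.

    # Testing-time estimate: cycles * (testing per cycle + fixing/verifying per cycle).
    # All inputs are hypothetical; plug in your own project's data.
    cycle_test_days = 10     # working days to run one "complete" cycle of testing
    cycle_fix_days = 10      # working days to fix problems and verify the fixes
    estimated_cycles = 3     # planned cycles (could be 3, 8, or nearly 30, as above)

    per_cycle_days = cycle_test_days + cycle_fix_days
    total_days = estimated_cycles * per_cycle_days

    print(f"Per cycle: {cycle_test_days} testing + {cycle_fix_days} fixing/verifying = {per_cycle_days} days")
    print(f"Planned cycles: {estimated_cycles}")
    print(f"Estimated testing time: {total_days} working days")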

If you want to reduce testing time, create a low-defect product, test all the way through with a variety of test techniques, and use iterations, so that every 2-4 weeks you know where you stand with defects. The lower the number of defects and the more sophisticated your tests, the shorter your testing time will be.


If you use bloglet to subscribe, my apologies. Sometime in the past week, bloglet got stuck and couldn’t generate the email for this blog. I believe I’ve fixed it. And I’ll keep checking.
