Managing Testing Resources: Five Suggestions for the Project Manager

Copyright © 1998 Cem Kaner and Johanna Rothman

Many project managers don't know what to expect from a testing organization. They don't know what the group does, how the product is going to be tested, when things will be done, what deliverables to expect, or how to find this information out. Complicating matters, some test managers want to keep things this way.

We've been successful at managing both development projects and testing groups, mainly for companies that develop and publish packaged software. We see the testing effort as an integral part of the overall process of developing an appropriate quality product on time and within budget. To succeed at this, though, the project manager must be able to see schedules, to receive meaningful deliverables, and to recognize genuine problems, which test groups–like all other software groups–have in plenty.

This article is written for project managers, with suggestions on how to work with a test group and hold them accountable for their work on your project. In particular, we recommend that you:

  1. assess the risks for your project as a whole;
  2. assess the risks associated with the testing sub-project;
  3. lay out criteria for important milestones, and stick to them;
  4. develop a project plan for the testing sub-project; and
  5. track testing progress against the plan.

We are NOT suggesting that you manage the test group. We are not suggesting that you eliminate the intellectual independence of the test group. And we are definitely not suggesting that you should develop these assessments and project plans yourself. What we're saying is that the test group provides services to your project, just like the programming groups do (you track their progress, don't you?), just like the documentation group does, just like the other groups that make pieces of the product that you have to release. You have a responsibility, as the manager of the overall project, to ensure that the services provided to you are effective and timely. To do that, you need to understand what will be done, when it will be done, what can derail it, and how those inevitable problems are to be managed.

Assessing the risks for your project as a whole

More and more project leaders are thinking about risk, how to assess risk and plan for it. We can't address that general problem here, but if you could use some starting references to the literature, we suggest that you look at the Software Program Manager's Network; the Software Engineering Institute's Risk FAQ; and the Project Management Institute's Project Management Body of Knowledge.

All that we'll say here is that you have a challenge when building a product, because you have to trade off four factors:

  • time to market
  • cost to market
  • reliability of delivered product
  • feature set.

You can't optimize your project against all of these. The product will probably be late or over budget or unreliable or lacking in features. If you manage well, you get to pick which of these dimensions suffers most, and which is held closely to your initial plan. The risk assessment question is, “What could happen on your project that would increase your time, raise your costs, keep your reliability low, or force you to cut features?” Listing the risks (the “what could happens”) is the first step in managing them.

Assessing the risks associated with the testing sub-project

The testing part of your project plan has serious risks. Some of our favorite risk questions are:

  1. How non-negotiable is the ship date?
  2. Are there fixed dates that must be met for milestones or components of the product?
  3. How likely is it that the test group will get the software on schedule? What are their contingency plans if they get incomplete software, less stable than promised, late? For example, can they add staff (competent staff) late in the project?
  4. What technical areas of the product do the current members of the test group not understand? Can they achieve a sufficient understanding of them in the time available? If not, what is your plan to ensure that those areas will be effectively tested?
  5. Which areas of the program must be well tested? Where can you not afford to cut corners, and why?
  6. Are there regulatory or legal requirements that the product must meet?
  7. Is the project design rooted in customer research? Is there room for legitimate argument about the goodness of design of individual features? If so, how will you and the test group manage the inevitable wave of design suggestions that come from testers who are often more attuned to customer requirements than many software developers?
  8. Some features are so important that shipment will stop if they are not working well. When will the test group do its most powerful pass with these features? Are they planning an intense enough effort? Do they have time to conduct it?
  9. Is your test group focused on improving the quality of the product or on proving that you're stupid?
  10. How attentive to detail and design are your programmers? Can they accept criticism and use it effectively? What has to be done to make them more productive in their dealing with testers? (We measure productivity in terms of the speed with which you get the right product out for the right customer, at the right quality level.)

Discovering and facing issues like these is just one step in running a successful project. You aren't going to solve them just by listing them. And you won't solve them all at the start of the project. (Or, for some of them, ever.) Having them clear, though, will help you understand where to focus your managerial attention, money, and time. If you want to ship the right quality product within budget and on time, then you have to protect your project from the defects, delays and overruns posed by these risks.

You have to work closely with the test manager in assessing these risks. Otherwise, you are thinking in a vacuum. Even if you have years of project management and testing experience, you are working in a vacuum if you aren't working with the people who will have to face the risks that you are trying to manage. You and the test manager may not agree on these risks, on how important they are, or on who should do what to manage them. However, it is very valuable for you to understand each other's assessment and management approach.

Lay out criteria for important milestones, and stick to them

An important tool for managing the project, and your relationship with the test group, is a milestone criteria chart. This lays out criteria for moving the project through different development milestones. How complete is the project supposed to be at a certain point? How should we measure that completeness? These are project-wide issues, and though a good test manager or test lead will gladly help you develop them, these are ultimately your issues to clarify.

Brian Lawrence and Bob Johnson present an excellent set of milestone definitions and checklists at their website. We don't agree with all of their definitions, but that's not the point of their lists. The lists are intended as a starting point and a model. You can customize them for your project. For an alternative set of milestone-specific ideas, see Chapter 12 in Kaner, Falk & Nguyen's Testing Computer Software. You can probably find additional milestone lists and definitions–none will be perfect for your project, but the mix might help you settle on a specific set of criteria that work for you.

One of us (Johanna) helped a client develop a set of project criteria and is in a position to publish some of the details. We can't reveal the true name of the company or the product. Here, we call it the “Messenger” product. Johanna was retained mid-project to manage Messenger and get it released. Johanna wrote in more detail about “Messenger” in “Achieving Repeatable Processes” in the June 1998 Software Development.

Messenger had been in the market already. This upgrade was of particular interest to specific, important customers. We had negotiated specific, firm delivery dates for interim versions of the product. The product had to go beta in April and it had to ship in July. These customers' acceptance of the final version of the new product would depend on our meeting of these dates with the agreed features at the agreed level of reliability.

As part of our planning, we developed the criteria listed in Table 1. (Note: these are fairly generic criteria. There were some very specific other ones, too, but we're not in a legal position that lets us tell you about those. In your project, you will include items that are more specific than these.)

Table 1: Messenger's milestone criteria

System Test Entry Criteria
  • All code must compile and build for all platforms.
  • All developer tests must pass.
  • The code is frozen.
  • All features except tokens are complete.

Beta Criteria
  • All features complete in code, with developer tests.
  • All code must compile and build for all platforms.
  • All developer tests must pass.
  • All available tests for beta customer must run and pass.
  • All current bugs are entered into the bug-tracking system.
  • First draft documentation is available and shippable to customers.
  • The code is frozen.
  • Technical support training plan is in place, and the people are in place.
  • There are fewer than 36 open high priority bugs.
  • Ship Beta before April 30.

Release Criteria
  • All code must compile and build for all platforms.
  • Zero high priority bugs.
  • Document workarounds for all open bugs in release notes.
  • All planned SQA tests run, > 90 percent pass.
  • Number of open bugs decreasing for last three weeks.
  • All Beta site reports obtained and evaluation documented.
  • Reliability criterion: Simulate one week of usage …
  • Final documentation ready to print.
  • A working demo runs on previous release.
  • (Performance criterion)
  • At least two Beta site references.
  • Ship release before July 15.
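Criteria like these lend themselves to a simple mechanical status check. As a minimal sketch (the criterion names, status fields, and numbers below are illustrative, not taken from the Messenger project), each criterion can be expressed as a description plus a pass/fail predicate over a snapshot of project status:

```python
# Minimal sketch of a milestone-criteria checklist. Each criterion pairs a
# human-readable description with a predicate over the current status.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Criterion:
    description: str
    check: Callable[[Dict], bool]

# Hypothetical Beta criteria, loosely modeled on the kinds shown in Table 1.
BETA_CRITERIA = [
    Criterion("All developer tests pass",
              lambda s: s["failing_dev_tests"] == 0),
    Criterion("Fewer than 36 open high-priority bugs",
              lambda s: s["open_high_priority_bugs"] < 36),
    Criterion("First-draft documentation shippable",
              lambda s: s["draft_docs_done"]),
]

def milestone_report(criteria: List[Criterion],
                     status: Dict) -> Tuple[List[str], List[str]]:
    """Return (met, unmet) criterion descriptions for a status snapshot."""
    met = [c.description for c in criteria if c.check(status)]
    unmet = [c.description for c in criteria if not c.check(status)]
    return met, unmet

# Example weekly snapshot: 41 open high-priority bugs block the milestone.
status = {"failing_dev_tests": 0,
          "open_high_priority_bugs": 41,
          "draft_docs_done": True}
met, unmet = milestone_report(BETA_CRITERIA, status)
print("Unmet:", unmet)
```

A report like this makes the weekly review concrete: the milestone is not met until the unmet list is empty, and everyone can see which items are blocking it.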

Note that the testing group is vitally interested in these criteria, but the ultimate decision-maker for them was me (Johanna) because I was the project manager. The testing group helped me understand whether the project was meeting the criteria; they helped me understand how to word the criteria and what else should be included. They played an important role. But the list was mine.

Looking at Messenger's product release criteria, we see a focus on getting to the ship date with a certain set of features, but not a particularly low defect rate. More precisely, several classes of defects didn't matter (as far as the executives and the market researchers were concerned). The overall project reliability had to be understood and reasonable. Specific aspects of the product had to be immaculate. Errors in those areas became high priority quickly. Additionally, anything that genuinely interfered with early use of the product became high priority. Getting ourselves clear on these expectations was essential for successful testing. How could a test group provide support for these very specific objectives if they didn't understand them?

Messenger's system test entry criteria were chosen as the minimum set that would allow system test to start, even if not all the features were complete. Allowing testers to start early meant that they could find problems early. Every study that we (Johanna and Cem) have seen says that the sooner that you find and fix bugs, the cheaper. We wanted every opportunity to make our dates, and that meant getting good information as soon as possible.

The project was a success. We released the beta version two weeks late. We might have released an earlier version that had been a beta candidate, but it didn't meet our beta criteria. However, we were able to explain our criteria to our customers who had insisted on beta copies, and to explain our status against those criteria pretty precisely. This level of control made them comfortable and they accepted the late release without protest. We also had disputes with the testing group, who wanted to continue testing in areas that we had made clear were not relevant to the customers' acceptance criteria. The clarity of the written, approved beta and release criteria went a long way toward keeping those difficult discussions focused and within sensible time bounds. We shipped the product on time and the key customers were extremely happy with it.

Developing a project plan for the testing sub-project

To determine whether the project has met the criteria (which is Johanna's view of the testing task) or to prove that the product has not met the criteria (Cem's view of the task), the test group has a lot of work to do, probably involving more than one person, over a period of weeks or months (or years). Somehow, they have to be able to tell what is supposed to be done, what they've actually gotten done so far, what's left to do, and when they have achieved a goal or met a milestone.

Different test managers handle the planning problem differently. Forget about the incompetents and the turf-hoarders and the blame-it-all-on-you experts. Competent, cooperative test managers differ significantly in how they schedule and in how they communicate their schedule to the rest of the company. We've had success with variants of a method written up by Kaner (“Negotiating Testing Resources: A Collaborative Approach,” Proceedings of the International Quality Week Conference, San Francisco, 1996) and we recommend it to testing groups. However, you can't impose a method on a good test manager (you can try, but the next test manager might reject your method too, after the first one quits). You can ask for clear communication.

What should be in the plan

Here are some of the things that we think it is fair to request (and expect) in the testing project plan:

First, a clear statement about the minimum level of testing that will be done for every area of the product (program plus documentation plus marketing materials plus associated hardware). Some areas will get more than this, but none will get less. What is that minimum? Do you think it is enough? Too much?

Second, a description of different levels of depth of testing that will be used in different areas of the program. For example, we often think in terms of four categories:

  • Mainstream or Normal Use testing works the product gently. The tester tries out the various options but is not intentionally using extreme values to break the product. In mass market software, this level includes a complete verification of the program against the user manual. (For a discussion of the need for documentation testing in mass-market products, see Kaner, “Liability for Defective Documentation”, Software QA Quarterly, volume 2, #3, p. 8, 1995; Kaner & Pels, “User Documentation Testing: Ignore At Your Own Risk”, Customer Care, volume 7, #4, pp. 7-8, 1996.)
  • Guerilla testing involves ad hoc testing done by someone who is skilled at finding errors on the fly. It is one person's best shot at finding bugs. This approach is typically time-limited. For example, to say that an area will be guerilla-level tested, you might mean that this area of the program will receive a total of two days of ad hoc testing, spread across the project. Normally, guerilla testing is done after (not instead of) mainstream testing.
  • Formally planned testing involves carefully thought through test cases that are intended to thoroughly test that area. Depending on your company's philosophy of testing, this might mean a set of test cases that collectively trace back to (check) every item in a list of specifications. Or it might mean a set of test cases that cover all of the extreme values and difficult options and uses of the program (or other product component) in this area. Or something else. It is harsh testing, intended to expose those problems that a customer would find if the area were not tested and fixed.
  • Planned regression testing involves carefully thought through test cases that are run frequently, perhaps every build or every few builds. They are designed to recheck an area of the product that was well tested, to determine whether it is still as stable as it was previously. Developing this test suite takes much longer than the development of the first good plan for testing an area. Here you are selecting fewer tests (or automating many of them), searching hard for efficiencies and for test cases that would be particularly revealing of side effect problems.

Your test manager might use different names and different descriptions. There's nothing magic about ours. You just need some descriptions that are clear, clearly different, and that cover the options that the test group will actually use.

Third, a list of the areas of the product. The test group will define the “areas” in its own way. You might help them with this or not. They have to be free to organize their work in a way that works for them, which might be different from how you would do it. However, you should be able to get from them a list of areas that together include all of the things that the testers will test.

Fourth, for each area of the product, a list of sub-areas or sub-tasks. This list should be detailed. It should include anything that takes a day (or even half a day) of work. Having things broken down this finely makes it possible to accurately estimate the size of the task and to accurately track how much work is remaining to be done. You might not be able to get this list–some people refuse to estimate tasks this finely, dividing the project into two-week phases instead. We would want more than that, but you might not be able to force it.

Fifth, for every sub-area of the product, the list should specify how much time it will take to test that sub-area at the level of depth that it will be tested.

Sixth, a total across sub-areas. For every major area of the product, how much time will be spent testing, and, overall, what is the level of testing of this area?

This list of areas and sub-areas gives you something to review, to negotiate, and to track progress against. If the test group wants to spend too much time on one area, you can ask why they intend to test this area's sub-areas at the levels they have chosen. What tradeoffs are they making? As you come to understand the test group's tradeoffs, you might decide that they really do need more time and money. Or you might persuade them to test some areas less intensely (with your support). Or you might come away with a well-understood disagreement. In any case, the level of detail of the list is what makes possible the calm, task-oriented (rather than pointy-haired-manager-wants-to-save-money) discussion that safeguards the quality of the product by focusing the most resources on the most important tasks.
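The breakdown described above (areas, sub-areas, depth levels, and time estimates rolled up into per-area totals) can be sketched as a small data structure. The area names, depth labels, and day counts here are invented for illustration, not drawn from any real test plan:

```python
# Illustrative testing task list: each product area maps to sub-tasks, each
# with a depth level and a time estimate in days.
TEST_PLAN = {
    "File import": [
        ("CSV parsing",    "formally planned",   2.0),
        ("Legacy formats", "guerilla",           1.0),
        ("Error handling", "formally planned",   1.5),
    ],
    "Printing": [
        ("Page layout",    "mainstream",         0.5),
        ("Driver matrix",  "planned regression", 3.0),
    ],
}

def area_totals(plan):
    """Total estimated days per area, for review and negotiation."""
    return {area: sum(days for _, _, days in tasks)
            for area, tasks in plan.items()}

print(area_totals(TEST_PLAN))  # {'File import': 4.5, 'Printing': 3.5}
```

Per-area totals like these give the project manager and test manager a shared, concrete basis for the tradeoff discussion: which sub-tasks to deepen, which to cut, and what each choice costs.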

Don't make these scheduling mistakes

Project managers sometimes react with shock when they see an honest estimate of the time needed to do some testing tasks. If the number is too big, you have to manage it. But don't make these common mistakes, which will bite you in the tush later.

  • Don't pressure people to promise more testing in less time. They can't. Instead, cut time by cutting tasks or by helping people become more efficient.
  • Don't build expectations of (unpaid) overtime into your scheduling. Testers work overtime voluntarily, to make up for lost time or add creativity or depth to their work in order to meet their own professional standards. This is important flexibility, for them and for the project. Don't make them give it up or people will burn out and/or quit.
  • Don't forget to allow time for vacations, sickness, and holidays.
  • Don't underestimate time spent on administration, staff development, and other non-testing tasks. Assume that people attend meetings, spend time on reviews, help people on other projects, write testing project plans (this isn't free, you know) and so on. Stick with realistic estimates of this overhead.
  • Don't expect the testing task list to be complete, even if it is detailed. There will always be late surprises and unexpected complications. Allow a fudge factor in your overhead estimate for this.
  • Don't bet that this is the last version of the schedule. Plan when to iterate the test schedule.

Track testing progress

If the test team has a detailed list of tasks and an estimate of how long each one should take, then every week, they can report progress against this. For each area of testing, how much time was spent and how close is it to completion? Are they spending more time per task than they expected? Because the areas are broken down into specific tasks that don't take many days each, everyone can tell when specific things are getting finished. You are less likely to see wild overestimates of progress, followed by long unexpected schedule slips.
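A weekly report of this kind can be sketched as follows, assuming each task carries an estimate, the time spent so far, and a done flag (all task names, field names, and numbers here are illustrative):

```python
# Sketch of weekly progress tracking against the detailed task list:
# compare actual time spent and completion status to the estimates.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Task:
    name: str
    estimated_days: float
    spent_days: float
    done: bool

def weekly_report(tasks: List[Task]) -> Dict:
    """Summarize completion, remaining work, and tasks running over estimate."""
    remaining = sum(t.estimated_days for t in tasks if not t.done)
    overrun = [t.name for t in tasks if t.spent_days > t.estimated_days]
    done_count = sum(1 for t in tasks if t.done)
    return {"tasks_done": done_count,
            "estimated_days_remaining": remaining,
            "over_estimate": overrun}

tasks = [
    Task("Boundary tests for import", 1.0, 1.5, True),
    Task("Print driver matrix",       3.0, 1.0, False),
]
print(weekly_report(tasks))
```

Because the tasks are small (a day or less each), a report like this surfaces slippage week by week, instead of at the end of the project.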

It takes time to create this list and it takes time to track progress against it. The testing budget must allow time for this task or it won't be done.

Not only should the test team review their progress every week; the project team can also review the relevant milestone criteria every week at the project team meeting. Before system test, review the system test entry criteria, to see if the project is ready for system test. Before Beta, review the progress made against the Beta criteria, and then review the project status against the ship criteria.


As a project manager, you don't have to know how to test and you probably don't have the right (or reason) to micromanage the testing group. But this group does owe you services that are essential for the success of your project. You can and should hold the group accountable for those services (their quality and schedule) without interfering with the group's work.

[This paper was originally published in PMI ISSIGreview, Volume 10, Number 4.]
