© 1998 Johanna Rothman.
Abstract
Organizations produce and buy packaged software to save time and money. Producers often expect those savings to come from reduced product testing, but that expectation frequently means the software test professionals (STPs) are not given enough time to test and assess a software product as thoroughly as they know is necessary.
Not everyone agrees with us STPs about what is necessary. Even if we only plan what we think is the minimum necessary, we are frequently frustrated by our inability to accomplish all the planned tasks. And just when we think we can get everything done by the end of the project, others can change the requirements or priorities.
One way to deal with the endless supply of testing work for packaged software is to develop product shipment criteria and prioritize the testing process. This paper presents a way to do this, using real-world examples to show how some organizations chose what to test and assess.
Introduction
I have found this approach useful for defining and managing test priorities:
- Develop a test strategy.
- Define product quality and milestone criteria.
- Assess the product against the milestone criteria.
Develop a Test Strategy
The first step to managing the test work is to define a test strategy. The strategy helps you decide what has to get done, and when. The strategy addresses:
- What functionality will and will not be tested and how the testing will be performed. Is installation, performance, or load testing also required for this product?
SmartStore's flagship product was a fast storage and access product. Performance was SmartStore's market differentiator: the reason customers bought the product. SmartStore's STPs had very specific plans for how to test performance, and for what kinds of performance to test. They knew the parameters expected to influence performance, especially the load on the network, the load on the server, and the size of the data set. The STPs defined six test scenarios that represented the performance they were willing to test on an ongoing basis, then created automated tests so they could run those scenarios against every build. Not all functionality was tested under load; they chose representative functionality, based on how SmartStore's four largest customers used the product.
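As a rough sketch of what such per-build performance scenarios can look like, the following Python fragment encodes scenarios as data and checks each run against a time budget. The scenario names, loads, data-set sizes, and budgets here are invented for illustration; the paper does not give SmartStore's actual six scenarios, and `exercise_product` is a stand-in for driving the real product.

```python
import time

# Hypothetical scenarios: names, loads, data-set sizes, and time budgets
# are invented for illustration, not SmartStore's actual six scenarios.
SCENARIOS = [
    {"name": "single client, small set", "clients": 1,  "rows": 10_000,  "budget_s": 1.0},
    {"name": "loaded server",            "clients": 50, "rows": 10_000,  "budget_s": 5.0},
    {"name": "large data set",           "clients": 1,  "rows": 500_000, "budget_s": 20.0},
]

def exercise_product(clients: int, rows: int) -> None:
    """Stand-in for driving the product under test; replace with real calls."""
    total = 0
    for _ in range(clients * rows // 100):
        total += 1  # placeholder work

def run_performance_suite() -> bool:
    """Run every scenario against the current build; True means all passed."""
    all_passed = True
    for s in SCENARIOS:
        start = time.perf_counter()
        exercise_product(s["clients"], s["rows"])
        elapsed = time.perf_counter() - start
        ok = elapsed <= s["budget_s"]
        all_passed = all_passed and ok
        print(f'{s["name"]:28s} {elapsed:7.3f}s  {"PASS" if ok else "FAIL"}')
    return all_passed

if __name__ == "__main__":
    raise SystemExit(0 if run_performance_suite() else 1)
```

Because the scenarios are plain data, adding the suite to a nightly build script is a one-line call, which is what makes running them against every build practical.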
- What priorities will be applied to the testing work? In what order will the features be tested?
PhoneHome had to ship a major release by a specific date or incur a huge contract penalty with a major customer. The PhoneHome test group had specified which tests would be run, in which order. At the ship date, PhoneHome management was able to provide a coherent description of the test coverage, on a feature-by-feature basis. The customer agreed not to use the untested features and was willing to take shipment, because its high-priority features had been tested.
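A minimal sketch of this kind of priority-ordered test run, with a feature-by-feature coverage report at the deadline, might look like the following. The features, priorities, and checks are hypothetical; PhoneHome's actual test inventory is not given in this paper.

```python
import time
from collections import defaultdict

# Hypothetical inventory: features, priorities (1 = highest), and checks
# are invented for illustration; these are not PhoneHome's actual tests.
TESTS = [
    {"feature": "call routing",   "priority": 1, "check": lambda: True},
    {"feature": "billing export", "priority": 1, "check": lambda: True},
    {"feature": "voice mail",     "priority": 2, "check": lambda: True},
    {"feature": "web admin",      "priority": 3, "check": lambda: True},
]

def run_until(budget_s: float) -> None:
    """Run tests in priority order until the time budget is spent, then
    report coverage feature by feature, including what went untested."""
    results = defaultdict(list)  # feature name -> list of pass/fail results
    start = time.monotonic()
    for test in sorted(TESTS, key=lambda t: t["priority"]):
        if time.monotonic() - start > budget_s:
            break  # deadline reached: lower-priority tests stay unrun
        results[test["feature"]].append(test["check"]())
    for test in TESTS:
        runs = results.get(test["feature"])
        status = "UNTESTED" if not runs else ("PASS" if all(runs) else "FAIL")
        print(f'{test["feature"]:15s} priority {test["priority"]}: {status}')

run_until(budget_s=60.0)
```

The point of the report loop is that "UNTESTED" is an explicit, visible answer, which is what let PhoneHome's management describe coverage honestly at the ship date.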
- The criteria applied to the product at different milestones, such as entry into system test, Beta, and Release. What does “Ready” mean for this product at these milestones?
A major finding at StockIt's project retrospective was that different people in Engineering had different ideas of what “ready” meant. Some developers thought the software was ready if it compiled. Some testers thought the product was ready when no more bugs could be found. Some managers thought it was ready when the ship date arrived. The project manager and the chief architect had their own, less one-sided definition of “ready”. Unfortunately, they had not explained their criteria for each milestone. Without agreement on “ready” at the different milestones, the organization was in continuous disagreement about the state of the product and its readiness for testing and shipment.
The test strategy describes explicit choices of what will be tested, in what priority, and how to know when the product is shippable. The test strategy also tells you what you need to measure to assess project state and the project's ship-readiness.
This paper will not address part of the first component of the test strategy: how to test functionality. For excellent ideas on functionality testing see Kaner [4].
Define Product Quality and Milestone Criteria
For software products, testing priorities are the result of project goals. Market forces make this even more obvious for packaged software. Every project has its own goals. To determine the appropriate testing priorities for a project, you must first define and understand your project goals. The end goal in packaged projects is to make money for the company. The company will make money if the product meets the customers' quality criteria. Your project and testing goals must reflect your project's quality criteria.
The three possible project goals for any given software project are [3]:
- Minimize time to market, by delivering the release as soon as possible.
- Maximize customer satisfaction, by delivering a specific set of features.
- Minimize the number of known and shipped defects.
In my experience, a project can have one and only one of these goals as its top priority. The project manager and the test manager must address the other two goals within that context. If the test manager accepts and understands this prioritization, she can actively decide to shape the testing process to achieve the goal. As the project proceeds, the STPs can make tradeoffs consistent with the priorities. If the test manager or the project manager refuses to accept that only one of these goals can be paramount, the project is not likely to meet any of its goals.
When you start a project, you have to pick one of these three as the predominant goal of the release. Choose which aspect of quality is most valuable to your customers. The project manager and the test manager then make trade-off decisions about the other two possible goals within the context of the primary goal.
Choose Product Quality Criteria
Product quality criteria help you choose what functionality you need to test. The quality criteria are the definition of what the customers want in a product: what defines the product as “good enough” to ship [2].
One way to choose the right quality priorities for a project is to use Geoffrey Moore's high-tech marketing model [5] to understand the market imperatives. Table 1 provides a view, based on his model, of the interaction between the definition of quality and the product lifetime. For more information, see [7]. The users (and potential users) have different perceptions of product quality over the lifetime of a product. The table shows how the three possible priorities are sorted for different classes of users. As a product evolves, it moves from one target user group to another, and (in the model) the priorities shift accordingly.
Table 1: Quality Perceptions over Product Lifetime
| Product Life / Market Pressure | Introduction (Enthusiasts) | Early Adopters (Visionaries) | Mainstream (Pragmatists) | Late Majority | Skeptics (Laggards) |
| --- | --- | --- | --- | --- | --- |
| Time to Market | High | High | Medium | Low | Low |
| Feature Set | Low | Medium | Low | Medium* | Medium |
| Low Defects | Medium | Low | High | High | High |
*Medium pressure for features does not mean pressure for new features. It means that the promised features must be in the product and must be working.
To define product quality, choose which priority is most valuable to your customers. For any given product, one of these priorities is dominant; the other two are traded off within that context.
Some people are not easily convinced that there is only one top priority. Here is a check to verify the choice: if it is three weeks before the scheduled ship date, the defect count is higher than the developers would prefer, and the last feature is not quite implemented, what does management choose to do? If management chooses to ship anyway, then time to market is the top priority. If management chooses to ship as soon as that feature is implemented, then customer satisfaction through features is the top priority. If management chooses to hold shipment in order to fix the defects, then low defects is the primary definition of quality.
One way to think about quality criteria is to ask what your customers value [8] most in this product: What will make them decide to buy or not buy this product? That primary buying criterion, their definition of quality, is what your customers value for this release.
Once you've decided on the prioritization of shipment, features, and low defects, you can create specific and measurable quality criteria at different milestones for this project.
Define Milestone Criteria
Packaged software producers typically assume their highest priority is time to market. When time to market is a dominant concern to either the project team or corporate management, I recommend generating criteria for each project milestone and assessing the product against them. I have had good success generating criteria for these milestones: system test entry, Beta ship, and product shipment.
System test entry criteria may be very different from the ship criteria. Especially for packaged software, many producers and customers expect that the state of the system going into system test or Beta will not be the same as the final shippable system.
Speedy's product is in the Mainstream: their customers have low tolerance for defects, the customers do not request new features, and the company has the internal pressure (based on hardware improvements) to ship a new release at least once a year. A recent release to enable new and faster hardware had the following system test and product shipment criteria:
Table 2: Speedy's system test entry and product shipment criteria
| System test entry criteria | Product shipment criteria |
| --- | --- |
The criteria for starting the system test are not as demanding as the criteria for product shipment. The system test criteria assume that the product going into system test is at least as good as the previous release, for the regression test. The ship criteria verify that new functionality works as fast as the old functionality did.
Speedy did not have a Beta test, so there were no Beta criteria.
The project team defined these criteria by thinking about what was important to Speedy and its customers. For Speedy, product performance is not just a feature; lack of performance is a significant defect. Since low defects were the product's major definition of quality, many of the ship criteria were based on defects. In Speedy's case, they were willing to postpone final product shipment to meet the performance criteria and other low defect product shipment criteria.
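To make a ship criterion like “new functionality works as fast as the old” concrete, here is a minimal sketch of a baseline comparison in Python. The operation names, timings, and 5% tolerance are invented for illustration, and `measure` is a stand-in for real instrumentation; a real baseline would come from measurements of the previous release.

```python
# Hypothetical baseline: operation names and timings are invented.
BASELINE_S = {"store record": 0.012, "fetch record": 0.008}

def measure(operation: str) -> float:
    """Stand-in for timing one operation on the new build."""
    return {"store record": 0.011, "fetch record": 0.009}[operation]

# Ship criterion sketch: each operation on the new build must be at least
# as fast as the previous release, within a 5% tolerance.
for op, old_s in BASELINE_S.items():
    new_s = measure(op)
    ok = new_s <= old_s * 1.05
    print(f"{op:15s} old {old_s:.3f}s  new {new_s:.3f}s  {'PASS' if ok else 'FAIL'}")
```

In this invented run, “fetch record” fails the criterion, which is exactly the kind of result that would lead a team like Speedy's to hold shipment until performance recovered.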
Messenger's product is in the Early Adopter phase. Messenger had a small number of customers with money to pay for product development, because those customers had a huge need this product promised to fulfill. The product had to meet its promised delivery date, and it had to have the features the customers needed to prove to their senior management that this product was the right tool for the job. There were two overriding ship criteria: the product had to go to Beta in April and ship in July.
Table 3: Messenger's system test entry and product shipment criteria
| System Test entry criteria | Beta Criteria | Release Criteria |
| --- | --- | --- |
Messenger's product shipment criteria focused on getting to the ship date with reasonable software, but not particularly low-defect software. The features were concentrated in the performance and reliability areas, to make the customers successful when deploying applications.
Messenger's system test criteria were chosen as the minimum set that would allow system test to start, even if not all the features were complete (tokens). The Beta criteria were chosen to track progress on the release and to verify the minimum complete feature set. The ship criteria were selected solely on the minimum Messenger thought the customers would accept, given Messenger's aggressive schedule.
ExtendIt's product is in the Early Adopter phase. Since their market has not jelled, they have a number of different customers with different ideas of what is important. ExtendIt has a quarterly release cycle of small, incremental feature releases to accommodate those varying needs. They need to meet their quarterly release date: if they miss a release, the follow-on development project is stressed. However, not all the features need to be in each release. The product needs to be usable, but low defects is not a specific goal.
Table 4: ExtendIt System Test, Beta, and Release Criteria
| System Test entry criteria | Beta Criteria | Release Criteria |
| --- | --- | --- |
ExtendIt's system test entry criteria were chosen as the initial test of “Will this release fly?” As long as the necessary features were in and presumably working, ExtendIt was willing to start system test. The Beta criteria focused on getting some usable features to enough Beta customers. The Release criteria verified that the customers liked the features, and that the features were usable.
ViewIt needed to focus a new release on a small set of features with very low defects. Their customers were in the Late Majority, very conservative. Customers had previously bought other ViewIt systems, and were happy with those products. The customers needed increased accuracy and reliability, so they needed a new release of the system.
ViewIt's milestone criteria were focused on low defects. The project plan specified the few new features required. The specific ship date was not critical, but the release had to ship by July 1. The criteria in Table 5 are the result of focusing on low defects as the dominant release criterion.
Table 5: ViewIt System Test and Release Criteria
| System Test entry criteria | Release Criteria |
| --- | --- |
The system test and release criteria were very similar. Since the customers were risk averse, the development process built in substantial reviews and inspections. Once the software was accepted for system test, the project team focused on stabilizing or refining the product.
Negotiating Agreement on Criteria
Initially, the STPs draft the milestone criteria. When you develop milestone criteria, make sure they are measurable and reflect the needs of this specific project, this specific product release.
Once the initial criteria draft is ready, the entire project team needs to review the criteria and accept them as the project milestone criteria to work against. Acceptance means the entire project team judges the deliverables against the criteria at the specified milestones. When a team accepts the milestone criteria, they are willing to do what it takes to meet those criteria.
Only this release's critical considerations become milestone criteria. As you can see from the previous examples, the criteria frequently become more stringent at later milestones.
When discussing or negotiating criteria with the project team, it is critical to negotiate on the merits of each criterion [6], not on any one person's individual position. See Fisher et al.'s wonderful book [6] for more information about useful negotiating tactics. The negotiation results in measurable criteria against which to track the project's progress.
Assess the Product Against the Milestone Criteria
Milestone criteria must be measurable. By measurable, I mean that you need to be able to make binary decisions (yes or no) about whether the criteria are met. When you develop the milestone criteria, you can set up what to measure, and how often to measure those aspects of the product. I've been successful publishing these criteria in the agendas for each project team meeting, and assessing project progress against the criteria at every meeting.
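One lightweight way to keep the criteria binary is to encode each one as a named yes/no check over the current measurements, as in the Python sketch below. The criteria and numbers are hypothetical, for illustration only; the real list comes from the milestone criteria the project team negotiated and accepted.

```python
# Hypothetical criteria and measurements, for illustration only; the real
# list comes from the criteria the project team negotiated and accepted.
CRITERIA = [
    ("All priority-1 defects fixed",     lambda m: m["open_p1_defects"] == 0),
    ("Regression suite passes",          lambda m: m["regression_pass_rate"] == 1.0),
    ("Throughput at least last release", lambda m: m["tps"] >= m["previous_tps"]),
]

# This week's measurements, gathered before the project team meeting.
measurements = {
    "open_p1_defects": 2,
    "regression_pass_rate": 0.98,
    "tps": 410,
    "previous_tps": 400,
}

# Each criterion reduces to a binary yes/no answer at the meeting.
for name, is_met in CRITERIA:
    print(f'{name:35s} {"yes" if is_met(measurements) else "no"}')
```

Printing this checklist in each project team meeting agenda makes the project's state against the criteria visible to everyone, week after week.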
It is not enough just to have measurable milestone criteria and to measure the project's progress toward meeting them. Even under the best of conditions, something may happen to prevent the project from making adequate progress. If you cannot meet the milestone criteria, be honest about it. Choose whether you will continue to assess against those criteria, or whether you need to change this release's goals. If you do need to change the goals of the release, make sure you narrow the goals; do not shift from, say, focusing on low defects to increasing the feature set.
Summary
When faced with aggressive schedules, insufficient staff, or incomplete knowledge of the project's feature set, define and manage the project's test priorities. I have had significant success meeting test schedules and getting the highest priority work accomplished when I used this technique:
- Define the test strategy.
- Define product quality and milestone criteria.
- Monitor the project's state against the milestone criteria.
Use this technique to get agreement on which testing work needs to be done, and when, and to know when the work has been accomplished.
Acknowledgements
I thank the following reviewers for their substantive and helpful suggestions for this paper: James Bullock, Benson Margulies, Jerry Weinberg.
References
- Crosby, Philip. Quality Is Free. New York: McGraw-Hill, 1980.
- Bach, James. “Good Enough Software: Beyond the Buzzword.” IEEE Computer (Software Realities column), August 1997.
- Grady, Robert. Practical Software Metrics for Project Management and Process Improvement. Englewood Cliffs, NJ: Prentice Hall, 1992.
- Kaner, Cem, et al. Testing Computer Software, second edition. New York: Van Nostrand Reinhold, 1993.
- Moore, Geoffrey. Inside the Tornado. New York: HarperCollins, 1995.
- Fisher, Roger, William Ury, and Bruce Patton. Getting to Yes, second edition. New York: Penguin Books, 1992.
- Rothman, Johanna. “Defining and Managing Project Focus.” American Programmer, February 1998.
- Weinberg, Gerald M. Quality Software Management, vol. 1. New York: Dorset House Publishing, 1992.