Defining and Managing Test Priorities for COTS Software

© 2000 Johanna Rothman. This paper first appeared in Software Quality Professional, Volume 2, #3, June 2000.

INTRODUCTION

Software publishers create commercial off-the-shelf (COTS) software when they think there is sufficient demand for a commodity-type product. By avoiding custom software development, these publishers can create an economy of scale, increasing the likelihood of profitability.

Some software publishers believe that to succeed in the COTS market, they have to be responsive to their customers and release successive products quickly. This responsiveness and push to release creates several problems for the development and testing staff:

  • Responsiveness may cause the requirements to change rapidly and frequently, even near the end of a project. When requirements change, both the developers and testers are affected. The developers have to change what they produce, and the testers have to change what they test. Both of those changes can cause project delays.
  • The push to release quickly means that the testers and test manager may not have discussed the product quality requirements–what is truly good enough to ship–with the rest of the project team or with senior management. Without those conversations, one person, generally the test manager, is assigned the role of quality keeper for this project, a classic testing mistake (Marick 1997).

One way to manage the rapid changes from responsiveness and the lack of agreement on what is acceptable to ship is to develop criteria for allowable project changes and criteria for product release. Defining what is acceptable to change and what is acceptable to ship can help testers and test managers avoid many of the headaches, fights, and us-vs.-them discussions at the end of the project.

DEVELOP CRITERIA TO ASSESS REQUIREMENTS CHANGES

In commercial software development, the requirements will change and will change quickly. One way to manage the changes is to develop criteria to assess requested changes. Two questions are especially helpful here:

  • “What happens if we ship with these changes?”
  • “What happens if we don’t ship with these changes?”

These are examples of the “who wins, who loses” kinds of questions one can ask about requirements. (Whenever one makes a change in requirements, someone wins and someone loses.) Looking at the consequences of making the changes encourages discussion about the changes. Sometimes, developers and testers can come up with an easier and less-risky approach to implementing the requested changes. Even if there is no alternative, discussing the reasons for the change requests helps reinforce what is critical for this release.

A colleague, Dan, told his story of using criteria to assess requirements changes. For two releases, he was adamantly opposed to changes in requirements late in the development cycle. Management overruled him. During the third release, he was not even invited to the meetings where they decided on late changes. Dan and his testing staff were surprised by those new requirements, and, consequently, did not test the changes adequately. After the third release was shipped, some customers were upset with the bugginess of the software. The customers complained to senior management. Senior management came to Dan and said, “Why did you let us ship this buggy product?” Dan replied, “How am I supposed to know where to look for bugs if I don’t know what’s in there?” Senior management realized that keeping Dan out of the meetings was not helpful.

Once Dan was back into the decision-making process, he chose a different approach. Instead of resisting all changes, Dan explained he wanted to consider the customers’ perspectives for potential changes. Dan suggested they ask about the consequences of shipping with and without the changes, instead of just discussing how to insert the changes. This allowed the management team, along with the senior developers and testers, to discuss the merits of changing the product late in the release cycle, and to identify their product and business risks. Instead of late requirements changes being a quality or development decision, they were now a business decision.

In the same way, knowing when a product is ready to release is a business decision. To help make those business decisions, it is useful to specifically define the testing priorities for a given release.

DEVELOP TESTING PRIORITIES

Define testing priorities by developing a test strategy and then defining product quality and milestone criteria. Once these criteria are established, the project team can assess the product against them whenever it chooses.

Develop a Test Strategy

The first step to defining and managing the testing priorities is to define a test strategy. The strategy helps one decide what must get done and when. The strategy addresses the following concerns; examples from three organizations (their names have been changed) illustrate them.

  • What will and will not be tested, and how. Will some functionality go untested? How will exploratory testing be used? Will existing regression tests be run, or will new regression tests be created? Where and when will automation be used? Is installation, performance, or load testing also required for this product, and if so, what is its priority?

    SmartStore’s test manager realized that the test group could not accomplish all the testing it wanted to perform. SmartStore’s flagship product was a fast storage and access product, and performance was its market differentiator: the reason customers bought the product. The test manager defined a test strategy that identified which specific functionality would be tested only by implication; that is, not tested directly, but exercised incidentally as the testers tested other areas. In addition, the test manager prioritized the performance testing with the testers.

    SmartStore’s testers had specific plans for how to test performance and what kinds of performance to test. They knew the parameters expected to influence performance, especially the load on the network, the load on the server, and the size of the data set. The testers defined six test scenarios that represented the performance they were willing to test on an ongoing basis, and then created automated tests so those scenarios could run for every build (see the sketch after this list). Not all functionality was tested under load; they chose representative functionality to test under load, based on how SmartStore’s four largest customers used the product.

  • What priorities will be applied to the testing work? In what order will the features be tested? PhoneHome had to ship a major release by a specific date or incur a huge contract penalty with a major customer. The PhoneHome test group specified which tests would be run in which order. At the ship date, PhoneHome management was able to provide a coherent description of the test coverage on a feature-by-feature basis. The customer agreed not to use the untested features, and was willing to take shipment because its high-priority features had been tested. The other customers were willing to wait for the complete release, so the PhoneHome test group was able to continue testing the remaining features.
  • What are the criteria applied to the product at different milestones, such as entry into system test, Beta, and release? What does “ready” mean for this product at these milestones?
    A major finding at StockIt’s project retrospective was that different people in engineering had different ideas of what “ready” meant. Some developers thought the software was ready if it compiled. Some testers thought the product was ready when no more bugs could be found. Some managers thought it was ready when the ship date arrived. The project manager and the chief architect had their own, less one-sided definition of “ready.” Unfortunately, they had not explained their criteria for each milestone. Without agreement on “ready” at the different milestones, the organization was in continuous disagreement about the state of the product and its readiness for testing and shipment.

    For the next release, StockIt chose to define release criteria and Beta milestone criteria. Employees reviewed those criteria at each project team meeting and could then easily decide when the project was ready for Beta and ready for final release.
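To make the SmartStore example above concrete, here is a minimal sketch of how build-by-build performance scenarios might be automated. It is an illustration only: the scenario names, parameters, and workload stub are invented, and the article does not describe SmartStore’s actual harness.

```python
# Hypothetical sketch of SmartStore-style performance scenarios, run for
# every build and compared against the previous release's recorded times.
# Scenario names, parameters, and the workload stub are invented.
import json
import time

# Each scenario fixes the parameters the testers expected to influence
# performance: network load, server load, and data-set size.
SCENARIOS = {
    "light_load_small_data": {"net_load": 0.1, "server_load": 0.1, "rows": 10_000},
    "heavy_net_large_data":  {"net_load": 0.9, "server_load": 0.3, "rows": 1_000_000},
    # ...four more scenarios, drawn from the largest customers' usage
}

def run_workload(params):
    """Stand-in for driving the real storage-and-access product under the
    given loads; returns elapsed seconds."""
    start = time.perf_counter()
    # ...exercise the product here...
    return time.perf_counter() - start

def meets_performance_criterion(baseline_path):
    """Ship-style criterion: every scenario must be at least as fast as
    the previous release's recorded baseline times."""
    with open(baseline_path) as f:
        baseline = json.load(f)  # {scenario_name: seconds}
    ok = True
    for name, params in SCENARIOS.items():
        elapsed = run_workload(params)
        if elapsed > baseline.get(name, float("inf")):
            print(f"regression in {name}: {elapsed:.2f}s vs {baseline[name]:.2f}s")
            ok = False
    return ok
```

Recording one baseline file per release keeps the comparison binary: either every scenario is at least as fast as last time, or the criterion is not met.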

The test strategy describes explicit choices about what will be tested, in what priority, and how to know when the product is shippable. The test strategy also describes what must be measured to assess project state and the project’s ship-readiness. This article does not address one part of the first component of the test strategy: how to test functionality. For ideas on functionality testing, see (Kaner et al. 1993).

Define Product Quality

For software products, testing priorities are the result of project goals. Market forces make this even more obvious for COTS software. Every project has its own goals. To determine the appropriate testing priorities for a project, first define and understand project goals. The end goal in COTS projects is to make money for the company. (For some projects, the goal is not necessarily to make money. Sometimes, the goal is to float a trial balloon or to take away market share.) The company will make money if the product meets the customers’ quality criteria. Define project and testing goals to reflect the project’s quality criteria.

The three possible externally visible project goals for software projects are (Grady 1992):

  • Minimize time to market by delivering the release as soon as possible
  • Maximize customer satisfaction by delivering a specific set of features
  • Minimize number of known and shipped defects

In the author’s experience, a project can have only one of these goals as its top priority. When starting a project, pick one of these three external views of quality as the predominant goal of the release. There are other, internally visible project quality goals: reducing the cost of development, reducing the number of people required, or reducing the tools needed to do the work. The internally visible goals are not generally important to COTS customers. They do not care how a business is run; they only care about the result. In addition, the steps taken to decrease the cost of development frequently increase the cost of quality: the organization simply cannot generate a product within the time, feature, and defect constraints if overall product development is not sufficiently funded. Reducing development cost without changing the time, feature, and defect constraints generally extends the development cycle, increases defects, or reduces product features (Abdel-Hamid and Madnick 1991).

The project manager and test manager then address the other two goals within that context. If the test manager accepts and understands this prioritization, he or she can actively decide to shape the testing process to achieve the goal. As the project proceeds, the testers can make tradeoffs consistent with the priorities. If the test manager or the project manager refuses to accept that only one of these goals can be paramount, the project is not likely to meet any of its goals.

Choose which aspect of quality is most valuable to customers. Then work with the project manager to make trade-off decisions about the other two goals within the context of the primary goal.

CHOOSE THE PRIMARY QUALITY GOAL

Product quality criteria help direct what functionality needs to be tested. The quality criteria are the definition of what customers want in a product–what defines the product as “good enough” to ship (Bach 1997).

One way to choose the right quality priorities for a project is to use Geoffrey Moore’s high-tech marketing model (1995) to understand what the market requires at this time. Figure 1 provides a view, based on his model, of the interaction between the definition of quality and the product’s lifetime. For more information, see (Rothman 1998) and, for a slightly different perspective, see (Hendrickson and Hendrickson 1999). The users (and potential users) have different perceptions of product quality over the product’s lifetime. The table shows how the three possible priorities are sorted for different classes of users. As a product evolves, it moves from one target user group to another, and (in the model) the priorities shift accordingly.

Figure 1: Quality Perceptions over Product Lifetime

| Product Life / Market Pressure | Introduction (Enthusiasts) | Early Adopters (Visionaries) | Mainstream (Pragmatists) | Late Majority (Conservatives) | Skeptics (Laggards) |
| --- | --- | --- | --- | --- | --- |
| Time to Market | High | High | Medium | Low | Low |
| Feature Set | Low | Medium | Low | Medium* | Medium |
| Low Defects | Medium | Low | High | High | High |

* Medium pressure for features does not mean pressure for new features. It means that the promised features must be in the product and must be working.

To define product quality, choose which priority is most valuable to the customers. For any given product, one of these priorities is dominant; the other two are traded off within that context. Some people are not easily convinced that there is only one top priority. Here is a check to verify that choice: If it is three weeks before the scheduled ship date, the defect count is higher than the developers prefer, and the last feature is not quite implemented, what does management choose to do? If management chooses to ship anyway, then time to market is the top priority. If management chooses to ship as soon as that feature is installed, then customer satisfaction through features is the top priority. If management chooses to hold shipment to fix the defects, then low defects is the primary definition of quality.
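This check can be restated as a small decision table; the choice wording below paraphrases the scenario above and is not from the article.

```python
# The "three weeks before the ship date" check as a decision table.
# The choice strings paraphrase the scenario described in the text.
DECISION_TO_GOAL = {
    "ship on the date anyway": "time to market",
    "ship as soon as the last feature is in": "customer satisfaction via features",
    "hold shipment until the defects are fixed": "low defects",
}

def primary_goal(management_choice):
    """What management actually does under schedule pressure reveals the
    release's real top-priority definition of quality."""
    return DECISION_TO_GOAL[management_choice]

print(primary_goal("ship on the date anyway"))  # -> time to market
```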

Since quality is value to someone (Weinberg 1992), one way to think about quality criteria is to ask what customers value most in this product: What will make them decide to buy or not buy this product? The customers’ primary buying choice–what they value, their definition of quality–defines quality for this release.

Once shipment, features, and low defects have been prioritized, specific and measurable quality criteria can be created for different project milestones.

DEFINE MILESTONE CRITERIA

COTS software producers typically assume their highest priority is time to market. Although time to market is generally important, it is not always critical. One way to tell when time to market is truly critical is to define milestone criteria. Milestone criteria help the project team and senior management understand whether the product development team is meeting the quality criteria at specific times in the project. Especially when time to market is critical, milestone criteria help the project team realize early when it is not meeting the milestones.

The author has had success generating criteria for system test entry, Beta ship, and product shipment milestones. System test entry criteria may be very different from the ship criteria. Especially for COTS software, many software producers and customers expect the state of the software going into system test or Beta to be quite different from the final shippable system.

To define milestone criteria, start from the definition of what quality means for this project and determine what is critically important to this particular release. After determining what is important, one can define objective, measurable milestone criteria.
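For example, milestone criteria can be recorded as short statements, each paired with an objective, binary check. The criteria, thresholds, and dates below are invented for illustration; they are not any of the companies’ actual criteria.

```python
# Hypothetical milestone criteria expressed as measurable, binary checks.
# Every statement, threshold, and date here is an invented example.
from datetime import date

SYSTEM_TEST_ENTRY = [
    ("Candidate build passes all smoke tests",
     lambda m: m["smoke_failures"] == 0),
    ("All planned features code-complete",
     lambda m: m["features_incomplete"] == 0),
    ("System test entry date met",
     lambda m: m["today"] <= date(2000, 5, 1)),
]

RELEASE = [
    ("Zero open showstopper defects",
     lambda m: m["open_showstoppers"] == 0),
    ("Performance no worse than the previous release",
     lambda m: m["perf_ratio_to_prior"] <= 1.0),
    ("Ship date met",
     lambda m: m["today"] <= date(2000, 6, 30)),
]
```

Notice that the release checks are more stringent than the system test entry checks, a pattern discussed below.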

Negotiating Agreement on Criteria

Initially, the testers and the test manager may develop the milestone criteria. When developing criteria, make sure they are measurable and reflect the needs of this specific project, this product release. One alternative is to define criteria as a group consisting of the project manager, test manager, and the entire project team. If this is too many people, a cross-functional team including the project manager, test manager, and marketing manager can create the milestone criteria draft.

Once the initial criteria draft is ready, the entire project team reviews the milestone criteria and accepts them as the standard against which to test its work. Acceptance means the entire project team judges the deliverables against the criteria at the specific milestones. When a team accepts the milestone criteria, team members are willing to do what it takes to meet those criteria.

Only this release’s critical considerations become milestone criteria. Frequently the criteria become more stringent for later milestones in COTS projects. When discussing or negotiating criteria with the project team, negotiate on the merits of each criterion (Marick 1997), not on any one person’s position. See (Fisher, Ury, and Patton 1992) for more information about useful negotiating tactics. The negotiation results in measurable criteria for tracking the project’s progress.

Others have had success with gradations of milestone criteria (Black 1999); however, the author specifically creates binary, unambiguous milestone criteria, so it is always clear whether they are met: each criterion is either complete or not complete. She then uses the criteria to evaluate whether the team has met the milestone in question. If the project team has not met the milestone by the expected date, members decide whether to keep going until they meet the criteria, replan the project, renegotiate the criteria, or some combination of these. Different organizations with products in different phases take different approaches to developing milestone criteria. Following are three examples.

SmartStore

SmartStore’s product was in the early adopter phase. Time to market was a driving factor for its release cycles. Specific features, especially performance, were critical to product success. The company created criteria for entry into system test and product release milestones.

The criteria for starting system test are not as demanding as the criteria for product shipment. The system test criteria assume that, for the performance tests, the product going into system test is at least as good as the previous release. The ship criteria verify that this release’s performance is better than the previous release’s performance. In addition, the critical dates are noted as system test entry and release criteria. SmartStore was going to ship its product that quarter; the company knew it, and decided it was better to be clear about that fact than to be surprised.

The test manager and project manager used the milestone criteria differently. The project manager used the dates as a way to test the progress the developers said they had made, and as a way to identify risk in the project. The test manager used the dates to prioritize the test effort, and to identify areas of testing risk.

The project team defined these criteria by thinking about what was important to SmartStore and its customers. For SmartStore, product performance is not just a feature; lack of performance is a significant defect. Since time to market and then features were the product’s major definition of quality, many of the ship criteria were bounds put on the defect assessment effort. Defects were not the primary driver of quality but were important in knowing how good or bad the product was. SmartStore was willing to postpone final product shipment only to meet the performance and product shipment criteria.

PhoneHome

Contrast SmartStore’s criteria with PhoneHome’s, another company with a product in the early adopter phase. Time to market was a driving factor, in addition to a particular set of features. If PhoneHome did not make its market window, it would lose the market. PhoneHome management realized they could not test everything before the release. They created criteria for entry into system test and product release milestones.

Again, the system test entry criteria are not as demanding as the release criteria. The system test entry criteria assume that the product going into system test is at least as good as the previous release, and has specific additional functionality (module D). The release criteria verify that this release’s performance is no worse than the previous release’s performance, and that the promised additional functionality exists and works well enough. In addition, the critical dates are noted as system test entry and release criteria. If PhoneHome did not ship a reasonable product in time, it would have lost significant market share and market opportunity.

Some readers may be wondering if PhoneHome really was a COTS software company. This particular release was for a specific small set of customers. If the company succeeded with this set of customers, it would have the customer references to succeed in the larger marketplace. The product is now a commercial product, sold all over the world. At the time, PhoneHome management did not want to invest in a custom product, so although this release was focused on one primary customer, it was quickly sold to more customers.

StockIt

StockIt was firmly in the mainstream of its primary market and was working to acquire new markets with an extension to the software. StockIt’s quality drivers were: 1) low defects, so it would not alienate its current customer base; 2) time to market with some new functionality, to increase the customer base; and 3) more features. StockIt engineering management decided they needed more milestone criteria to understand project status and know when they were done, so they used feature freeze, entry into system test, Beta, and release.

StockIt’s milestones and criteria were quite different from SmartStore’s and PhoneHome’s, because it was working to retain and attract different kinds of customers.

ASSESS THE PRODUCT AGAINST THE MILESTONE CRITERIA

When creating milestone criteria, make them measurable, so the project team can periodically assess the product against them. Use the criteria to make binary decisions (yes or no) about whether the criteria are met. When developing the milestone criteria, set up the measuring system to obtain the measurements. One technique the author has used is publishing the criteria in the agenda for each project team meeting and assessing project progress against them at every meeting. Since the criteria are aggregate measures, no one feels singled out by the measurements. Once there are measurable criteria for the different milestones, it is easy to measure the project against them.
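Here is a minimal sketch of that meeting-time assessment, assuming the criteria are recorded as binary checks (as sketched earlier) and the aggregate measures are gathered before the meeting; all names and values below are illustrative.

```python
# Assessing the project against binary milestone criteria at a team
# meeting. Criteria names and measurement values are invented examples.
criteria = {
    "Zero open showstopper defects":
        lambda m: m["open_showstoppers"] == 0,
    "All planned regression tests run":
        lambda m: m["regressions_run"] >= m["regressions_planned"],
    "Performance no worse than the previous release":
        lambda m: m["perf_ratio_to_prior"] <= 1.0,
}

# Aggregate measures gathered before the meeting (illustrative values).
measurements = {
    "open_showstoppers": 2,
    "regressions_run": 480,
    "regressions_planned": 500,
    "perf_ratio_to_prior": 0.97,
}

results = {name: check(measurements) for name, check in criteria.items()}
for name, met in results.items():
    print(f"[{'yes' if met else 'no '}] {name}")
print("Milestone met:", all(results.values()))  # binary: all checks must pass
```

Because the report is per criterion rather than per person, it supports the aggregate, no-one-singled-out style of review described above.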

It may not be possible to meet all the criteria in every project. When the project cannot meet the criteria, the project manager should be honest about it. The choices are then to change the criteria or to replan the project to meet them. If time to market is critical to the project, either replan the project or work with the project team to reduce the criteria, and make the effects of reducing the criteria clear to the rest of the company.

SUMMARY

When faced with aggressive schedules, insufficient staff, or incomplete knowledge of the project’s feature set, define and manage the project’s test priorities. Define the test strategy, define product quality and milestone criteria, and then assess the product against the milestone criteria to get the highest-priority work accomplished.

ACKNOWLEDGMENTS

I thank the following reviewers for their substantive and helpful suggestions for this article: James Bullock, Benson Margulies, and Jerry Weinberg. I also thank the SQP reviewers.

REFERENCES

Abdel-Hamid, T., and S. Madnick. 1991. Software project dynamics. Englewood Cliffs, N. J.: Prentice Hall.

Bach, J. 1997. Good enough software: Beyond the buzzword. IEEE Computer (August).

Black, R. 1999. Communication on sw-test-discuss e-mail list.

Fisher, R., W. Ury, and B. Patton. 1992. Getting to yes, second edition. New York: Penguin Books.

Grady, R. 1992. Practical software metrics for project management and process improvement. Englewood Cliffs, N. J.: Prentice Hall.

Hendrickson, K., and E. Hendrickson. 1999. Quality along the lifecycle. In Proceedings of ASM/SM ’99, San Jose, Calif.

Kaner, C., J. Falk, and H. Q. Nguyen. 1993. Testing computer software, second edition. New York: Van Nostrand Reinhold.

Marick, B. 1997. Classic testing mistakes. STAR.

Moore, G. 1995. Inside the tornado. New York: Harper Collins.

Rothman, J. 1998. Defining and managing project focus. American Programmer (February): 19.

Weinberg, G. M. 1992. Quality software management, vol. 1. New York: Dorset House Publishing.
