Quality Driven Project Management
© 1997 Johanna Rothman.
Product management is concerned with implementing new features. Developers are concerned with improving the existing features. How does a project manager bring these concerns together in a way that ensures both product direction and product robustness are addressed?
One way to reconcile competing concerns about new and existing features is to negotiate the product shipment criteria early in the project. These criteria take into account defects, schedule, features, and where the product is in its lifecycle. Product shipment criteria can drive quality activities into the project and significantly reduce overall frustration, enabling everyone to work toward a common goal.
This paper will address:
- A model of product quality definition
- Examples of “ship” criteria and what to measure
- How to present data to management for maximum effect
Traditionally, product shipment decisions were made based on how the software product “felt” to the tester or developer. After running the product for some period of time, the developer or tester would pronounce the product fit or unfit for shipment.
Many organizations now recognize that decisions based on “gut feel” are insufficient (1). These gut feel decisions are insufficient because they do not address the customers’ concerns. These organizations have come to this realization from any number of different perspectives. We will focus on project completion measurements. It is possible to define specific measurements for project completion.
Product Quality Model
In order to define and assess project completion risks, the true measure of product quality must be defined first. Grady (2) contends that quality is defined as choosing one of the three priorities below:
- Minimize time to market (decrease engineering cost and schedule)
- Maximize customer satisfaction (determining features and working with your customers)
- Minimize defects (decrease number of known and shipped product defects)
The key to defining quality is to choose which priority is most valuable to your customers (4). For any given product, one of these priorities is the foremost priority; the other two are optimized within that context.
Some people are not convinced that they have only one top priority. Here is a check to verify the choice: it is three weeks before the scheduled ship date, the defect count is higher than the developers would like, and the last feature is not quite implemented. What does management choose to do? If management chooses to ship anyway, then time to market is the definition of quality. If management chooses to ship as soon as that feature is installed, then customer satisfaction is the top priority. If management chooses to hold shipment in order to fix the defects, then low defects is the definition of quality. As much as your customers force you into a particular quality definition, where the product is in its lifecycle also affects that definition.
See Table 1 for a view of the interaction between the definition of quality and the product lifecycle. This is an application of Geoffrey Moore’s (3) marketing model to software quality.
Table 1: Quality Perceptions over Product Lifetime
| Product Life / Market Pressure | Enthusiasts (Technology Nuts) | Early Adopters (Visionaries) | Mainstream (Pragmatists) | Late Majority (Conservatives) | Laggards |
|---|---|---|---|---|---|
| Early Ship | High | High | Medium | Low | Low |
| Features | Low | Medium | Low | Medium* | Medium |
| Low Defects | Medium | Low | High | High | High |

\* Medium pressure for features does not mean pressure for new features. It means that the promised features must be in the product and must be working.
Enthusiasts or technology nuts buy initial products. These people like the thrill of playing with new technology. They want to get their hands on it and play with it (High in the Early Ship dimension). As long as the product does something very well, they will be happy (Low in the Features dimension). As long as the defects do not get in the way of using the product in a narrow sense, the enthusiast is happy (Medium in the Low Defects dimension).
Visionaries have a specific problem and their pain can be fixed by this product. They need it right away (High in the Early Ship) and since they plan to solve their problem, they have a number of demands for features (Medium on the Features dimension). The overall defect count is not particularly an issue, as long as the features they need the most work correctly (Low in the Low Defects Dimension). Visionaries willingly accept workarounds for defects or incomplete functionality.
Mainstream users are pragmatists. If they are going to buy a product, it better solve their entire problem in a robust manner (High on the Low Defects dimension). If you announce a new release, they will wait for you to ship it (Medium on the Early Ship dimension), but not too long. Since these people are pragmatic, they are not looking for any features above and beyond what you have implemented and advertised (Low in the Features Dimension).
The Conservative users wait for products to live up to their advertised reputations. They look for low defect levels to improve their success in using the product (High on the Low Defect dimension). They want the features working as you have promised, but additional features are not high on their list. They don't particularly care when you ship the next release; they may not even upgrade to it (Low on the Early Ship dimension).
The Laggards are not interested in software that has defects (High on the Low Defect dimension). In fact, the only reason they might buy your software is to get a particular feature they cannot get anywhere else (Medium on the Features dimension). Since Laggards are so concerned with defects, they are willing to wait for product shipment (Low on the Early Ship dimension). These people want software producers to guarantee there will be significantly fewer defects than in previous software releases.
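The segment-by-segment perceptions above amount to a small lookup table, which can be handy when sanity-checking a proposed quality priority against the product's current market segment. A minimal sketch in Python (segment and dimension names are taken from the discussion above; the code itself is illustrative, not part of the original model):

```python
# Quality pressure by market segment, as described in the text above.
# Each entry records how strongly that segment pushes on a dimension.
QUALITY_PRESSURE = {
    "Enthusiasts":   {"early_ship": "High",   "features": "Low",    "low_defects": "Medium"},
    "Visionaries":   {"early_ship": "High",   "features": "Medium", "low_defects": "Low"},
    "Pragmatists":   {"early_ship": "Medium", "features": "Low",    "low_defects": "High"},
    "Conservatives": {"early_ship": "Low",    "features": "Medium", "low_defects": "High"},
    "Laggards":      {"early_ship": "Low",    "features": "Medium", "low_defects": "High"},
}

def top_pressures(segment):
    """Return the dimensions this segment rates High: its dominant quality concerns."""
    return [dim for dim, level in QUALITY_PRESSURE[segment].items() if level == "High"]

print(top_pressures("Pragmatists"))  # ['low_defects']
```

If the chosen top priority for a release does not match the High entries for the product's current segment, that mismatch is worth raising with management before setting ship criteria.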
For too long, product development groups (software development, software test engineering, project management) have focused only on low defects as the primary measure of product quality. Today, however, it is no longer adequate to define quality based on defect levels. Ship-quality is truly the definition of what is “good enough” (5) for the customer to use.
Once the project manager and senior management have determined the top priority for quality in the project, they can address the criteria for judging whether the product is ready to ship to customers.
The following is an example of choosing appropriate Beta and ship criteria based on product quality. In this example, a middleware communications product was in the Early Adopter phase of its lifecycle: there were a small number of customers with money to pay for product development, because the customers had a huge need that this product promised to fulfill. The product had to be delivered when it was promised, and it had to have the features for the customer to prove to their senior management that this product was the right tool for the job. The following list is an example of Beta Shipment criteria for this middleware product:
Middleware communications product, Beta criteria
- All code must compile and build for all platforms.
- All developer tests must pass.
- All available tests for Beta customer (client side part of the product) must run and pass.
- All current bugs are entered into the bug-tracking system.
- First draft documentation is available and shippable to customers.
- The code is frozen.
- Technical support training plan is in place, and the people are in place.
- There are fewer than 36 open high priority bugs.
Note that the criteria are based around getting to a specific ship date, not necessarily reducing bugs or adding features. This particular organization started Beta as early as possible to maximize their ability to work with their customers on product features. Note that even when discussing the same product there is a contrast between the Beta criteria above and the following Product Shipment criteria:
Middleware communications product, Shipment Criteria
- All code must compile and build for all platforms.
- Zero high priority bugs.
- For all open bugs, documentation in release notes with workarounds.
- All planned SQA tests run, minimum 90% pass.
- Number of open bugs decreasing for last three weeks.
- All Beta site reports obtained and evaluation documented.
- Reliability criterion: Simulate 1 week of usage by sending a minimum of 200 messages of varying sizes to and from varying platforms with varying classes of service.
- Final draft documentation available, complete and submitted to <corporate organization>.
- A working demo runs on <previous release>.
- Verify that tokens reduce on-air time by 25% from <previous release>.
- At least two referenceable Beta sites. (Customers who were sufficiently happy with the product that they would agree to be contacted by potential customers.)
The middleware communications product shipment criteria focus on getting to the ship date with reasonable software, but not particularly low defect software. The features were concentrated in the performance and reliability areas in order to make the customers successful when they deployed their application.
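Criteria written in measurable terms, like the bug threshold above, can be checked mechanically each week rather than debated at the end. A sketch, assuming a hypothetical metrics snapshot pulled from the build and bug-tracking systems (the function and field names are invented for illustration):

```python
def beta_ready(metrics):
    """Evaluate the numerically checkable middleware Beta criteria.

    `metrics` is a hypothetical weekly snapshot; criteria such as
    "documentation is shippable" still need a human judgment call.
    """
    checks = {
        "all platforms build":     metrics["platforms_building"] == metrics["platforms_total"],
        "developer tests pass":    metrics["dev_tests_failed"] == 0,
        "high-priority bugs < 36": metrics["open_high_priority_bugs"] < 36,
    }
    failed = [name for name, ok in checks.items() if not ok]
    return len(failed) == 0, failed

ready, unmet = beta_ready({
    "platforms_building": 4,
    "platforms_total": 4,
    "dev_tests_failed": 0,
    "open_high_priority_bugs": 12,
})
print(ready, unmet)  # True []
```

The same structure works for the shipment criteria; only the thresholds change (for example, zero high priority bugs instead of fewer than 36).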
As another example, the following are the ship criteria for a machine vision product in its Mainstream phase.
Machine vision product, Shipment Criteria
- All system tests executed (>90% passed).
- Successful execution of any “Getting Started” sequence.
- Results of executed tests must be discussed with product management team.
- Successful generation of executable images for all appropriate platforms.
- Code is completely frozen.
- Documentation review is complete.
- There are zero showstopper bugs.
- There are fewer than 4 major bugs and fewer than 15 minor bugs.
These ship criteria are much more focused on defects as a primary definition of quality. The ship date is still important, otherwise the defect levels would be lower and system test pass results would be higher.
The project manager has now considered what quality means to the product. For this product release, the project manager has negotiated ship criteria with the necessary stakeholders. Now the project manager and the project team need to be able to measure the project against the criteria. This measurement gives both the project team and senior management information about the current state of the project.
Most of us are comfortable with defect data trends. However, you need to consider how much data to collect and how to show and evaluate the trends. Contrast Figures 1 and 2 below:
Figure 1: Initial Defect Trend Data
The first figure can be deceptive to people who are not used to looking at data about their projects. It is possible to think that, since the rate of finding new bugs was not going up, the product was sure to be ready for Beta soon. If you think this, you will be surprised to hear that the downward trend in the number of open bugs is the true indicator of Beta readiness.
Figure 2: Complete Defect Trend Data
Figure 2 shows the data collected for the entire project. (The data in Figure 1 is the first seven weeks of Figure 2.) New defects were still being found at a consistent rate until week 18 of the project. Although management had decided to ship an initial Beta version at about week 14, they wisely limited the number of Beta sites until they were ready for another Beta version.
This product was in transition from the Early Adopter to Mainstream phase of its life. Since the Mainstream users want the product as quickly as they can get it, with as few defects as possible, this phased Beta met the organization’s needs. The organization was able to give some customers early software. In exchange, the organization was able to continue to fix defects and had an opportunity to test those fixes with the customers.
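One of the middleware shipment criteria above ("number of open bugs decreasing for last three weeks") can be evaluated directly against the weekly counts behind a chart like Figure 2. A sketch, assuming one open-bug total per week (the counts below are illustrative, not the project's actual data):

```python
def open_bugs_decreasing(weekly_open_counts, weeks=3):
    """True if the open-bug count has strictly decreased over the last `weeks` weeks.

    Needs weeks + 1 data points to observe `weeks` week-over-week drops.
    """
    recent = weekly_open_counts[-(weeks + 1):]
    if len(recent) < weeks + 1:
        return False  # not enough history to judge the trend
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

# Illustrative counts only: open bugs climb while finding outpaces fixing,
# then fall once fixes land faster than new defects arrive.
counts = [10, 25, 40, 52, 55, 48, 41, 33]
print(open_bugs_decreasing(counts))  # True: 55 -> 48 -> 41 -> 33
```

A check like this gives the project team a yes/no answer each week instead of an argument over how a chart "looks."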
Aside from defect data, the system test data can be quite illuminating. Consider Figure 3.
Figure 3: Initial System Test Trend Data
Figure 3 shows about 90% of tests running, with about 80% of those passing. The interesting piece of data here is that the number of tests planned continues to rise. This was an indication of non-frozen features. (The test group kept finding out about new features that they then needed to write tests against.) In fact, the number of tests planned did not stop growing until seven weeks before the project shipped. Once the number of tests planned stopped changing (and the features stopped changing and increasing), the test group could make progress with the tests, and the developers could make progress with the defects.
Figure 4: Complete System Test Trend Data
From these examples, it is clear to see that if project managers only look at the data at the beginning of the project, they will have a distorted view of the project progress. At minimum, weekly views of the data are necessary.
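The system test criterion from the middleware example ("all planned SQA tests run, minimum 90% pass") reduces to two ratios that can be recomputed with each weekly view of the data. A sketch with illustrative numbers only:

```python
def system_test_status(planned, run, passed):
    """Return (run_ratio, pass_ratio, meets_criterion) for a weekly snapshot.

    The criterion requires every planned test to have run and at least
    90% of the run tests to pass; a growing `planned` count (as in
    Figure 3) pushes the run ratio back down each week.
    """
    run_ratio = run / planned
    pass_ratio = passed / run if run else 0.0
    meets = run == planned and pass_ratio >= 0.90
    return run_ratio, pass_ratio, meets

# Roughly Figure 3's situation: 90% of tests running, 80% of those passing.
print(system_test_status(planned=200, run=180, passed=144))
```

Tracked weekly, these ratios make the effect of unfrozen features visible: every newly discovered feature adds planned tests and drops the run ratio.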
Since this was a Mainstream product, the feature set was quite important to the project's and product's success. However, the continued growth in planned tests may be an indication that requirements were not being managed appropriately.
In addition to reviewing weekly trends, it is critical to explain the data to management. Some managers are not comfortable translating shipment criteria data into what that data means for the product.
Products evolve during their lifetime. The most effective product releases are those that further the company’s goals while meeting the customer needs. Defining product shipment criteria for a given project makes the company clarify current product and specific project goals. Once everyone has agreed on what the company needs from the product and what the customers need from the product, the quality criteria are clear.
Project managers must choose the correct quality criteria for their projects, define those criteria in terms of measurable goals, and then measure progress towards those goals throughout the project.
It is necessary to decide at the beginning of the project what quality is for the product, at its current stage in the market lifecycle. A project manager must use what is of value to customers to set shipment requirements including features and functionality; promised shipment dates; and product defect levels. Once the project manager has measurable shipment criteria, it is possible to measure progress to those goals on an ongoing basis.
1. Crosby, Philip. Quality is Free. Penguin Books, New York, 1980.
2. Grady, Robert. Practical Software Metrics for Project Management and Process Improvement. Prentice Hall, Englewood Cliffs, NJ, 1992.
3. Moore, Geoffrey. Inside the Tornado. Harper Collins, New York, 1995.
4. Weinberg, Gerald M. Quality Software Management, vol. 1. Dorset House Publishing, New York, 1992.
5. Yourdon, Ed. "Good Enough" Software. Guerrilla Programmer, April 1995.