Test Scenario Design for Software Products

Introduction

It is difficult to test software products, as evidenced by the problems we find in our software every day. The customer risks of not adequately testing software can be as minor as user inconvenience, or as major as loss of life. Not only are there customer (end-user) risks from inadequately tested software, there are also corporate risks. Especially for mature products, an inadequately tested product can have serious negative consequences for a company.

Organizations are actively working to improve their testing processes and increase the number of people devoted to product test. However, many organizations still rely on structured manual tests that exercise the product at its uppermost levels, the levels closest to the user. If you need to reduce the number of defects or meet stringent performance or reliability criteria, then user-level testing is not adequate. You must look at the product architecture and develop tests based on the major modules, the interactions between those modules, and the flow of data between modules.

It is possible to design and implement tests for software products that address the market requirements and are based on product architecture. If you are willing to define test scenarios, and are willing to write test code, you can design and implement appropriate product tests to meet your defect, performance, and reliability requirements.

This paper presents a disciplined approach to test scenario development, focusing on efforts to improve product quality. The topics for discussion are:

1. Product architecture and decomposition for functional testing

2. Performance and load testing design and measurement

3. Reliability testing and measurement

4. Installation testing

We will discuss black box and white box testing techniques in the context of each topic.

Product Quality

During a product’s life, there are different focal points for the customers’ perceptions of quality. Grady [Grady92] claims there are only three quality goals: minimize the schedule, maximize the features, and minimize product defects. Moore [Moore91] shows the different phases of a product’s lifecycle. Table 1 shows the interaction between product lifecycle and customer requirements for product quality.

Customer Desire for Quality    Product Introduction    Initial Product Acceptance    General Product Acceptance    Near Obsolescence
Minimize schedule              High                    High                          Medium                        Low
Maximize features              Low                     High                          Medium                        Low
Minimize defects               Medium                  Medium                        High                          High

Table 1: Interaction between Quality and Product Lifecycle

During a product introduction, the product must solve enough of the customers’ problems, but does not have to be fully featured. The product does have to get into its customers’ hands quickly, or a competitor will take away the market. Once there is some product acceptance, features become much more important, and time to market is still critical. Once the product has taken over its marketplace, customers are much more interested in having a well-running, predictable product, so a low defect count is critical. And if a product does not maintain a low level of defects, its “cash-cow” position in the market will be limited.

Given that customers see the product differently as the product matures, the product development processes should evolve as the product evolves. In the table above, there are no cases where all facets of quality have to be satisfied in any one release. There is only one case where there are two possible high priorities, and even then, a particular product can generally choose one of the high priorities as the top priority. So, the notion of “good-enough” software [Yourdon95], whether the good-enough is features, time to market, or defects, is highly useful throughout the product lifecycle. Software product testers should develop appropriate testing strategies for the different needs of the product’s lifecycle.

Product Architecture and Decomposition for Functional Testing

Product testing is most effective when the tester uses the product architecture to decide on the strategy and approach to the test effort. Using the product architecture to prioritize testing allows the tester to make the necessary tradeoffs among time to market, low defect levels, and multiple features.

A generalized product structure is shown in Figure 1. Unless the product is standalone, independent of any operating system, the network and operating system will be the underlying substrate of the product. Working up from the bottom of the picture, the application substrate is the enabling technology. The substrate may be physically small (measured in LOC or in function points), but it is the critical part of the application. Then comes the working core of the product: the parts of the product that do all of the main pieces of work. At the top is the Graphical User Interface, the GUI.

Figure 1: Generic Application Architecture

The GUI is at the top of the picture for a very specific reason: it is generally intended to restrict input to the application to legal input only. The GUI limits the permutations and combinations of ways to access the application. And if all of the legal paths through the code could be generated via the GUI, then GUI-based testing would be a sufficient methodology for test development.

However, even with a GUI, there are generally alternative paths into the application core. There may be an external API or a command line interface that also accesses the application. You must know about those code paths to determine which paths product test should exercise. Once you understand the paths into the application, you can choose which paths to focus testing on. If you do not analyze the application and inadvertently miss code paths, you cannot know whether or not you have “good-enough” software.

To reduce the chances of missing those critical paths, additional functional testing beyond GUI-based testing is required. Code-based functional testing in these areas is a place to start, to reduce the risk of insufficient functional testing (a test sketch follows the list):

1. At the boundaries between the application core and substrate; and between the application substrate and the operating system/network.
2. Complete code-path testing of the application substrate.
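
As an illustration, here is a minimal sketch, in Python, of a code-based test at the boundary between the application core and the substrate. The module and function names (app_substrate, parse_record) are hypothetical stand-ins for your product’s actual interfaces.

    import unittest

    # Hypothetical substrate interface; substitute your product's actual module.
    from app_substrate import parse_record

    class SubstrateBoundaryTest(unittest.TestCase):
        def test_legal_input(self):
            # Input the GUI would normally produce.
            record = parse_record("name=smith;id=42")
            self.assertEqual(record.id, 42)

        def test_illegal_input_rejected(self):
            # Input the GUI would block, but an API or command line can deliver.
            with self.assertRaises(ValueError):
                parse_record("name=smith;id=not-a-number")

    if __name__ == "__main__":
        unittest.main()

Because the test calls the substrate directly, it exercises exactly the code path you chose, whether or not the GUI can reach it.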

Functional Testing Summary

You need to decide how many of what kinds of defects are acceptable, and then assess the risks of not finding the unacceptable defects. White box testing, especially in the form of unit tests at the application substrate and core level, will dramatically reduce the number of bugs missed by GUI-based testing. If bug levels are not critical to product quality and the ability to do follow-on releases, then black box tests, even just GUI-based tests, may be sufficient. It is crucial to assess the risks of your test investment for future releases.

Performance Testing Design and Measurement

Performance testing is an area that frequently gets short shrift from strictly GUI-based testing techniques. Performance is measured in two distinct ways: load, which is the volume of tasks to be performed; and stress, which is peak bursts of activity. Some examples of load conditions:

Telephone switch calls handled on an ongoing basis

Compiler handling huge text files, multiple lines per logical line

Some examples of stress conditions:

Anything that can induce a race condition

Telephone switch busy hour

Interactive program accepting input

The purpose of performance testing is to provide a systematic determination of the load and stress capabilities of the system. These scenarios can demonstrate that product performance meets its requirements. This requires that someone understand the product sufficiently to define performance requirements. “Be fast” is not a performance requirement: it is not measurable! A specific improvement over a previous release or a competitor’s product is measurable, and is something to design test scenarios around.
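
As a sketch of what a measurable requirement looks like in test code, the following Python fragment drives a hypothetical operation (handle_request) under load and checks the result against an assumed throughput target; the names and the target are placeholders for your product’s own.

    import time

    # Hypothetical unit of work; substitute a real product operation.
    from app_core import handle_request

    REQUESTS = 1000
    REQUIRED_PER_SEC = 200  # assumed target, e.g. "20% faster than release 2.0"

    start = time.perf_counter()
    for i in range(REQUESTS):
        handle_request("sample-input-%d" % i)
    elapsed = time.perf_counter() - start

    rate = REQUESTS / elapsed
    print("Handled %.0f requests/sec" % rate)
    assert rate >= REQUIRED_PER_SEC, "performance requirement not met"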

Design of test scenarios for performance must take into account how the user actually uses the system, and how the user should use the system. Measurement of those scenarios should count the resources used by the tests, in addition to the outcome of the test. In fact, for stress conditions, you may be able to graph degrading performance as each input is added to the load.

A code profiler is a useful tool for finding the internal loops and seeing the percentage of time spent at each loop level. Black box testing is not very useful in performance testing, except as a way to generate the kinds of scenarios a user might create. Once the general scenarios are defined, white box techniques are most useful in setting up exactly the situation(s) you want to model.
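
For example, Python’s standard cProfile module shows where the time goes; here the profiled workload (handle_request) is a hypothetical stand-in for a product operation.

    import cProfile
    import pstats

    from app_core import handle_request  # hypothetical workload

    # Profile a representative scenario and save the raw timing data.
    cProfile.run('for i in range(1000): handle_request("input")', "perf.prof")

    # Print the ten most expensive calls, sorted by cumulative time.
    pstats.Stats("perf.prof").sort_stats("cumulative").print_stats(10)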

Measuring product performance, assuming that performance is crucial to the market success of your product, is an ongoing process. As you plan the experiments, decide whether the performance runs will be measured per build, at specific times, over releases, or some combination of these. When designing these performance experiments, also consider your data integrity. It is most useful to have precisely the same data run after run, so that your results are dependable (a sketch of one approach follows).
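
One way to get precisely the same data run after run is to generate it from a fixed random seed; a minimal Python sketch:

    import random

    SEED = 1996  # fixed seed: identical data every run, so runs are comparable

    def make_dataset(n=10000):
        rng = random.Random(SEED)
        return [rng.randint(0, 1000000) for _ in range(n)]

    # Every build's performance run sees exactly the same inputs.
    assert make_dataset()[:5] == make_dataset()[:5]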

Reliability Testing and Measurement

In addition to performance testing, reliability testing will give you a systematic determination of the product’s stability: the ability to produce the same results time and time again, over a long period of time. Some examples of reliability:

System availability: Ability of system to stay up for some number of days/weeks/years without restarting or rebooting

Maintenance of data integrity, especially in a database-type system, over time

Reliability, just like performance, needs specific and measurable requirements, so that scenarios can be designed to verify that the product meets its reliability requirements. Just “being reliable” is insufficient: data lost or yearly downtime are measurements that the company can then assess.

Reliability measurements need to be planned in a similar way to performance measurements. It may be possible to use GUI-based black box tests for some parts of reliability testing, such as system availability under normal use. However, if you want to simulate product use and accelerate it, white box tests will have to be developed, as in the sketch below.
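
Here is a minimal sketch of such an accelerated white box test, assuming a hypothetical store/fetch persistence interface; it compresses a long period of simulated use into one run and checks data integrity on every cycle.

    # Hypothetical persistence interface; substitute your product's own API.
    from app_core import store, fetch

    CYCLES = 100000  # accelerated: months of simulated transactions in one run
    failures = 0

    for i in range(CYCLES):
        store("key-%d" % i, i)
        if fetch("key-%d" % i) != i:  # data integrity check on every cycle
            failures += 1

    print("%d integrity failures in %d cycles" % (failures, CYCLES))
    assert failures == 0, "reliability requirement not met"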

Your product architecture, whether client/server or distributed, will help you focus appropriate test scenarios to meet the measurement and product requirements for reliability.

Installation Testing

Product installation is the first thing your customers see. It is worth spending some time thinking about what your customers have to do, and how to test it. The major categories of product installation test (on all media) are:

1. New installation

2. Installation over previous version (previous release and beta)

3. Installation over incomplete version

4. Documentation: Verify install and uninstall instructions

5. Uninstall

For all installations, verify:

1. All components are installed.

2. Nothing extra is installed.

Organizations frequently miss installation over previous versions and incomplete versions. Especially if you cannot use a standard installer, you need to test the installation and uninstallation. A checklist that verifies the components are installed properly, and that the software can be started and run, is useful; a minimal automated check follows.
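
As a sketch, this Python fragment automates the “all components, nothing extra” check by comparing the install directory against an expected manifest; the file names and path are hypothetical.

    import os

    # Hypothetical manifest and install directory; substitute your product's own.
    EXPECTED = {"app.exe", "core.dll", "help.txt"}
    INSTALL_DIR = r"C:\Program Files\MyApp"

    installed = set(os.listdir(INSTALL_DIR))

    missing = EXPECTED - installed  # components that failed to install
    extra = installed - EXPECTED    # leftovers, e.g. from an incomplete version

    print("Missing:", sorted(missing))
    print("Extra:", sorted(extra))
    assert not missing and not extra, "installation check failed"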

Installation and uninstallation are not usually dependent on product architecture. The only exception is when the installation media is somehow involved in the running of the product.

Summary

This paper is a start at thinking about how to assess product quality risks, and how to design test scenarios to address those risks. There are tremendous opportunities in the functional test area to use the product architecture to guide product test development. There are additional opportunities in the performance and reliability testing areas to focus on what is really important to the product, not what is easy to test.

Software product test professionals must expand their product test perspective, from black box GUI-based testing, to the full range of black and white box code-based testing. We must assess the requirements for product quality, and take appropriate actions to implement the tests to verify that quality.

References

Grady92: Grady, Robert. Practical Software Metrics for Project Management and Process Improvement. Prentice Hall, Englewood Cliffs, NJ. 1992.

Moore91: Moore, Geoffrey. Crossing the Chasm. Harper Collins, New York. 1991.

Yourdon95: Yourdon, Edward. “Good Enough” Software, Guerrilla Programmer, April 1995.

Additional Testing References

Beizer, Boris, Software System Testing and Quality Assurance, Van Nostrand Reinhold, 1984.

Beizer, Boris, Software Testing Techniques, 2nd ed., Van Nostrand Reinhold, 1990.

DeMarco, Tom, Why Does Software Cost So Much? And Other Puzzles of the Information Age, Dorset House, 1995.

Hetzel, Bill, The Complete Guide to Software Testing, 2nd ed., Wiley-QED, 1988.

Kan, Stephen H., Metrics and Models in Software Quality Engineering, Addison Wesley, 1995.

Kaner, Cem, et al., Testing Computer Software, 2nd ed., Van Nostrand Reinhold, 1993.

Marick, Brian, The Craft of Software Testing, Prentice Hall, 1995.

Myers, Glenford J., The Art of Software Testing, John Wiley and Sons, 1979.

Perry, William E., How to Test Software Packages, John Wiley and Sons, 1986.

Royer, Thomas C., Software Testing Management: Life on the Critical Path, Prentice-Hall, 1993.

Weinberg, Gerald M., Quality Software Management, Vol. 1: Systems Thinking, Dorset House Publishing, 1992.


© 1996 Johanna Rothman.
