I’ve been at the STAR conference this week, and Cem Kaner’s keynote talk yesterday discussed the idea of extended random regression testing — take all your programmatic tests, and run them in random sequences for a long time. You’ll find defects you cannot find just running the tests by themselves. Here’s the logic behind this technique:
- The systems we develop and test today more often run to many thousands or millions of lines of code, rather than only single-digit thousands of lines.
- You can’t adequately (and note, that’s not fully, just adequately) test such a system with manual testing alone; you need programmatic testing to cover a large system adequately.
- Once you have created programmatic tests, you can string them together in long sequences and find problems that don’t show up when you run each test on its own.
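The loop behind the technique is simple: shuffle the test list, run the whole sequence, repeat until a time budget expires. Here is a minimal sketch in Python; the three test functions are hypothetical stand-ins for your real programmatic tests, and logging the seed is one way to make a failing random sequence replayable.

```python
import random
import time

# Hypothetical stand-ins for real programmatic tests.
def test_login():
    assert 1 + 1 == 2

def test_logout():
    assert "ok".upper() == "OK"

def test_search():
    assert sorted([3, 1, 2]) == [1, 2, 3]

TESTS = [test_login, test_logout, test_search]

def random_regression(tests, duration_seconds, seed=None):
    """Run the tests in random sequences until the time budget expires.

    Returns the number of individual test executions. Pass the same
    seed to replay the same sequence of shuffles after a failure.
    """
    rng = random.Random(seed)
    runs = 0
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        sequence = tests[:]
        rng.shuffle(sequence)
        for test in sequence:
            # A crash or failure here may be caused by state left
            # behind by earlier tests, not by this test alone.
            test()
            runs += 1
    return runs

if __name__ == "__main__":
    print(random_regression(TESTS, duration_seconds=1, seed=42))
```

Each test still passes in isolation; the value comes from the accumulated state (leaks, corrupted memory, exhausted resources) that long random sequences expose.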
I used this technique (unknowingly) when I tested library calls for an application many years ago. I was surprised at what I found. I expected to find functional errors in the library code. But I didn’t. Each library call was successful. But the program that ran the calls crashed. I had unknowingly exposed memory leaks and memory corruption (uninitialized counters and pointers), problems I had (naively) not expected to find. (You need testers who can write small programs to perform this kind of testing. If none of your testers have this capability, you may have second-class testers.)
I can’t imagine a large system that wouldn’t benefit from using this technique. (Please tell me the circumstances under which you think this technique is not useful.) Even if you’re working in a test-driven environment, developing tests before developing code, you can still benefit from this technique as your system grows.