QW2000 Paper 3T1

Mr. Robert Bauer & Mr. Russell F. Ingram
(Levetate Design Systems)

Building a Parallel Test Environment

Presentation Abstract

For the past two years, we have been developing and enhancing a test strategy called early regression testing. In this strategy, we provide development teams with automated tests that enable them to run complete regression testing prior to feature integration. At our organization, multiple development teams simultaneously enhance the core software product, so it is not uncommon to have several teams making concurrent changes to the same source base.

Prior to the introduction of early regression testing, development teams would run their own unit tests, integrate the various features, and turn the software over to the test teams. One problem with this approach is that when a failure is observed during testing, development has to figure out which team is responsible for fixing the problem. Often a problem is assigned to one team, only to be transferred to another when the first team determines that the problem is not in its code. Sometimes a problem is passed back and forth among several development teams.

Early regression testing, on the other hand, enables developers to identify and fix problems in their code prior to integration. The results of this approach were significant: more than 70% of all failures detected by the regression suites were discovered prior to feature integration. By shaving nearly two months off the schedule, we significantly reduced the delay of an already late product.

Early regression testing makes a lot of sense, but making it successful requires a powerful infrastructure:

(1) Tests must be automated - Development teams are under a lot of pressure. If locating, running, and analyzing tests takes significant effort, developers will devise their own tests. These unit tests help with whatever specific enhancement they are making; however, they will never be as comprehensive as the regression suite through which the software is certified.

(2) Results analysis must be automated - Automating the running of the tests and then leaving the test engineer with hundreds of pages of diff'd output to analyze borders on the inhumane. We developed a "rule-based" algorithm that analyzed the differences between the control and output files to determine whether the differences constituted failures or were acceptable (a sketch of this classification step appears after this list).

(3) Results must be presented in a useful form that makes it easy for the test engineer to do their job - Our initial release presented the test engineer with lists of tests: those that passed, those that failed, and those that did not run to completion. We then added an X Window System based user interface whose main feature was that, with a single button press, the test engineer could bring up a side-by-side display of the control and output files (similar to windiff). We provided a "button" so the engineer could sequence from one "error" to the next; the engineer sees the certified output in the control file and the actual output in the results file. For accepted differences, the tester also sees the "rule" used to determine that the difference was not a failure (a sketch of this result summary also appears after the list).

(4) Tests and control files must be maintained under revision control - Concurrent feature development teams need the ability to create and enhance feature tests independently, and these feature tests must then be merged into the overall regression suites. We often refer to the regression test accounting equation:

New Regression Suite = Old Regression Suite + New Features - Obsolete Features

Thus, as each feature team demonstrates that it can pass the Old Regression Suite (with the Obsolete Feature tests removed) together with its New Feature tests, we "merge" the various regression suites enhanced and modified by the feature test teams into a New Regression Suite that is initially used for integration testing (see the set-based sketch after this list). Later, after product release, the New Regression Suite is used for maintenance and emergency release testing. And as work begins on the next release, the New Regression Suite becomes the Old Regression Suite and the cycle repeats.
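
To illustrate requirement (2): the rule format of the original system is not described here, so the following is only a minimal sketch, in Python, assuming each rule is a regular expression that marks a class of differences (such as timestamps or run identifiers) as acceptable. The rule names and patterns are hypothetical.

    import difflib
    import re

    # Hypothetical rules: each names a class of "acceptable" difference, such as
    # timestamps or run identifiers that legitimately change from run to run.
    RULES = {
        "timestamp": re.compile(r"\b\d{2}:\d{2}:\d{2}\b"),
        "run-id": re.compile(r"\brun id \d+\b"),
    }

    def classify_differences(control_lines, output_lines):
        """Return (failures, accepted): accepted pairs each differing line with
        the name of the rule that excused it; unexplained differences are failures."""
        failures, accepted = [], []
        for line in difflib.unified_diff(control_lines, output_lines, lineterm=""):
            # Keep only the added/removed content lines of the diff.
            if not line.startswith(("+", "-")) or line.startswith(("+++", "---")):
                continue
            rule = next((name for name, pat in RULES.items() if pat.search(line)), None)
            if rule is not None:
                accepted.append((line, rule))
            else:
                failures.append(line)
        return failures, accepted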
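
For requirement (3), the sketch below shows one way the classified results could feed the passed/failed/did-not-complete lists and the "next error" sequencing; the record layout and names are our own illustration, not the original interface.

    from dataclasses import dataclass, field

    @dataclass
    class TestResult:
        name: str
        completed: bool
        failures: list = field(default_factory=list)   # unexplained differences
        accepted: list = field(default_factory=list)   # (difference, rule) pairs

    def summarize(results):
        """Split results into the three lists shown to the test engineer."""
        passed = [r.name for r in results if r.completed and not r.failures]
        failed = [r.name for r in results if r.completed and r.failures]
        incomplete = [r.name for r in results if not r.completed]
        return passed, failed, incomplete

    def review_sequence(result):
        """Differences the engineer steps through one button press at a time:
        failures first, then accepted differences annotated with their rule."""
        for line in result.failures:
            yield line, "FAILURE"
        for line, rule in result.accepted:
            yield line, "accepted by rule: " + rule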
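
Requirement (4)'s accounting equation can be read as set arithmetic over test names, as in this minimal sketch (all suite and test names are hypothetical):

    def merge_regression_suites(old_suite, new_feature_tests, obsolete_tests):
        """New Regression Suite = Old Regression Suite + New Features - Obsolete Features."""
        return (set(old_suite) - set(obsolete_tests)) | set(new_feature_tests)

    # Example: two feature teams contribute tests independently; their suites are
    # merged into the New Regression Suite used for integration testing.
    old = {"test_select", "test_insert", "test_legacy_sort"}
    team_a = {"test_new_index_scan", "test_index_stats"}
    team_b = {"test_parallel_load"}
    new_suite = merge_regression_suites(old, team_a | team_b, {"test_legacy_sort"})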

As work on the infrastructure for early regression testing proceeded, it became clear that a significant problem remained: as we automated away much of the tedious and error-prone manual effort, test execution time began to dominate the overall cycle time. At roughly the same time, next-generation hardware was being released that offered a significant improvement in database throughput capacity. Although we employed optimizations such as overlapping test execution with results comparison (sketched below), achieving a 33% reduction in cycle time, we knew that the next generation of database machines would be underutilized in the existing test environment.
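
The overlap optimization mentioned above can be pictured as a simple two-stage pipeline: while the next test executes, the previous test's output is compared in the background. This is only a sketch of the idea; run_test and compare are hypothetical stand-ins for the actual execution and comparison steps.

    from concurrent.futures import ThreadPoolExecutor

    def run_suite_overlapped(tests, run_test, compare):
        """Execute tests one after another while comparing each test's output
        against its control file in a background thread, so comparison no longer
        adds to the serial cycle time."""
        with ThreadPoolExecutor(max_workers=1) as pool:
            pending = []
            for test in tests:
                output = run_test(test)                             # execute the test
                pending.append(pool.submit(compare, test, output))  # compare concurrently
            return [p.result() for p in pending]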

About the Author

Robert T. Bauer was the technical lead and architect for the PTE. He has an MS in Computer Science. Robert is presently a principal engineer with Levetate Design Systems where, as a member of the formal reasoning group, he is responsible for developing a theorem prover for use in formal verification of software and hardware.

Russell F. Ingram is the manager of the development group that created the PTE. He also designed the architecture for the test lab and led the technical system administration efforts. Russell has a BS in Electrical Engineering and is presently involved in coordinating off-shore development as well as building a new corporate computing environment for NCR.
