The increasing cost and complexity of software development for enterprise electronic commerce and other network-based applications is leading software organizations to search for new and innovative ways to improve the quality of the software they develop and deliver.
However, the overall process is only as strong as its weakest link. That critical link, I would argue, is software quality engineering as an activity and as a process, and testing is the key instrument for making it happen. But what should testing measure in these new and emerging environments?
As conventional programs grow increasingly complex, the corresponding cost, development time, and defect rates climb. Based on current trends, it is reasonable to expect this problem to grow worse with the advent of even more complex, interactive, and interdependent programming and development environments for network-based information technology solutions.
Yet it also appears that the more intelligent and dynamic we try to make our computers and programs, the more they slow down and break. Conventional software development and testing practices and processes are increasingly strained, coming up short in the face of this growing and alarming complexity.
So what does a tester do in the face of all this complexity? A tester must take a destructive attitude toward the code and the associated system, knowing that the activity is, in the end, constructive. Testing is a negative activity conducted with the explicit intent of producing a stronger software product, and it is operatively focused on the "weak links" in the software. If a larger software quality engineering process is established to prevent and find errors, we can begin to change our collective mind-set about how to ensure the quality of the software we develop.
The other problem is that we never really have enough time to test, anyway. We need to change our conceptual understanding of the development and delivery environment, take the testing time we never seem to have, and apply it to the earlier phases of the software development life cycle. We need to think about testing on the first day we think about the system, rather than viewing testing as something that takes place after development, and focus instead on testing everything. This includes the concept of operations, the requirements and specifications, the design, the code, and, of course, the tests! But what are we actually testing, and how do we determine those weak links?
This presentation introduces a biologically inspired, model-based conceptual framework for network-centric testing. It involves an architecture that treats computers and software not strictly as engines for data processing and number crunching, but as a system of interactive, dynamic behavioral objects that are themselves part of a larger system.
This conceptual framework allows a range of behaviors, outcomes, and possible interactions of these application objects to be tested without the necessity of fully understanding them in advance! It permits testing the fundamental structure of the program, the application environment, and the executable functional mechanisms underneath, within a testing framework anchored in living systems theory. The result is an "inside out" approach: testing is based on the "genetic" makeup of the expected and anticipated dynamic "state" attributes and characteristics of the system, using its own behavioral specifications as the test instruments for locating and stimulating the "weak" links.
Mr. Drake is a software systems quality specialist and a management and information technology consultant for Integrated Computer Concepts, Inc. (ICCI) in the United States. He currently leads and manages the quality engineering initiative of a U.S. government agency-level Software Engineering Knowledge Based Center.
As part of an industry and government outreach/partnership program, he holds frequent seminars and tutorials covering code analysis, software metrics, OO analysis for C++ and Java, coding practice, testing, best current practices in software development, the business case for software engineering, software quality engineering practices and principles, quality and test architecture development and deployment, project management, organizational dynamics and change management, and the people side of information technology.
He is the principal author of the chapter "Metrics Used for Object-Oriented Software Quality" in the CRC Press Object Technology Handbook published in December 1998. In addition, Mr. Drake is the author of a theme article, "Measuring Software Quality: A Case Study," published in the November 1996 issue of IEEE Computer. He also wrote the lead, front-page article "Testing Software Based Systems: The Final Frontier," published in late 1999 in Software Tech News by the U.S. Department of Defense Data & Analysis Center for Software (DACS).
Mr. Drake is listed in the International Who's Who for Information Technology for 1999, is a member of the IEEE, and is an affiliate member of the IEEE Computer Society. He is also a Certified Software Test Engineer (CSTE), certified by the Quality Assurance Institute (QAI). He considers himself a quality advocate and a software archaeologist.