Doomed from the start

The importance of developing a sound test plan

 

Introduction

This investigative paper was inspired by a software development enhancement project that seemed to be going well, then suddenly appeared doomed during the testing phase. It was important to the Project Manager, who had been an Information Technology Project Manager for less than one year, to explore what caused the decline of a project that seemed on time and within budget. Through a process of evaluating all the project documents, as well as interviewing the Project Sponsor and product users, the Project Manager discovered what caused this project to take a turn for the worse: defects. These defects could have been avoided, or at least minimized, if a test plan had been created and executed. Not surprisingly, she was not alone in dealing with testing mishaps.

To provide the reader with insight into the value of developing a test plan, this Project Manager will share the experience that prompted this investigation. Examples of other projects with oversights in their test plans are included to assure the reader that this is not an isolated occurrence. Using best practices from various resources, she has pulled together the components of a successful test plan. This article presents the components that make up this framework, which serves as a guide to writing a test plan. While every testing effort may be unique, most test plans share a common content framework. The author also provides recommendations that promote project-testing success. A good test plan is the cornerstone of a successful testing implementation, and there is much to be learned from best practices, for both the novice and the expert in the specialty of software testing (Connolly 2000).

The Doomed Project

In 1998 a project request was made by an internal client for a new application to automate and streamline its business processes. The project team formed to address this request was very small: a Project Sponsor, a developer, and the end-user. Due to the team's limited resources, little documentation was created. When a Project Manager was assigned to this team almost two years later, she found that the project was ready for release and a request was made to begin enhancement work. During several meetings with the client to define requirements for the system enhancements and to wrap up the original project request, the Project Sponsor stated that extensive user acceptance testing had taken place over the past six months. Due to the desire to continue development, approval was granted to release the product and move into the close phase, so that enhancement development work could begin. Understanding that the initial product release was tested and formally approved, the Project Manager believed that a base on which to continue development and build enhancements existed.

The enhancement project was set on a crash course from the day it started. Once testing began for the new developments, it was obvious that the initial product that rolled out into a production environment had never been thoroughly tested. Parts of the application had not even been developed. Due to the instability of the first product released, the enhancement project had no chance of being a success. No matter how clearly requirements were defined, how detailed the test plan, or how complete the project documentation, the enhancement work was destined to fail. This project would fail because the original project had not used a comprehensive testing process. There was no solid test plan.

Not Alone

Considering that this Project Manager had limited exposure to software development projects, it was easy for her to believe this failure was due to her inexperience. Research shows, however, that inexperience is not the only cause of this type of problem. Even those with plenty of experience at times forget how to plan, assuming the team knows how to work together (Connolly 2000). The Project Manager found many articles referencing projects that failed due to poor test planning and execution. Three examples from The Software QA and Testing Resource Center (Hower 2001) follow:

1. News reports of September 2000 told of a software vendor settling a lawsuit with a large mortgage lender. The vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and did not work.

2. In early 2000, major problems were reported with a new computer system in a large suburban U.S. school district with more than 100,000 students. Some problems included 10,000 erroneous report cards and failed class registration systems that left many students stranded. The school district decided to fire its CIO and to reinstate its original 25-year-old system for at least a year until the defects were worked out of the new system by the software vendor.

3. Software defects in a software product supporting a large commercial high-speed data network affected 70,000 business customers over a period of eight days in August 1999. Among those affected was an electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of outages.

The Value of Establishing a Plan

From the examples above, it is clear that the ramifications of releasing a product without thorough testing can be costly. From an economic point of view, the level of testing appropriate to a particular organization and software application depends on the potential consequences of undetected defects (Connolly 2000). Such consequences can range from the minor inconvenience of having to find a work-around for a defect, to the death of an employee (Software Testing Institute 2000). Often overlooked by software developers, but not by customers or company leaders, is the long-term damage to the credibility of an organization that delivers defective software to its users, damage that undermines future business. Conversely, a reputation for reliable software helps an organization obtain future business. According to C. Michael Viviano, President and CEO of BNY Clearing Services, LLC, providing world-class service means meeting the needs of customers by aligning products and services with their requirements; to do this with consistent predictability, developing competence in the art of project management will be of increasing importance in the new millennium (personal communication, April 4, 2001). The development and execution of a test plan is one component of a project management methodology.

Software testing is too complex not to use a formalized process. Well-planned test projects tend to cost less and are completed earlier than projects with incomplete test plans. These plans address testing at various stages of the development life cycle and with varying degrees of rigor. To provide ample time for testing, it is not unusual to spend approximately one-third of the total test effort on planning (Perry 2000). Test planning is time consuming, but that time reaps rewards during test execution and reporting.
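The one-third rule of thumb above lends itself to a quick back-of-the-envelope calculation. The following sketch is purely illustrative; the function name and the assumption of a perfectly even split across planning, execution, and evaluation are this author's simplifications, not part of Perry's methodology:

```python
def allocate_test_effort(total_hours: float) -> dict:
    """Split a testing budget evenly across the three activities
    Perry identifies: planning, execution, and evaluation."""
    share = total_hours / 3
    return {"planning": share, "execution": share, "evaluation": share}

# A 300-hour testing effort budgets roughly 100 hours per activity.
budget = allocate_test_effort(300)
```

In practice the split varies by project; the point is that planning deserves a budgeted share comparable to execution itself.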

Definition of a Test Plan

In order to define what a test plan is, one must first understand that a test plan is itself a project plan. A Guide to the Project Management Body of Knowledge (PMBOK® Guide) defines a project plan as a formal, approved document used to guide both project execution and project control. The primary uses of the project plan are to document planning assumptions and decisions, facilitate communication among stakeholders, and document approved scope, cost, and schedule baselines. A project plan may be summarized or detailed (PMBOK® Guide 2000). From this base definition, the subject of planning can be tied to testing. There are many published definitions of software testing; however, their authors all reach essentially the same conclusion. Software testing is the process of executing software in a controlled manner, in order to answer the question “Does the software behave as specified?” (IPL 1996). Perry (2000) states that the objective of a test plan is to describe all testing to be accomplished, together with the resources and schedule necessary for completion.

Planning for Testing Success

There are many existing software development methodologies available that include test plans. From reviewing several methodologies, this Project Manager found that the testing process could be broken into four phases: (1) defining the requirements, (2) planning the tests, (3) executing the tests, and (4) analyzing the results. It is difficult to isolate planning the testing effort, as planning does not occur in a vacuum. The development of a test plan can only truly begin once the requirements are clear (Connolly 2000). If testing began before developers started their work, many defects would be caught and corrected long before they became big problems (Hayes 1997).

Test planning is one of the most challenging aspects of testing. The following guidelines can help make working through the four phases more efficient (Perry 2000).

1. Start early. Even though all the details may not have been gathered, a great deal of planning can begin by working from the general toward the specific. By starting early, resource needs can be identified and planned for before those resources are committed to other project needs (Hayes 1997).

2. Keep the test plan flexible. Make it easy to add test cases, test data and so on. The test plan itself should be changeable, but subject to change control.

3. Frequently review the test plan. Other people's observations and input greatly facilitate achieving a comprehensive test plan (Marick 1997). The test plan should be subject to quality control just like any other project deliverable.

4. Keep the test plan concise and readable. The test plan does not need to be large and complicated; in fact, the more concise and readable it is, the more useful it is (Abbott 2001). Remember that the test plan is intended to be a communication document; the details should be kept in a separate reference document.

5. Calculate the planning effort. Count on roughly one-third of the testing effort being spent on each of the following activities: planning, execution, and evaluation (Perry 2000). Spend the time to complete the test plan; the better the test plan, the easier it is to execute the tests.

6. Communicate. Make extensive use of communication tools such as email, groupware, networked bug-tracking tools, and change management. Ensure that documentation is available and up to date, preferably electronic rather than paper (Perry 2000). It is vital to promote teamwork and cooperation (Marick 1997). When gathering requirements, use prototypes to clarify customers' expectations early in the development process (Hayes 1997).

These guidelines are valuable to keep in mind when developing a test plan. The test plan is as simple or as complex as the project requires and should contain four major sections: an introduction, the requirements, the test approach, and the approval.
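The four-section framework described in this article can be sketched as a simple data structure, which is one way to keep a plan concise and its details in separate reference documents. The section names follow this article; the dataclass itself and its field types are a hypothetical illustration, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Skeleton mirroring the four framework sections in this article."""
    introduction: str = ""                            # purpose, background, scope, team, schedule
    requirements: list = field(default_factory=list)  # functional, design, integration items in scope
    test_approach: str = ""                           # objectives, techniques, completion criteria
    approvals: list = field(default_factory=list)     # sign-offs from all responsible parties

plan = TestPlan(introduction="Validate the release 2.0 enhancements")
plan.requirements.append("FR-1: create, edit, and delete records")
```

Keeping the plan in a structured form like this also makes it easy to add test cases and data later while remaining subject to change control.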

Test Plan Framework

Introduction

This section establishes the scope and purpose of the test plan. It is where the fundamental aspects of the testing effort are described.

Purpose: State why the test plan was developed, that is, its objectives. This subsection describes the testing strategy and the approach that will validate the quality of the product prior to release. It also identifies the resources required for the successful completion of the testing effort (SQATester.com 2000).

Background: Include background information. Explain any events that caused the test plan to be developed. This can include implementing improved processes, or the addition of new environments or functionality.

Scope: The scope comprises the testing work that must be done in order to deliver a product with the specified features and functions (PMBOK® Guide 2000, p. 206). Provide a brief explanation of the nature of the project to create a context for understanding the test plan, and identify exactly what is being tested.

Project Testing Team: Document who is involved with the testing effort and their areas of responsibility; include the development stages and the potential risks associated with each resource. In “Classic Testing Mistakes,” Marick (1997) groups common testing mistakes into five categories, two of which relate directly to resources: (1) the role of the testing team and (2) who does the testing.

Schedule and Milestones: Develop a project plan showing the phases, tasks, and resources required for test plan execution. Project Managers should plan for between 30% and 70% of a project's effort to be expended on verification and validation activities, including testing (Perry 2000). Update the plan as needed to reflect events such as changes in deadlines or available resources. Include milestones indicating when the application under test will be made available for testing, and the estimated time for executing test cases. Specify whether frequent builds will be provided on a regular basis during the test cycle, or when system components are expected to be ready for testing. Like any development activity, testing consumes effort, and effort costs money.

Project Information: Identify all the information that is available in relation to this project. Examples of project information include: User documentation, project plan, product specifications, training materials and executive overview materials, requirements documentation, any available documentation regarding the hardware, system software, database, development language, architecture, and major functions to be tested (SQATester.com 2000). List all the deliverables associated with the testing effort and where copies of these deliverables or documents may be located.

Requirements

This section of the test plan lists all requirements to be tested. Abbott (2001), quoting Dean Leffingwell of Rational Software, estimates that between 40% and 60% of software defects and failures can be attributed to bad requirements. Any requirement not listed is outside the scope of the test plan. The day you're held accountable for a released bug in an untested area, you'll be glad you had a written, signed document that shows what was in and out of scope when the testing effort was carried out (Connolly 2000).

Always test against a specification, and include the specifications in the test plan. If tests are not developed from a specification, testing is not complete. Stick to the initial requirements as much as possible; be prepared to defend against changes and additions once development has begun, and to explain their consequences. The addition of new functionality is one of the most common reasons for extended project deadlines and faulty testing (Abbott 2001). If changes are necessary, they should be adequately reflected in related schedule changes. The requirements included in the test plan should cover all functional, technical design, and integration requirements.

Functional Test Requirements: List all functions to be tested, for example creating, editing, and deleting records. This is a fairly comprehensive listing for a full system test, or it may reference another document.

Technical Design Requirements: Testing the user interface, menu structures or other forms of design elements are also listed in this section.

Integration Requirements: The requirements for testing the flow of data from one component to another may be included if it is part of the test plan.
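Because any requirement not listed is outside the scope of the test plan, it helps to keep an explicit mapping from listed requirements to the test cases that exercise them, so that untested requirements surface before sign-off. The following is a minimal sketch; the requirement IDs, test case names, and helper function are hypothetical illustrations:

```python
def untested_requirements(requirements, test_cases):
    """Return requirement IDs that no test case covers.

    `test_cases` maps each test case name to the list of
    requirement IDs it exercises.
    """
    covered = {req for reqs in test_cases.values() for req in reqs}
    return sorted(set(requirements) - covered)

requirements = ["FR-1", "FR-2", "TD-1", "INT-1"]
test_cases = {
    "tc_create_record": ["FR-1"],
    "tc_edit_record": ["FR-2"],
    "tc_menu_layout": ["TD-1"],
}
# INT-1 has no covering test case and should be flagged before sign-off.
gaps = untested_requirements(requirements, test_cases)
```

Even maintained by hand in a spreadsheet, this kind of traceability check makes the in-scope/out-of-scope boundary of the plan verifiable rather than implicit.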

Test Environment Architecture: Diagram the components that make up the system under test. Include data storage and transfer connections and describe the purpose each component serves including how it is updated. Include any hardware or software requirements, such as the primary and secondary operating system.

Test Approach

Use this section to describe how the test objectives will be met for each type of testing that may be part of the test plan: unit, function, integration, system, volume, stress, performance, configuration and/or installation testing. For each subset, detail the following:

Objective: The overall objective this strategy is designed to meet. For a complete system test, this may be a statement that all functional requirements must behave as expected or as documented.

Technique: Document how test cases were developed, the tool(s) used to store them and where they can be found, how they will be executed, and the data to be used. Make notes here if tests are to be performed in cycles or in concert with other testing efforts.

Special Considerations: The test approach should contain any unique or necessary system setup, data or other test dependencies; environment conditions or other aspects that are required to establish a known state for testing.

Test Cases/Scripts: List or refer to the actual test cases and scripts to be carried out to implement the plan.

Completion Criteria: Record the criteria to be used to determine pass or fail tests and the action that is to be taken based on test results.
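Completion criteria are often expressed as a simple gate over recorded test results. As an illustrative example only (the function name, result format, and the default all-tests-must-pass threshold are assumptions, not drawn from this article):

```python
def release_gate(results, required_pass_rate=1.0):
    """Decide whether the test cycle meets its completion criteria.

    `results` maps test case names to True (pass) or False (fail).
    The default criterion requires every test to pass.
    """
    passed = sum(1 for ok in results.values() if ok)
    rate = passed / len(results)
    return rate >= required_pass_rate

results = {"tc_login": True, "tc_create": True, "tc_delete": False}
release_gate(results)       # fails: only 2 of 3 tests passed
release_gate(results, 0.6)  # passes under a relaxed 60% criterion
```

Whatever the criteria are, recording them in the plan before execution keeps the pass/fail decision from being renegotiated under schedule pressure.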

Assumptions and Risks: Identify any outside projects or issues that may impact the effectiveness or timeliness of the test effort. Specify contingency plans for each risk.

Tools: Document the tools employed for testing. Cite the vendor, version and the help desk number to call for support (SQATester.com 2000).

Defect Tracking and Reporting: Document the tool and process used to record and track defects. List any reports to be produced and include recipients, frequencies, delivery mechanisms and examples. Identify team resources involved in the defect tracking process. Describe any ratings, categories or classifications used to identify or prioritize defects. Following are sample categories for prioritizing defects:

Critical—denotes an unusable function that causes an abend or general protection fault, or when a change in one area of the application causes a problem elsewhere.

Severe—a function does not perform as required or designed, or an interface object does not work as presented.

Annoyance—function works but not as quickly as expected, or does not conform to standards and conventions.

Cosmetic—not critical to system performance: misspelled words, incorrect formatting, vague or confusing error messages or warnings.
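The four severity categories above can be encoded so that a defect-tracking process sorts the worst problems first. A minimal sketch using Python's standard `enum` module; the numeric ordering values and sample defects are assumptions for illustration:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Defect categories from most to least urgent."""
    CRITICAL = 1   # unusable function: abend, general protection fault, ripple failures
    SEVERE = 2     # function or interface object does not work as designed
    ANNOYANCE = 3  # works, but slowly or against standards and conventions
    COSMETIC = 4   # misspellings, formatting, vague or confusing messages

defects = [
    ("typo on login page", Severity.COSMETIC),
    ("save causes abend", Severity.CRITICAL),
    ("report runs slowly", Severity.ANNOYANCE),
]

# Triage: address the lowest-numbered (most urgent) severities first.
triaged = sorted(defects, key=lambda d: d[1])
```

A shared, ordered classification like this keeps prioritization consistent across the team and makes defect reports directly comparable.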

Approval

Properly constructed, the test plan is a contract between the project development team and the client, describing the role of testing in the project (Perry 2000). The test plan should be reviewed by all parties responsible for its execution and approved by the test team and the product and project managers. It may not be necessary, however, to share the test plan in its entirety with the Project Sponsor; an overview of the introduction may be more helpful and less confusing than the full document.

For the technical personnel, provide approval signatures at the bottom of the test plan. A walkthrough meeting with all parties in attendance is the most effective method of obtaining test plan approval (Hayes 1997). If each phase of a software development project had to pass a milepost clean and defect free, not only would defects and design errors be reduced, but project managers would also have a much better handle on how schedules were slipping, and estimation for future projects could be done more precisely.

Conclusion

Every project has its own unique personality; therefore, the testing process is constantly being customized and improved. Reusable processes provide a foundation that lets team members come up to speed quickly, adding value and a sense of direction to each project. This process also improves team communication and eliminates many of the negative effects of developing and testing systems. Testing should be done with a holistic view in order to be successful (Chillarege 1999). Establishing a plan and using a test-as-you-go approach is not a silver bullet; it does not solve every development problem, especially on projects under extreme time constraints, but it can give future software projects a real opportunity not to fail (Hayes 1997). With the test plan available, testing is organized, tests are run, and the results are analyzed. When the test effort is complete, document the results, identify discrepancies between the plan and the actual implementation, and record how those discrepancies were handled. Then prepare for the next successful project.

From this one doomed project, this project manager has learned a few lessons. Without documentation of requirements upon which to build a solid test plan, a project has minimal chance for success. She assumed that the testing process for the initial project release was satisfactory without knowing what testing methodology was being used, and never asked. Prior to accepting a project with a history, review all documents. In the future she will be wary of taking on any project that does not have a test plan. If there is no test plan available, create one — regardless of the time constraints. This Project Manager has come to believe, as an old proverb states, “If you fail to plan — plan to fail.”

References

Project Management Institute. 2000. A Guide to the Project Management Body of Knowledge (PMBOK® Guide) - 2000 edition. Newtown Square, PA: Project Management Institute.

Abbott, B. 2001, March 5. Test Center: Requirements Set the Mark. InfoWorld [Online]. Available: www.elibrary.com [2001, February 28].

An Introduction to Software Testing. 1996, September 20. Information Processing Limited (IPL). [Online]. Available: http://www.Teleport.com/~qcs/papers/p820.htm [2001, March 24].

Chillarege, R. 1999, April 26. Software Testing Best Practices. IBM Research—Technical Report, RC 21457 Log 96856, p. 1–11.

Connolly, P. J. 2000, September 18. Building better apps for your business via thorough testing. InfoWorld, 22 (38), p. 58.

Hayes, F. 1997, March 31. Test early and often. ComputerWorld, 31 (13), p. 56.

Hower, R. 2001, January 30. Software QA and Testing Resource Center—FAQ Part 1. Software QA and Testing Resource Center. [Online]. Available: http://www.softwareqatest.com/qatfaq1.html [2001, February 28].

Marick, B. 1997. Classic Testing Mistakes. Testing Foundations—Consulting in Software Testing. [OnLine]. Available: http://www.testing.com/writings/classic/mistakes.com [2001, February 28].

Perry, W. E. 2000. Effective Methods for Software Testing (2nd ed.). New York: John Wiley & Sons, Inc.

Software Testing Institute. 2000. Test Plan Sample. [Online]. Available: http://www.sqatester.com/documentation/testplansmpl.htm [2001, March 12].

Software Testing Newsletter. 1997, Summer/Fall. Software Testing Institute. [Online]. Available: http://www.ondaweb.com/sti/testplan.html [2001, March 12].

SQATester.com. 2000. Test Plan Sample. [Online]. Available: http://www.sqatester.com/documentation/testplansmpl.htm [2001, March 12].

Proceedings of the Project Management Institute Annual Seminars & Symposium
November 1–10, 2001 • Nashville, Tenn., USA
