Agile project management methods, such as Scrum, are based on a lightweight (or lean) process model and are intended to be augmented as necessary by specific teams for their specific project context. This is in contrast to more traditional project management methods, which may be rich with well-defined processes and deliverable templates and are intended to be pared down as necessary for a specific project (or PMO) context.
The basic Scrum process model includes product backlog management, sprint management, and release management. In classic (or “vanilla”) Scrum, the product backlog is used to evolve the product’s functional and non-functional requirements over the course of product development. Product testing is performed by the Scrum team during the sprint, and the Scrum team only demonstrates tested and peer-reviewed functionality at the end of a sprint. But classic Scrum does not define a formal requirements management process or test case management process that can be executed by organizations external to the Scrum team and outside the Scrum cycle.
The author proposes extensions to the Scrum process model to add formal requirements and test case management, with Requirement and Test Case work items that have their own state transitions (e.g., “Not Implemented,” “Partially Implemented,” “Fully Implemented”). The management of the requirements and test cases can be performed by individuals or teams that are only loosely coupled with the product development team (such as a product management team or a product assurance team).
“This is agile. Testing is integrated into our iterative development process. We create tested, releasable code each iteration, and we only release bug-free code. So, why do we need a separate testing team and bug-tracking process?”
This was the opening comment made by our lead developer after we called the development/testing process meeting to order. The Product Assurance (PA) manager rolled his eyes. Positions were already well established prior to the meeting. The Product Development team wanted to be able to track its product’s bugs as work items, just like development tasks in their development environment, and didn’t want to use a separate system to find and update information about bugs. The PA department wanted to track bugs for all the systems being released into production in a common PA testing environment, regardless of the development team, development method, or programming platform.
Furthermore, the PA department wanted to test the product against a formal set of product requirements, in a form they were comfortable with—a System Requirements Specification. But the Product Development team wanted to develop the product incrementally, with the product requirements in the form of User Stories, which were physically written (or printed) on 3 x 5 cards and stuck to a whiteboard!
The Product Development team and the Product Assurance department used different terms for similar concepts (“bug” vs. “defect,” “user story” vs. “requirement”), and the process that the development team used for resolving bugs (“New,” “Assigned,” “Resolved”) was different from (and less complex than) the process model that the PA team used. There wasn’t even an agreement about the notion of “Release.”
The outcome of this series of meetings was a hybrid process model in which a formal set of product requirements and formal product assurance processes existed outside the agile development context and the Product Development team was able to follow their “pure” agile process.
Agile Product Development with Scrum
Scrum is a lean project management method that was developed by Jeff Sutherland and Ken Schwaber (Schwaber & Beedle, 2002). In Ken’s own words, “Scrum hangs all of its practices on an iterative, incremental process skeleton” (Schwaber, 2004, p. 5), which is shown in Exhibit 1:
Scrum, as described by Schwaber (Schwaber, 2004, pp. 6–14), only consists of three roles (Scrum Master, Product Owner, and Team), three artifacts (Product Backlog, Sprint Backlog, and a Working—or “Potentially Shippable”—Increment of Product Functionality), and three processes (Sprint Planning, Daily Scrum, and Sprint Review). Scrum practitioners refer to these core roles, artifacts, and processes as “Vanilla Scrum.” In the spirit of lean thinking (Poppendieck, 2003), the core roles, artifacts, and processes are required for Scrum to work, but additional roles, artifacts, and processes may be added (carefully) to suit the enterprise’s business or development context. This, however, introduces two risks:
1) Weighing down a lean method with additional processes and artifacts such that it is no longer lean.
2) Eliminating or changing the core roles, artifacts, or processes without understanding the impact on Scrum from a systems point of view.
Scrum is a project management method, not a software engineering method. Lean software engineering methods, such as XP (Beck, 1999), augment Scrum with practices such as colocation of the team, pair programming, test-driven development, continuous integration, and “stories” to describe elements of customer-visible functionality.
According to Beck, a story represents a concept and is not a detailed specification. “Hideous written detail is not necessary if the customer can work with the programmers while they are programming” (Beck, 2001, p. 46). The Manifesto for Agile Software Development (Beck et al., 2001) states:
We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on
the right, we value the items on the left more.
Beck elaborates in Planning Extreme Programming: “Stories should be written a few at a time. The programmers should sit down with the customer, [agree on] between two and five stories, and then stop. The programmers then estimate those stories. While the programmers are estimating, they will certainly want to talk to the customer about the details and issues. So the programmers and the customer are in constant communication.” (Beck, 2001, p. 51)
Cohn states that “story descriptions are traditionally handwritten on paper note cards… The Card may be the most visible manifestation of a user story, but it is not the most important… cards represent customer requirements rather than document them… while the card may contain the text of the story, the details are worked out in the Conversation and recorded in the Confirmation.” (Cohn, 2004, p. 4)
Agile development does not require, and even discourages, the development of a comprehensive, monolithic Software Requirements Specification in the traditional sense. Conversation with the customer is the means of elaborating requirements. The details of the requirements are then captured in the form of acceptance tests and then code.
The primary output of an agile Product Development team is “potentially shippable” software. This implies that the Product Development team is responsible for testing as well as coding. The agile Product Development team members perform a variety of tasks during a sprint, including design, coding, and testing. To ensure thorough testing, the ideal Product Development team has at least one member with expertise in agile testing to help the entire team perform testing with the same discipline that they apply to design and coding.
Crispin describes a four-quadrant approach to agile testing in Agile Testing (Crispin, 2009), as shown in Exhibit 2:
Crispin elaborates, “Our product teams need a wide range of expertise to cover all of the agile testing quadrants. Programmers should write the technology-facing tests that support programming, but they might need help at different times from testers, database designers, system administrators, and configuration specialists. Testers take primary charge of the business-facing tests in tandem with the customers, but programmers participate in designing and automating tests, while usability and other experts might be called in as needed. The fourth quadrant, with technology-facing tests that critique the product, may require more specialists. No matter what resources have to be brought in from outside the development team, the team is still responsible for getting all four quadrants of testing done.” (Crispin, 2009, p. 105)
An agile Product Development team that follows a disciplined approach to testing is capable of producing a fully tested, potentially shippable product.
Agile Product Development Process Model
Agile product development with Scrum is based on completing a set of User Stories within a time-box called a Sprint. At the start of the Sprint, the team breaks down the work to complete the User Stories into Tasks. Over time, Bugs are discovered, which also need to be fixed in the Sprints. A process model for this is shown in Exhibit 3:
This process model consists of four Work Items: Sprint, User Story, Task, and Bug.
A Sprint has zero or more User Stories. Although a Sprint cannot start without at least one story, a future Sprint can be defined before any stories are associated with it. Typically, a Sprint will initially just have a Start Date and End Date. During Sprint Planning, a Capacity is determined (typically measured in person-hours or person-days), then a Sprint Goal is established, and then one or more User Stories are associated with the Sprint.
The state of the Sprint will normally transition from Not Started to In Progress to Done. In some cases, a Sprint is Abnormally Terminated.
A User Story belongs to zero or one Sprint. (In this model, an instance of a User Story is assigned to only one Sprint. If a Story is not completed in the Sprint to which it was assigned, a new instance of the User Story is created in the Product Backlog so the story can be completed in a future Sprint.)
A User Story has zero or more Tasks. Similar to the Sprint, a User Story cannot start without at least one Task, but User Stories typically exist in the Product Backlog with no Tasks prior to being associated with a Sprint. During Sprint Planning, a User Story will be assigned Tasks by the team.
A User Story also fixes zero or more Bugs. In this model, a User Story is used to create a new function or to fix (typically one, but possibly multiple) bugs; so, bug-fixing stories will be associated with one (or more) bugs.
A User Story has attributes such as Title (e.g., “Delete Item from Shopping Cart”) and Description (e.g., “As an online shopper, I want to be able to delete items that I’ve added to my shopping cart so I can change my mind about purchasing an item prior to checking out.”). Most of the functional and non-functional details about the User Story are stated in terms of “Conditions of Acceptance” (or sometimes “Acceptance Tests”). The User Story is assigned a Business Priority by the Product Owner, and the team estimates the complexity of the User Story in Story Points, possibly using “Planning Poker” (Cohn, 2006, p. 56). During Sprint Planning, the team may sequence the User Stories by assigning a Delivery Order, although this is typically just a rough plan of attack for the Sprint and is not binding on the team.
In this model, every User Story is required to have a Design Review (typically a peer review within the team) before it is done and a Done Review (typically by the Product Owner) after it is done. In this model, the workflow rules prevent a User Story from transitioning to “Done” if either the Design Reviewed By or the Done Reviewed By attribute is null.
The state of the User Story will normally transition from Not Started to In Progress to Done. It transitions from Not Started to In Progress when the first Task that belongs to it transitions from Not Started to In Progress. A User Story transitions from In Progress to Done when all the Tasks that belong to the Story are Done, and the Design Reviewed By and Done Reviewed By attributes are populated. If all the tasks are Done, but either (or both) of the Reviewed By attributes are null, then the User Story transitions from In Progress to Review Pending.
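The User Story state rules described above can be sketched in code. This is a minimal illustration, not any tool's actual workflow engine; all class, function, and field names (`Task`, `UserStory`, `story_state`, etc.) are hypothetical.

```python
# Sketch of deriving a User Story's state from its Tasks and its
# review attributes, per the rules described in the text.
# All names here are illustrative, not from any specific tool's API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    state: str = "Not Started"   # "Not Started" | "In Progress" | "Done"

@dataclass
class UserStory:
    tasks: List[Task] = field(default_factory=list)
    design_reviewed_by: Optional[str] = None
    done_reviewed_by: Optional[str] = None

def story_state(story: UserStory) -> str:
    """Derive the User Story state from its Tasks and review attributes."""
    if not story.tasks or all(t.state == "Not Started" for t in story.tasks):
        return "Not Started"
    if all(t.state == "Done" for t in story.tasks):
        # All work is done; the two reviews gate the transition to Done.
        if story.design_reviewed_by and story.done_reviewed_by:
            return "Done"
        return "Review Pending"
    return "In Progress"

story = UserStory(tasks=[Task("Done"), Task("Done")])
print(story_state(story))   # Review Pending: reviews not yet recorded
story.design_reviewed_by = "alice"
story.done_reviewed_by = "bob"
print(story_state(story))   # Done
```

Note that the state is derived, not set directly, which is what makes the workflow rule (“no Done without both reviews”) enforceable.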
A Task belongs to one and only one User Story (and cannot exist without a User Story). It has a Title and a Description. During Sprint Planning, the team will assign the Estimated Effort for the Task (typically in person-hours). The Task may be adopted by an Owner during Sprint Planning, or during the Sprint. For pair programming, the Assisted By attribute is used to keep track of the pair, although these attributes could also be named Owner1 and Owner2.
During the Sprint, the Owner updates the Work Remaining attribute at the end of each day that the Task is In Progress. This is required for the Sprint Burndown (Schwaber, 2004, pp. 11–12). Some teams track Work Completed for Tasks as well; however, many pure agile teams are adamantly opposed to tracking Work Completed at the task level—and it is not required for any Scrum artifact or process—so it is not shown in this model. In this model, however, the team determines whether a task requires a peer review during Sprint Planning. If Review Required is set to yes, the workflow rules prevent a Task from transitioning to Done unless Reviewed By is populated.
A Task typically transitions from Not Started to In Progress to Done. As noted above, it may transition to In Review if the work is done but the Reviewed By field is not yet populated. If the team determines that a task is not required to complete the User Story, it will be transitioned to Deleted (if the task is determined to be irrelevant) or Deferred (if possibly relevant—or “nice-to-do”—but not required to achieve the Conditions of Acceptance for this Story, in which case it will be considered in planning for a future Sprint).
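The daily Work Remaining updates described above are what feed the Sprint Burndown. The following is a minimal sketch of that computation, with an assumed (hypothetical) data layout—one row of end-of-day hours per Task:

```python
# Sketch of computing Sprint Burndown data from the daily Work Remaining
# updates described in the text. The data layout is illustrative:
# work_remaining[task] = hours the Owner reported at the end of each day.
work_remaining = {
    "Task A": [8, 5, 2, 0],
    "Task B": [6, 6, 3, 1],
    "Task C": [4, 2, 0, 0],
}

def burndown(updates: dict) -> list:
    """Total Work Remaining across all Tasks, one value per Sprint day."""
    days = len(next(iter(updates.values())))
    return [sum(hours[d] for hours in updates.values()) for d in range(days)]

print(burndown(work_remaining))  # [18, 13, 5, 1]
```

Plotting that list per day produces the familiar burndown chart; only Work Remaining is needed, which is why Work Completed can be omitted from the model.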
Bugs are created by the team members or submitted by persons external to the team when the product is found to behave in a confusing or undesired manner. A User Story to fix a Bug is created during Sprint Planning if/when the team commits to fixing the Bug in a particular Sprint. The User Story will also have a Task to fix the Bug, which is used to burn down the work required to fix the Bug.
In this model, Bugs and User Stories to fix Bugs are not normally created to track problems that are found and fixed by the team within a Sprint, although this is possible, depending on whether there is value in tracking a particular Bug outside of the context of completing the Sprint. More typically, the Bug work item is used to track bugs that need to be considered for fixing in a future Sprint.
Bugs have a variety of attributes specific to defect tracking, such as Date Discovered, Steps to Reproduce, Reported By, and Verified By.
The state of a Bug normally transitions from New to Active (which indicates the Bug is associated with a User Story that has been assigned to a Sprint) to Resolved (which indicates the User Story that fixes the Bug is Done) to Closed (which indicates that someone external to the team—possibly the person who reported the Bug—has verified that the Bug is resolved). It is possible that a Bug can transition from New or Active to Rejected if the Bug cannot be reproduced, or the system is determined to be working as designed, or a variety of other reasons.
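The Bug workflow above amounts to a small transition table. The following sketch encodes it; the state names follow the text, but the `transition` helper is an illustration, not any tool's API:

```python
# Sketch of the Bug workflow described in the text, as a transition table.
# State names follow the model; the helper function is illustrative.
BUG_TRANSITIONS = {
    "New":      {"Active", "Rejected"},
    "Active":   {"Resolved", "Rejected"},
    "Resolved": {"Closed"},
    "Closed":   set(),
    "Rejected": set(),
}

def transition(state: str, target: str) -> str:
    """Move a Bug to a new state, rejecting illegal transitions."""
    if target not in BUG_TRANSITIONS[state]:
        raise ValueError(f"Illegal Bug transition: {state} -> {target}")
    return target

state = "New"
state = transition(state, "Active")    # fixing Story assigned to a Sprint
state = transition(state, "Resolved")  # fixing User Story is Done
state = transition(state, "Closed")    # verified, e.g., by the reporter
print(state)  # Closed
```

Keeping the legal transitions in a table makes the workflow easy to audit and to compare against an external Defect tracking system's workflow.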
Some enterprises have a separate Bug (or Defect) tracking system, distinct from the Development tracking system. In this case, it may be necessary, or desirable, to track Bugs in both systems and have an automated process for synchronizing the information between them. It is only necessary to shadow, in the Development system, the subset of attributes that are of interest to the developers who fix a Bug (e.g., Steps to Reproduce), and to track the remaining attributes (e.g., Date Discovered, Reported By, Verified By) in the Defect tracking system. If the external Defect tracking system has a specific workflow/state-transition model, it is reasonable—and probably preferable—to use the same state transition model for both systems; otherwise, a mapping is required.
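The attribute-shadowing idea above can be sketched as a simple projection of the external record. All field names and the record layout here are hypothetical:

```python
# Sketch of shadowing only the developer-relevant subset of a Bug's
# attributes from an external Defect tracking system into the
# Development system, as described in the text. Field names are
# illustrative.
defect_record = {   # full record held in the Defect tracking system
    "id": 4711,
    "steps_to_reproduce": "Add item, delete item; cart total is wrong",
    "date_discovered": "2011-03-02",
    "reported_by": "QA team",
    "verified_by": None,
}

DEV_SHADOW_FIELDS = ("id", "steps_to_reproduce")  # subset developers need

def shadow(record: dict) -> dict:
    """Project the full defect record onto the developer-facing subset."""
    return {k: record[k] for k in DEV_SHADOW_FIELDS}

print(shadow(defect_record))
```

A real synchronization process would also map states between the two workflows (or share one model, as suggested above), but the projection is the core of the shadowing.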
Hybrid Product Development Process Model
Some enterprises are willing to release a product to a customer or end-user based on this agile process model, where the requirements are the User Stories and the testing is performed by the agile team during Sprints. However, many enterprises have a Testing (or Product Assurance) group, which is distinct from the Product Development group and will be testing a static version of the product against a static set of requirements irrespective of the development method and workflow used by the Product Development group.
Furthermore, some enterprises (e.g., a government supplier) may be creating a product based on a formal set of product specifications or requirements—which are not organized into User Stories that can be completed within the timeframe of a single Sprint. In this case, the product has to be tested against the formal requirements prior to being released to the customer.
To satisfy this business context, a hybrid process model is proposed in which requirements are developed and cataloged by a distinct Product Requirements team using a traditional (or non-agile) process. The product is developed, internally tested, and released by a distinct Product Development team using an agile process, specifically Scrum. After release, the Product is externally tested against the Requirements by a distinct Product Assurance team, using a traditional (or non-agile) product assurance process. The fixing of bugs reported by the Product Assurance team is performed by the Product Development team using the agile development process.
The Product Requirements team documents the Product Requirements in a System Requirements Specification (SRS), and/or catalogs the individual requirements in an external Requirements Management system, such as IBM DOORS. The requirements are not written in User Story format, and are not created for iterative, incremental development; hence, they cannot necessarily be implemented in the timeframe of a single Sprint.
To fit the agile development process, the agile Product Development team (specifically, the Product Owner) will create User Stories from the Product Requirements, which can be completed within single Sprints, thereby supporting iterative, incremental development.
The Product Assurance team will test the product against the Product Specifications outside of the Scrum process. The Product Assurance team expects a static set of Product Requirements (e.g., the SRS) from the Product Requirements team and a stable (i.e., tested and non-changing) Product released by the Product Development team. The Product Assurance team will develop structured test cases from the Product Specifications and execute them against the Product, generating Bugs that will be reported to the Product Development team for resolution.
The Product Development team will then resolve the Bugs within their Scrum process.
The hybrid process model that supports this is shown in Exhibit 4:
In this model, the Sprint, User Story, Task, and Bug work items exist exactly as they do in the Pure Scrum Process Model. This is because the Product Development team follows exactly the same processes. However, two new work items, Requirement and Test Case, are introduced, and they are managed outside the Scrum process.
A Requirement (functional or non-functional) consists of attributes such as Title, Description, and Conditions of Acceptance. If the requirements are imported from an external system (such as IBM DOORS), then the Requirement work item in the development management system will correspond to a requirement in the external system, and may contain only the subset of attributes of interest to the Product Development team.
Requirements are associated with User Stories in a many-to-many relationship. A Requirement is implemented by one or more User Stories (thus supporting incremental development of a requirement). Also, a User Story may implement one or more Requirements.
Requirements are associated with Test Cases, also in a many-to-many relationship. A Requirement is tested by one or more Test Cases, and a Test Case tests one or more Requirements.
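The two many-to-many relationships above can be modeled with plain link sets. This is a minimal sketch; the class name, fields, and the `Story-*`/`TC-*` identifiers are all illustrative:

```python
# Sketch of the many-to-many relationships described in the text,
# using plain link sets. All names and identifiers are illustrative.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    title: str
    stories: set = field(default_factory=set)     # implemented by 1..n Stories
    test_cases: set = field(default_factory=set)  # tested by 1..n Test Cases

req_a = Requirement("Shopping cart")
req_b = Requirement("Checkout")

# One Requirement may be implemented by several User Stories, and one
# User Story may implement several Requirements.
req_a.stories |= {"Story-1", "Story-2"}
req_b.stories |= {"Story-2"}           # Story-2 implements both Requirements

# Likewise for Test Cases.
req_a.test_cases |= {"TC-7"}
req_b.test_cases |= {"TC-7", "TC-9"}   # TC-7 tests both Requirements

print(sorted(req_a.stories))   # ['Story-1', 'Story-2']
```

In a work item tracking tool these link sets correspond to typed work item links; the traceability queries shown later simply walk these links.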
Requirements begin in the Not Implemented state. When the first User Story that implements a Requirement is Done, the Requirement transitions to Partially Implemented. When all of the User Stories that implement a Requirement are Done, the Requirement transitions to Fully Implemented. If a Requirement is implemented by a single User Story, then it transitions directly from Not Implemented to Fully Implemented. If a User Story is subsequently associated with a Requirement that is already Fully Implemented, the Requirement reverts to Partially Implemented until the new User Story is Done.
If a Requirement is associated with one or more Test Cases, a Requirement transitions from Fully Implemented to Tests Passing when all the associated Test Cases are Passing. If Test Cases are subsequently associated with a Requirement that is already in Tests Passing state, the Requirement reverts to Fully Implemented until the new Test Cases are Passing.
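The Requirement state rules in the two paragraphs above reduce to a single derivation over the linked work items. The sketch below assumes the state is recomputed whenever a linked Story or Test Case changes (or a new one is linked), which is what produces the reversion behavior; the function name is hypothetical:

```python
# Sketch of deriving a Requirement's state from the states of its
# associated User Stories and Test Cases, per the rules in the text.
# Recomputing after every link or state change yields the reversions
# described (e.g., Fully Implemented back to Partially Implemented
# when a new, not-yet-Done Story is linked).
def requirement_state(story_states: list, test_case_states: list) -> str:
    done = [s == "Done" for s in story_states]
    if not story_states or not any(done):
        return "Not Implemented"
    if not all(done):
        return "Partially Implemented"
    # All implementing User Stories are Done.
    if test_case_states and all(t == "Passing" for t in test_case_states):
        return "Tests Passing"
    return "Fully Implemented"

print(requirement_state(["Done", "In Progress"], []))   # Partially Implemented
print(requirement_state(["Done"], ["Passing"]))         # Tests Passing
print(requirement_state(["Done", "Done"], ["Failing"])) # Fully Implemented
```

A Requirement with a single implementing Story skips Partially Implemented entirely, exactly as the text describes, because `any(done)` and `all(done)` become true at the same moment.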
Requirements that are Fully Implemented (or Tests Passing, when Test Cases are associated) are ready for customer acceptance, via an optional transition to Accepted. This transition is especially helpful for tracking complex product development for a government acquisition, where the customer may review the product on a requirement-by-requirement basis, ensuring each Requirement is met; in this case, each Requirement will be transitioned to Accepted individually.
A Test Case consists of attributes such as Title, Description, and Test Steps. If the Test Cases are imported from an external system (such as HP Quality Center), then the Test Case work item in the development management system will correspond to a Test Case in the external system, and may contain only the subset of attributes of interest to the Product Development team.
As noted above, Test Cases are associated with Requirements in a many-to-many relationship. A Test Case tests one or more Requirements, and a Requirement is tested by one or more Test Cases.
Test Cases are associated with Bugs, also in a many-to-many relationship. A Test Case identifies one or more Bugs, and a Bug is identified by one or more Test Cases.
Test Cases begin in the Not Started state. When development of a Test Case begins, the Test Case transitions to In Development. When development of a Test Case is complete, the Test Case transitions to Ready for Testing. The Test Case will remain in the Ready for Testing state until its associated Requirements are Fully Implemented (all User Stories Done). The Test Case will then transition to Failing if any of the Test Steps fail. When all the Test Steps are passing, the Test Case transitions to Passing.
As noted above, Test Cases that fail identify Bugs; however the state of a Test Case is not tied to the state of associated Bugs. Rather, the state of the Test Case is only determined by the Test Steps.
This process model makes it straightforward to implement some interesting Requirements Traceability reports. Exhibit 5 shows a query called “Requirements Completion through Test,” implemented in Microsoft Visual Studio 2010 with Team Foundation Server. Three Requirements are shown that are implemented by two User Stories and are tested by one Test Case. In this example, the User Stories are Done and the Test Cases are Passing, so the Requirements are in the Tests Passing state, awaiting review and acceptance by the customer:
Exception reports, such as “Requirements with Failing Test Cases,” “Requirements without Test Cases,” and “Requirements without User Stories” are also easily implemented, as shown in Exhibits 6, 7, and 8:
In summary, a hybrid process model has been presented, which supports a Product Development team developing a Product with an iterative, incremental, and agile development process, such as Scrum. The User Stories are derived from a traditional set of Product Requirements that were developed by an external Product Requirements team outside the Scrum process. The Product is released to an external Product Assurance team that tests the Product against the Product Requirements, again outside the Scrum process, and generates Bugs, which are resolved by the Product Development team in the Scrum process, as shown in Exhibit 9: