Acceptance test-driven development

Better software through collaboration


Acceptance test-driven development (ATDD) helps with communication between the business customers, the developers, and the testers. This paper introduces the process of acceptance testing. The five Ws are covered: what acceptance tests are, when they should be created, why you should use them, who creates them, and where they are used. Acceptance test-driven development makes the implementation process much more effective. This material is adapted from Lean-Agile Acceptance Test-Driven Development: Better Software through Collaboration (Pugh, 2011).

What Are Acceptance Tests?

Acceptance tests are from the user's point of view — the external view of the system. They examine externally visible effects, such as specifying the correct output of a system given a particular input, as shown in Exhibit 1. Acceptance tests can show how the state of something changes, such as an order that goes from “paid” to “shipped.” They also can specify the interactions with interfaces of other systems. In general, they are implementation independent, although automation of them may not be. In addition, acceptance tests can suggest that operations normally hidden from direct testing (such as business rules) should be exposed for testing.


Exhibit 1 - What are acceptance tests?

Why Do ATDD?

Teams that follow the acceptance test-driven process have experienced efficiency gains. In one case, rework went down from 60% to 20%. This meant that productivity doubled because the time available for developing new features went from 40% to 80%. In another case, the workflows were working the first time the system was brought up. Getting the business rules right, as in the following example, prevents rework. Because the business customer, developer, and tester are involved in acceptance test creation, there is tighter cross-functional team integration. In addition, passing the acceptance tests visibly demonstrates that the story is complete.

When Are Acceptance Tests Created?

The value stream map for classical development is shown in Exhibit 2. After the requirements are elicited, they are analyzed. A design is created and code is developed. Then the system is tested. You will notice many loops going back from test to analysis, design, and coding. These loop-backs cause delay and loss of productivity. Why do they occur?

Frequently, the cause is miscommunication, in particular a misunderstanding of the requirements. The loop-backs are really feedback to correct these mistakes. There will always be a need for feedback, but quick feedback is better than slow feedback.


Exhibit 2 - Classical software development.

As you will notice in the revised value stream map in Exhibit 3, the acceptance tests are created when the requirements are analyzed. The developers then code using the acceptance tests. A failing test provides quick feedback that the implementation is not meeting the requirements.


Exhibit 3 - Software development with acceptance tests.

Who Authors the Tests?

The tests are authored by the triad of customer, tester, and developer. At least one example used in the tests should be created by the customer working with the tester and developer. The tester and developer can then create more examples and have them reviewed by the customer.

The developers connect the test to the system by writing short bits of code. Anyone — customers, developers, or testers — can run the tests. Acceptance tests can be manual. However, automating acceptance tests allows them to run as regression tests to ensure that new features do not interfere with previously developed features.

Acceptance tests are not a substitute for interactive communication between the members of the triad; however, they provide focus for that communication. The tests are specified in business domain terms. The terms then form a common language that is shared between the customers, developers, and testers.

How Do Acceptance Tests Fit Into the Overall Testing Strategy?

Acceptance tests are only a part of the overall testing strategy, as shown in Exhibit 4, a diagram adapted from Gerard Meszaros (Meszaros, 2007). They are the customer tests that demonstrate the business intent of a system, shown in the upper left box of the exhibit. The component tests beneath them are technical acceptance tests, developed by the architect, that help specify the behavior of large modules. Unit tests, in the lowest left box, are partially derived from acceptance and component tests and help the developer create easy-to-maintain code. The tests on the right (usability, exploratory, and property) examine what are often termed the non-functional aspects. They also need to pass to have a high-quality system.


Exhibit 4 - Testing strategies.

Acceptance Test Example

Suppose you had a requirement that states:

As the marketing manager, I want to give discounts to repeat customers so that I can increase repeat business.

There is one detail for this story: the customer discount is computed according to a business rule, the Customer Discount Rule. The details of this rule are:

If the Customer Rating is Good and the Order Total is less than or equal US$10.00,

Then do not give a discount; otherwise give a 1% discount.

If the Customer Rating is Excellent,

Then give a discount of 1% for any order.

If the Order Total is greater than US$50.00,

Then give a discount of 5%.

Now read the rule again and answer this question: For a customer whose Customer Rating is Good and has an order of US$50.01, what should the discount be?

Depending on how you read the rule, you may come up with 1%, 5%, or 6%. The rule is ambiguous. How do we make it clearer? The customer, developer, and tester come up with some examples, which turn into tests.

Suppose they come up with a table of examples, as shown in Exhibit 5. In the third set of values, the discount for the case in question should be 1%. That's what the business customer wanted. Imagine if the customer had not been consulted and if both the tester and developer had thought it should be 6%.

Now these examples are used as acceptance tests. These tests and the requirement are tied together. The tests help clarify the requirement and the requirement forms the context for the tests.


Exhibit 5 – Examples.
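Because Exhibit 5 appears here only as an image, the agreed-upon examples can also be captured executably. The following is a minimal sketch, assuming the triad's resolution of the ambiguous case (a Good customer with a US$50.01 order gets 1%); the name `compute_discount` echoes the xUnit listing later in the paper, but the implementation itself is illustrative:

```python
def compute_discount(order_total, customer_rating):
    """Discount percentage per the clarified Customer Discount Rule."""
    if customer_rating == "Excellent":
        return 5 if order_total > 50.00 else 1
    # Good customers: no discount at or below US$10.00, otherwise 1%.
    return 0 if order_total <= 10.00 else 1

# The six examples the triad agreed on: (order total, rating, expected %).
examples = [
    (10.00, "Good", 0),
    (10.01, "Good", 1),
    (50.01, "Good", 1),   # the ambiguous case: the customer wanted 1%
    (0.01, "Excellent", 1),
    (50.00, "Excellent", 1),
    (50.01, "Excellent", 5),
]
for total, rating, expected in examples:
    assert compute_discount(total, rating) == expected
```

Each row of the table becomes one assertion; a wrong reading of the rule (5% or 6% for the ambiguous case) would fail immediately.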

Where Acceptance Tests Are Implemented

There are at least four ways to implement the tests: a testing script that drives the user interface; a graphical or command-line interface to the underlying module; a unit testing framework; and an acceptance testing framework. Let's take a brief look at each case.

In the first case, the tester creates a testing script. For example, he or she logs on as a Good customer, starts up an order, and puts items into it. When the order total is US$10.01, he or she completes the order and makes sure that it shows a US$0.10 discount. Now, he or she repeats the process for the five other cases, increasing the possibility of carpal tunnel syndrome.

This script needs to be run at least once in order to ensure the discount is computed properly as part of the workflow; however, there are three other ways to check for the other cases.

A graphical or command-line interface could be created that accesses the module that computes the discount, as shown in Exhibit 6. The tester need only enter the customer rating and order total to determine whether the discount is correctly computed.
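Such a probe can be as small as a command-line wrapper around the discount module. A sketch, with hypothetical names (the inlined `compute_discount` stands in for the real module, using the clarified rule):

```python
import argparse

def compute_discount(order_total, customer_rating):
    # Stand-in for the module under test (clarified Customer Discount Rule).
    if customer_rating == "Excellent":
        return 5 if order_total > 50.00 else 1
    return 0 if order_total <= 10.00 else 1

def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Check the discount computation directly")
    parser.add_argument("rating", choices=["Good", "Excellent"])
    parser.add_argument("total", type=float)
    args = parser.parse_args(argv)
    pct = compute_discount(args.total, args.rating)
    print(f"Discount for {args.rating} / ${args.total:.2f}: {pct}%")
    return pct

main(["Good", "50.01"])
```

The tester can then check each example case by supplying a rating and a total, without clicking through the full order workflow each time.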


Exhibit 6 – Example accessing internal module.

The developer could create an xUnit test, as shown below. This automates the testing process; however, because the test is in the language of the developer, it can be more difficult to use as a communication vehicle and to ensure that changes in the business rule have been incorporated into the test.

class DiscountTest extends TestCase {  // JUnit 3 style
    public void testDiscountPercentageForCustomer() {
        SomeClass o = new SomeClass();
        assertEquals(0, o.computeDiscount(10.00, "Good"));
        assertEquals(1, o.computeDiscount(10.01, "Good"));
        assertEquals(1, o.computeDiscount(50.01, "Good"));
        assertEquals(1, o.computeDiscount(0.01, "Excellent"));
        assertEquals(1, o.computeDiscount(50.00, "Excellent"));
        assertEquals(5, o.computeDiscount(50.01, "Excellent"));
    }
}

One could use an acceptance test framework, which allows the tests to be readable by the customer. Exhibit 7 shows a table from Fit (Framework for Integrated Test); other frameworks, such as Cucumber and Robot Framework, use similar tabular formats.


Exhibit 7 – Example from FIT.

The table becomes a test and is tied to the underlying system through glue code called a fixture. When the test is run, the results appear in the table — green is a pass, red is a fail.
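The fixture's job can be illustrated with hand-rolled glue code; this is only the shape of the idea, not Fit's actual API. It walks each table row, invokes the system under test, and marks the row pass or fail:

```python
def compute_discount(order_total, customer_rating):
    # Stand-in for the system under test (clarified Customer Discount Rule).
    if customer_rating == "Excellent":
        return 5 if order_total > 50.00 else 1
    return 0 if order_total <= 10.00 else 1

def run_table(rows):
    """Fixture-style check. Each row: (rating, order total, expected %).
    Returns 'pass' (green in Fit) or 'fail' (red) for each row."""
    return ["pass" if compute_discount(total, rating) == expected else "fail"
            for rating, total, expected in rows]

table = [
    ("Good", 10.00, 0),
    ("Good", 50.01, 1),
    ("Excellent", 50.01, 5),
]
print(run_table(table))  # -> ['pass', 'pass', 'pass']
```

The customer reads and maintains the table; the fixture code is the only part the developer has to write.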

Anatomy of a Test

The anatomy of an acceptance test is shown in Exhibit 8. A test consists of three parts: the setup (given), the trigger (when), and the assert part (then). The setup specifies the initial state of a system. The trigger is the action or event that occurs. The assert part indicates the expected results. A test passes if the actual results match the expected results; otherwise, it fails. The discount computation test described previously could be expressed in this form as:


Given an Order total of 10.01
And a Customer rating of Good
When the discount percentage is calculated
Then the percentage should be 1%.


Exhibit 8 – Anatomy of an acceptance test.

This given-when-then format is also used for larger tests, such as checking the discount for an entire order:


Given a customer who has a Customer Rating of Good
When the customer places an order with a total of $10.01
Then the discount amount should be $0.10
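Transcribed into an executable test, the given, when, and then parts map directly onto setup, action, and assertion. A sketch, where `discount_amount` is a hypothetical stand-in for the system under test:

```python
def discount_amount(customer_rating, order_total):
    # Stand-in for the system under test, applying the clarified rule.
    if customer_rating == "Excellent":
        pct = 5 if order_total > 50.00 else 1
    else:
        pct = 0 if order_total <= 10.00 else 1
    return round(order_total * pct / 100, 2)

def test_discount_for_good_customer():
    # Given a customer who has a Customer Rating of Good
    rating = "Good"
    # When the customer places an order with a total of $10.01
    total = 10.01
    # Then the discount amount should be $0.10
    assert discount_amount(rating, total) == 0.10

test_discount_for_good_customer()
```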

Other Uses for Acceptance Tests

In addition to verifying requirements, acceptance tests can be used for other purposes. The number and complexity of the tests can help estimate the relative effort required to implement a requirement. In the example test, there are six combinations. If there were 36 combinations, the effort to implement the requirement would be relatively greater.

The number of acceptance tests that pass, relative to the total number of acceptance tests, is an indicator of how complete the implementation is. In the example, if only one test passes, the implementation has just begun. If all but one test passes, it is closer to completion.

ATDD from the Start

The acceptance test process actually begins at the definition of a feature or capability. For example, the user story about offering discounts is part of a marketing initiative. There is a purpose in offering discounts — to increase repeat business. How do you measure the effectiveness of the discount? You need to create an acceptance test, such as “During the next year, 80% of customers will have placed an additional order over their average orders of the past three years.” Often acceptance tests such as this one are termed project objectives. Objectives should be SMART: specific, measurable, achievable, relevant, and time-boxed.

If this acceptance test passes, then the feature is a success; that is, as long as there are no additional business features being added that might affect the outcome, such as providing a personal shopper for every Excellent customer. If the acceptance test fails, it may be due to a variety of reasons such as an insufficient discount or a competitor's discount. Or it may be that the objective is not achievable. For example, the economy is such that customers are not buying. In either case, you have a definitive measurement that suggests a course of action such as increasing the discount or abandoning the feature.


Summary

Acceptance tests represent the detailed requirements for a system. They are developed by the triad of customer, developer, and tester working together as part of requirement definition. They are used by developers and testers during implementation to verify the system. Using acceptance tests can double the efficiency of producing new features.


References

Meszaros, G. (2007). xUnit test patterns: Refactoring test code. Boston: Addison-Wesley.

Pugh, K. (2011). Lean-agile acceptance test-driven development: Better software through collaboration. Boston: Addison-Wesley.

© 2012, Kenneth Pugh, Net Objectives Inc.
Originally published as a part of 2012 PMI Global Congress Proceedings – Vancouver, BC, Canada


