Testing involves more than simply discovering bugs. It helps discover the causes of errors and eliminate them. It makes explicit the assumptions and requirements that customers have without getting overly technical. It ensures code integrity and compatibility with other code modules. And it minimizes the risk of errors caused by humans, machines, and the environment.
Planning the testing strategy should also begin as early as possible, as described by the Develop Test Strategy process goal.
This requires careful consideration: what will be tested, how much can be automated and what must still be done manually, what the testing environment(s) will include, who will do testing, what they need to know, and what standards they will follow.
Acceptance testing helps the team answer the question, “Does the code do the right thing?” It is a required practice in agile/lean approaches.
Acceptance tests come from the point of view of the user of the system. Factors to consider include:
- Writing acceptance test specifications up front improves understanding of the requirements.
- Acceptance testing can be used to integrate development and test, increasing the efficiency of development.
- Acceptance testing can be used to improve the process being used to develop the system by looking at how to avoid errors from occurring in the first place.
- Automating acceptance tests is very likely to lower regression costs.
Acceptance tests are as much about the understanding and definition of a requirement as they are about validating its implementation. When writing requirements, ask the question, “How will I know when this aspect of the requirement is satisfied?” This invites answers in the form of examples, which can (and should) become test cases.
This process of having an abstract statement (the requirement) and a corresponding concrete example (the test case) improves the conversation between developers and analysts and testers and customers.
Every feature and every story should have acceptance tests. These tests can be manual or automated.
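To make this concrete, here is a minimal sketch of a requirement paired with the examples that become its acceptance tests. The requirement (“orders of $100 or more ship free”), the `shipping_cost` function, and the threshold values are all hypothetical, invented purely for illustration:

```python
# Hypothetical requirement: "Orders of $100 or more ship free."
# Asking "How will I know this requirement is satisfied?" yields
# concrete examples, which become the acceptance test cases below.

def shipping_cost(order_total, flat_rate=5.0):
    """Illustrative implementation of the hypothetical shipping rule."""
    return 0.0 if order_total >= 100.0 else flat_rate

# Each example pairs the abstract statement with a concrete expectation:
assert shipping_cost(100.00) == 0.0   # exactly at the threshold: free
assert shipping_cost(250.00) == 0.0   # above the threshold: free
assert shipping_cost(99.99) == 5.0    # below the threshold: flat rate
```

Note how the boundary case (exactly $100.00) surfaced only because a concrete example forced the question; this is the kind of conversation between developers, analysts, testers, and customers that the abstract requirement alone does not provoke.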
Automated testing is the use of software to verify and validate the behavior and qualities of the software under test. It involves software to control the execution of tests, to compare actual outcomes with expected outcomes, to set up test preconditions, and to report results. Usually, test automation involves automating manual testing processes that are already being used.
Planning for test automation should happen at the earliest stages and should consider all levels of testing: unit, integration, system, system integration, and functional/acceptance.
As a general rule, every test that can be automated, including its related setup, configuration, and evaluation steps, should be automated, with priority given to the aspects that are the most time-consuming, error-prone, and tedious.
Common automated test frameworks include xUnit-based approaches (CppUnit, JUnit, PyUnit, NUnit), Boost.Test, FIT, and Cucumber.
When selecting an automated testing framework, consider the following:
- Adequate range of available interfaces: a GUI-driven interface as well as interfaces friendly to automation via scripts and the command line
- Software dependencies and system requirements
- Procurement difficulty and cost
- Community support
- Feature mismatch between automatable and manual interfaces
- Available API library
- Test data post-processing
- Fit in the overall agile life cycle tool suite
- Report generation
It is important to understand that the automated tests are really part of the overall software solution being developed, and test automation needs to be engineered together with the system under test. For example, besides the code for the tests themselves, test automation requires software to connect the tests with the software under test, and integrations with adjacent systems may need to be replaced by test-specific stubs that emulate those systems.
Accordingly, automated tests should evolve iteratively and incrementally over the course of development, together with the software under test.
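A minimal sketch of replacing an adjacent system with a test-specific stub follows. The `PaymentGatewayStub` class, its `charge` interface, and the `place_order` function are hypothetical names invented for this example; the point is that the code under test depends only on the interface, so the stub can stand in for the real integration:

```python
# Sketch of replacing an adjacent system with a test-specific stub.
# "PaymentGatewayStub", "charge", and "place_order" are hypothetical
# names used for illustration only.

class PaymentGatewayStub:
    """Emulates a real payment service so tests run without network access."""
    def __init__(self, approve=True):
        self.approve = approve
        self.charges = []  # record calls so tests can inspect them later

    def charge(self, account, amount):
        self.charges.append((account, amount))
        return "approved" if self.approve else "declined"

def place_order(gateway, account, amount):
    """Code under test: depends only on the gateway's interface."""
    status = gateway.charge(account, amount)
    return status == "approved"

# In a test, the stub stands in for the real integration:
stub = PaymentGatewayStub(approve=False)
assert place_order(stub, "acct-1", 42.0) is False
assert stub.charges == [("acct-1", 42.0)]
```

Because the stub is itself software that must be written, maintained, and kept faithful to the real system's behavior, it reinforces the point above: test automation is engineered together with, and evolves alongside, the system under test.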
Practices for the Developer
- Controlling work-in-process (WIP)
- Daily coordination
- Decomposing a feature into stories
- Design patterns
- Issues of quality
- Iteration demonstration and review
- Iteration planning meeting
- Iteration retrospective
- Responsibilities and practices
- Unfinished work
- Visual controls
- Writing tasks