As a project progresses, pressure mounts for developers to “just code” and drop everything else, such as automated testing. I have heard the comment “automated testing is a tester’s responsibility” many times. The whole point of automated tests is to help the team deliver better-quality code faster. If they are not doing that, then the team should stop writing them. On the other hand, if they are helping the team, then there is no argument for stopping, as stopping would slow the team down or reduce the quality of the application.
Because the automated tests help the team, they are the team’s responsibility. Not the testers, not the developers, not the BAs: the team owns the functional automated tests, just as the team is responsible for delivering a working application that satisfies the customer. There are tools available (TWIST is one that jumps to mind, but FIT/FitNesse and Concordion are others I know) that enable all team members, technical or not, to contribute to functional automated testing. I recently worked on a project where the BAs wrote requirements straight into Concordion, the developers then used those to create the methods that execute the automated tests, and the testers (there were only a couple) helped flesh out the tests being done and had a hand in maintaining them, for example identifying the cause of a failure and linking new test cases to existing methods.
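To make that workflow concrete, here is a minimal sketch of what a Concordion specification can look like. The scenario, names and numbers are illustrative assumptions, not taken from the project described above; the instrumentation attributes (`concordion:set`, `concordion:assertEquals`) are Concordion’s standard HTML commands.

```html
<!-- Sketch of a Concordion HTML specification. A BA can write the prose;
     the instrumented spans wire it to fixture code. Scenario is invented. -->
<html xmlns:concordion="http://www.concordion.org/2007/concordion">
<body>
  <p>
    Transferring
    <span concordion:set="#amount">100.00</span>
    from an account with a balance of
    <span concordion:set="#balance">250.00</span>
    leaves a balance of
    <span concordion:assertEquals="transfer(#balance, #amount)">150.00</span>.
  </p>
</body>
</html>
```

A developer then backs the specification with a fixture class that exposes a `transfer(...)` method (run through Concordion’s JUnit runner), which is the “methods to execute the automated tests” part of the split described above.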
A team should be made up of people with the right set of skills, not the right titles.
You can test code to make sure it works as specified by the requirements, which in agile is commonly known as acceptance testing. This type of testing places a lot of faith in those who created the requirements getting them correct. Although there is talent in writing acceptance criteria, there is little skill involved in checking that an application does what it has been specified to do, and I believe there is a misconception that this is most of what testers do (and, for some testers, all they need to do). That is one reason agile projects try to automate these tests.
Everybody makes mistakes; even the customer or business analysts will make mistakes, so who tests the requirements? Often this is left to user testing, which in some cases only happens with real users once the application is released. Requirements testing is where I believe testers on an agile project can make a big contribution. With enough domain knowledge it is possible to perform exploratory testing to help understand what the system does and does not do, and to find gaps in the requirements. A gap I have come across frequently involves importing and exporting. Both are captured as separate stories: the first is the ability to import a certain format; the second is to export so another system can then import the information. After implementing both stories it turns out you cannot import what you export, which is often a requirement but never stated.
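Once a gap like this is found, it can be pinned down with an automated round-trip check: export then re-import, and assert nothing was lost. The sketch below is a hypothetical illustration, assuming a simple comma-separated format; the class and method names are invented, not from any real project.

```java
import java.util.*;

// Hypothetical round-trip check for the unstated requirement that
// exporting and then importing must be lossless.
public class RoundTripCheck {

    // Export a list of records to the format another system imports.
    static String exportRecords(List<String[]> records) {
        StringBuilder sb = new StringBuilder();
        for (String[] r : records) sb.append(String.join(",", r)).append("\n");
        return sb.toString();
    }

    // Import the same format back into records.
    static List<String[]> importRecords(String data) {
        List<String[]> out = new ArrayList<>();
        for (String line : data.split("\n")) {
            if (!line.isEmpty()) out.add(line.split(",", -1));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> original = List.of(
                new String[]{"ACC-1", "1000.00"},
                new String[]{"ACC-2", "250.50"});
        List<String[]> reimported = importRecords(exportRecords(original));

        // The gap described above: both stories can pass individually
        // while this combined check fails.
        if (original.size() != reimported.size())
            throw new AssertionError("record count changed on round trip");
        for (int i = 0; i < original.size(); i++)
            if (!Arrays.equals(original.get(i), reimported.get(i)))
                throw new AssertionError("record " + i + " changed on round trip");
        System.out.println("round trip preserved all records");
    }
}
```

The point is that neither story’s own acceptance tests would catch this; only a test spanning both stories exercises the requirement nobody wrote down.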
To focus exploratory testing on finding gaps in the requirements, I like to do the following:
- Goal-directed testing. With other team members (BAs, customers, real users, developers), come up with goals a user would want to achieve with the system, then see whether you can achieve them and how easy and intuitive the system makes it. A goal would not be “I want to log in”; instead it should focus on what a user wants to accomplish in a session, for example “check my account balance and, if I have enough money, pay a bill”.
- Persona-based testing. In most cases different types of people will be using the system. Personas are often used by analysts to help understand who the users of the system are, what traits they have and what they would expect the system to do. If you don’t have access to real users, then using personas in conjunction with the goal-directed testing above is an excellent way of testing requirements: look at the application as if you were that person.