Since the automated tests help the team, they are the team's responsibility. Not testers, not developers, not BAs: the team owns functional automated tests, just as the team is responsible for delivering a working application that satisfies the customer. There are tools available (TWIST is one that jumps to mind, but FIT/FitNesse and Concordion are others I know) which enable all team members, technical or not, to contribute to functional automated testing. I recently worked on a project where the BAs wrote requirements straight into Concordion, the developers then used those to create the methods that execute the automated tests, and the testers (there were only a couple) helped flesh out the tests and had a hand in maintaining them, for example identifying the cause of a failure and linking new test cases to existing methods.
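A minimal sketch of what that split can look like with Concordion (the domain, file names and class names here are hypothetical, not from the project I mentioned): the BAs' requirement lives in an HTML specification, and a developer-written fixture supplies the methods Concordion calls when it checks the instrumented sentences.

```java
package example.orders;

import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

// Fixture backing a BA-written specification (OrderImport.html in the same package).
// The HTML spec might instrument a sentence like:
//   Importing <span concordion:set="#file">orders.csv</span> creates
//   <span concordion:assertEquals="importedOrderCount(#file)">3</span> orders.
@RunWith(ConcordionRunner.class)
public class OrderImportFixture {

    // Concordion calls this when it evaluates the instrumented sentence.
    public int importedOrderCount(String fileName) {
        return importOrders(fileName).size();
    }

    // Stand-in for driving the real application under test.
    private java.util.List<String> importOrders(String fileName) {
        return java.util.Arrays.asList("order-1", "order-2", "order-3");
    }
}
```

The point of the split is that the BA never has to touch Java and the developer never has to word the requirement; the testers can then extend the specification or trace a failing sentence back to the fixture method behind it.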
A team should be made up of people with the right set of skills, not the right titles.
You can test code to make sure it works as specified by the requirements, which in agile is commonly known as acceptance testing. This type of testing places a lot of faith in those who created the requirements getting them right. Although there is talent in creating the acceptance criteria, there is little skill involved in checking that an application does what it has been specified to do, and I believe there is a misconception that this is mostly what testers do (and for some testers, all they need to do). That is one reason why agile projects will try to automate these tests.
Everybody makes mistakes, even the customer or business analysts, so who tests the requirements? Often this is left to user testing, which in some cases only happens with real users once the application is released. Requirements testing is where I believe testers on an agile project can make a big contribution. With enough domain knowledge it is possible to perform exploratory testing to help understand what the system does and does not do, and to find gaps in the requirements. An example of a gap I have come across frequently involves importing and exporting. Both are captured as separate stories: the first is the ability to import a certain format; the second is to export so that another system can then import the information. After implementing both stories it turns out it is not possible to import what is exported, which is often a requirement but never stated.
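Once a gap like that is spotted, it can be turned into an explicit, automated check. A round-trip test along these lines is one way to do it (the names and the data are illustrative, and the two private methods are stand-ins for the real application calls):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Illustrative round-trip test for the unstated requirement above:
// whatever the system exports, it should be able to import back unchanged.
public class ExportImportRoundTripTest {

    @Test
    public void importingAnExportedFileRecreatesTheOriginalData() {
        String original = "customer,order,quantity\nACME,1001,5";

        String exported = export(original);       // the export story
        String reimported = importFile(exported); // the import story

        assertEquals(original, reimported);
    }

    // Stand-ins for the real export and import calls.
    private String export(String data) { return data; }
    private String importFile(String exportedData) { return exportedData; }
}
```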
To help enhance and target exploratory testing to find gaps in requirements, I like to do the following:
Run Test Suite From Quality Center is a copy of a “generic” VBScript; you will need to adjust it to your environment.
With my 2008 training budget I currently plan to take a Rapid Software Testing course, attend the CITCON and STANZ conferences, and look into a course on writing for the web.
People tend to use the number of test cases, the ratio of passed versus failed test cases, the number of defects found per test case, or similar metrics as measures of how good a test strategy is.
Although these metrics can give some indication, to my mind the best measure of the effectiveness of all your testing effort is how many unknown issues are found in production. By unknown issues I mean issues that were not detected prior to go-live. There is often debate about how long you should measure issues in production, and about how subjective the measure is: is something a defect or a new requirement? In the end it doesn’t matter. If the customers are unhappy and complaining about unknown issues, that is an indication the test strategy could be improved. Defects are obvious and testing could likely pick them up, but missing requirements can also be detected by the testing effort, especially UAT.
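If you want to put a number on it so that one release can be compared with the next, something like the sketch below could work. This is entirely my own framing of the idea above, and the names are made up; all it assumes is that you record where each issue was first found.

```java
// Sketch: of all the issues eventually found, what share were unknown until production?
public class TestStrategyEffectiveness {

    public static double unknownInProductionRate(int issuesFoundBeforeGoLive,
                                                 int unknownIssuesFoundInProduction) {
        int total = issuesFoundBeforeGoLive + unknownIssuesFoundInProduction;
        return total == 0 ? 0.0 : (double) unknownIssuesFoundInProduction / total;
    }

    public static void main(String[] args) {
        // e.g. 40 issues caught before release, 5 surfaced only in production
        System.out.printf("%.0f%% of issues escaped to production%n",
                unknownInProductionRate(40, 5) * 100);
    }
}
```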
I get great satisfaction when I finally get an automated test to run and pass, but I get even more satisfaction when a test that has been passing fails and picks up a defect. When automated tests are created they are normally created to pass, and are then used as a safety net to detect when a system stops behaving as it should. In the case of TDD the tests fail first, but start passing once the code has been developed. The team starts to put a lot of trust in the tests, and an area that has a lot of automated tests which are all passing will be considered to have good quality. But how confident can you be that the tests will detect a failure if one occurs?
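As a hypothetical example (not from any particular project), consider two JUnit tests over the same piece of code. Both are green today, but only one of them would actually fail if the behaviour broke, which is why a wall of passing tests does not by itself tell you how strong the safety net is.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Both tests pass today, but only one would fail if applyDiscount started
// returning the wrong amount. Deliberately breaking the code is one way to
// find out which of your tests you can actually trust.
public class DiscountTest {

    @Test
    public void weakTestOnlyChecksSomethingPlausibleCameBack() {
        assertTrue(applyDiscount(100.0) > 0); // still green if the discount is wrong
    }

    @Test
    public void strongerTestPinsDownTheExpectedValue() {
        assertEquals(90.0, applyDiscount(100.0), 0.001); // fails if behaviour changes
    }

    // Stand-in for the production code under test.
    private double applyDiscount(double price) {
        return price * 0.9;
    }
}
```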
On an agile project defects are fixed as they are found, so no story should be considered complete if there are outstanding defects against it. But this does not mean that defects do not persist. A defect may be found that is not related to a story in the current iteration, or the customer may decide that a defect on a story is not worth the effort it would take to fix and would prefer to focus on another story.
In this way a team can start to accumulate defects, which I classify as defect debt.
Recently I was telling somebody that testing will not improve quality; it is what you do with the information that testing provides that will improve quality. They then asked: can you improve quality without testing? I found this an interesting question, and I decomposed it into the following questions. Can you improve the quality of software without getting any information? And if testing is about providing information, does that mean that anything which provides information should be considered testing? I think you need to answer the second to prove the first is true. Breaking testing up into dynamic testing (which requires the code to be executed) and static testing (testing the code without executing it), I believe the most controversial is static testing. Is pair programming static testing? What about writing acceptance criteria, talking about the design of a system at the whiteboard, or even a standup meeting? These all provide information, but I don’t think they should all be called testing.