It’s very much about unit tests, but it’s a fascinating idea: mutate the code under test to ensure that some test fails if any logic is modified. If it works, this would be much more effective than code coverage…
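A minimal sketch of that idea, using a hypothetical `add` function and test: introduce a "mutant" (here, `+` swapped for `-`) and check whether the test suite catches it. Real tools (e.g. mutation-testing frameworks) automate the mutation step; this just illustrates the principle.

```python
def add(a, b):
    return a + b

# A "mutant" version: the + operator has been flipped to -.
# A good test suite should fail ("kill" the mutant) when run against it.
def add_mutant(a, b):
    return a - b

def run_test(impl):
    """Run the test against a given implementation; True means the test passed."""
    try:
        assert impl(2, 3) == 5
        return True
    except AssertionError:
        return False

print(run_test(add))         # True: the original implementation passes
print(run_test(add_mutant))  # False: the mutant is caught ("killed")
```

If a mutant survives (no test fails), that points to a gap the coverage report alone would not reveal.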
The problem I am trying to describe is with tests that are written incorrectly and therefore do not go “red” when they should. This is not something you will pick up by looking at coverage, because there is a test covering the functionality; the test is simply written incorrectly and does not fail when it would be expected to. Just as people make mistakes when writing code (also known as bugs), people make mistakes when writing tests. To find the bugs in the tests, you need to test them. Bugs in tests are generally not that the test will never pass; instead, the bug is that the test passes when it should fail.
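One common shape such a bug takes (a made-up example, not from the original discussion): a test that cannot go red because the assertion is a tautology, such as asserting on a function object instead of its result.

```python
def is_even(n):
    return n % 2 == 0

# Buggy test: the author forgot to call the function under test, so the
# assertion checks the function object itself, which is always truthy.
def broken_test():
    assert is_even  # bug: should be is_even(4)

# This "test" passes no matter how wrong is_even's logic becomes.
broken_test()  # no AssertionError raised
```

Coverage would report this line as exercised, yet the test can never fail, which is exactly why the tests themselves need testing.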
If the test hasn’t covered some case, that is more of a coverage problem.
Isn’t this more a case of thinking about what the right test should be? In the case above, 2 and 2 is a bad choice of inputs compared to 2 and 3 (or any other inputs where an easily fudged implementation couldn’t yield the correct result)?
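To make the point concrete (my own illustration): with inputs 2 and 2, a fudged implementation that multiplies instead of adds still produces the expected result, so the test passes for the wrong reason.

```python
# A fudged "add" that multiplies instead of adding.
def fudged_add(a, b):
    return a * b

assert fudged_add(2, 2) == 4    # weak test: passes, since 2 * 2 == 4
# assert fudged_add(2, 3) == 5  # would fail (2 * 3 == 6), exposing the fudge
```

Choosing inputs where the wrong operation gives a different answer is what makes the test actually discriminating.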
I’ve dived into TDD in the last few weeks and I’m struggling with the paradigm from the point of view of what the right test to write is and at what level of abstraction to write it. I think that’s pretty normal, though… and I agree: it is satisfying when a test suddenly fails.