I get great satisfaction when I finally get an automated test to run and pass, but I get even more satisfaction when a passing test fails and picks up a defect. Automated tests are normally created to pass and are then used as a safety net to detect when a system stops behaving as it should. In the case of TDD, the tests fail first and start passing once the code has been developed. The team starts to put a lot of trust in the tests, and an area with a lot of passing automated tests is often considered to be of good quality. But how confident can you be that the tests will actually detect a failure if one occurs?
For example, suppose I have a test for add functionality that takes 2 and 2 as inputs and expects 4 as the result. If the code performs a multiply instead of an add, the test still passes!
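A minimal sketch in Python (the function and test names here are just illustrative) shows how such a test passes even against the wrong implementation:

```python
def add(a, b):
    # Buggy implementation: multiplies instead of adding
    return a * b


def test_add():
    # With inputs 2 and 2, addition and multiplication both give 4,
    # so this test passes even though add() is wrong
    assert add(2, 2) == 4
```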
One way to improve your confidence in your tests is to test them. You can do this by injecting failures into the system and verifying that the tests detect the failures you expect them to detect. In some cases you can change the code under test, or the environment running the application, to inject the type of failure the test should catch. You may be able to change some data used by the system, or disconnect an interface, to simulate the failure condition. If you cannot change the system under test, look at changing the test itself. In the previous example, change the inputs from 2 and 2 to 2 and 3, with an expected result of 5. This would have picked up the difference between an add and a multiply.
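Continuing the illustrative sketch above, changing the inputs makes the weakness visible: the test now fails against the buggy implementation, which is exactly what we want it to do.

```python
def add(a, b):
    # Same buggy implementation as before: multiplies instead of adding
    return a * b


def test_add():
    # Changing the inputs to 2 and 3 with an expected result of 5
    # makes the test fail against the multiply bug (2 * 3 == 6),
    # confirming the test can detect this kind of defect
    assert add(2, 3) == 5
```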
Automated tests are code, and as with any code, bugs can exist. The most likely defect in an automated test is that it will not detect a defect in the system that it is expected to find. By testing your tests you gain a better understanding of the types of defects they will detect. You can also use this technique when refactoring test code, to make sure the refactoring has not had any adverse effects.