I was working on a script for an automated test, and one of its checks compared two similar things with different inputs. I decided to change one of them to make the difference between them more obvious, but to my surprise, when I ran the test everything passed. What gives? I had made a pretty significant change to the test; it should have been failing. I tried out some stuff interactively and it all looked ok, so I ran the test again: still passing.
Puzzled, I reviewed the test again. Everything looked fine, but clearly something was going wrong. I started adding debug output to the tests, and after a couple of tries I found that I had accidentally switched the order of some commands: I was checking the output before updating the engine. A simple swap later and everything behaved as I expected.
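To make the ordering bug concrete, here is a minimal sketch of what that kind of mistake can look like. The `Engine` class and method names are invented for illustration; they are not the actual system from the story.

```python
class Engine:
    """Hypothetical stand-in for the system under test."""
    def __init__(self):
        self.output = None

    def update(self, value):
        # Pretend transformation so the update has a visible effect.
        self.output = value.upper()

def check_buggy():
    engine = Engine()
    observed = engine.output   # bug: output captured BEFORE the update
    engine.update("hello")
    return observed            # stale value, no matter what update() does

def check_fixed():
    engine = Engine()
    engine.update("hello")     # update the engine first...
    return engine.output       # ...then read the output
```

The buggy version keeps returning the stale pre-update value, so any assertion written against it can pass regardless of how `update` behaves, which is exactly the kind of silent false green described above.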
This is just a normal story in the life of a tester. I’m sure anyone who has written automation scripts can relate, but let’s take a minute to think about the implications. What if I hadn’t tried that change? I would have merged the test, and it would have kept passing while telling us nothing. The test was passing because of a bug in the test itself, not because of the code we were trying to check. The code could have changed quite dramatically and this test would have happily kept on reporting green.
I’ll just get straight to the point here. Your tests are only as good as you make them, and since you are a human, you are going to have bugs in them sometimes. A good rule of thumb is to not trust a test that has never failed. I usually deliberately do something that I expect to cause a failure, just to make sure the test is checking what I think it is checking. Try it yourself. You might be surprised at how often you find bugs in your tests with this simple step.
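The "make it fail on purpose" habit can even be sketched in code: feed a check a deliberately wrong expectation and confirm it actually trips. This is a toy illustration, not any particular framework's feature; the function name is made up.

```python
def assertion_can_fail():
    """Sanity-check a test by giving it a known-bad expectation."""
    try:
        # Deliberately wrong: "hi".upper() is "HI", not "hi".
        assert "hi".upper() == "hi"
    except AssertionError:
        return True    # good: the check is capable of failing
    return False       # the check never failed - do not trust it
```

If a sanity check like this comes back `False`, the test is vacuous and its green status means nothing, which is the lesson of the story above. Mutation-testing tools automate this idea on a larger scale by making small changes to the code and checking that some test fails.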