Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.
Want to start an argument in the testing community? There are plenty of ways to do it, but one is to start talking about test cases. What are they? Should we use them? How should we use them? The arguments can go on and on. The reality is that there is a lot to think about when it comes to effective test documentation, and the discussion around test cases plays into this a lot. The purposes of test cases are often seen as showing what work was done and enabling us to go back and check that the code has not regressed. Both of those purposes are important, and when thinking about test documentation we need to think about what we are trying to achieve with it.
Re-running Tests
The question is how do we most effectively achieve them? Do we need to go back and be able to repeat every test we’ve done? Heck no. Think about it. Let’s say you spend 20 hours one week testing the product, and let’s say those tests are recorded in a way that lets you go back and re-run them. What happens next week? You do 20 more hours of new testing and you re-run the previous 20 hours of testing – your week is full. OK, so what about the week after that? Now you have 40 hours of old testing to get through. Clearly you will not do it all, and the more new testing you add, the less of the old testing you will be able to repeat. Taking the thought process to its extreme conclusion shows that you cannot reasonably expect to repeat every test you do. If that is the case, does it make sense to take on the overhead of detailing tests in a way that makes them repeatable? Nope. So when it comes down to it, we record in a detailed manner only those tests that we know we will want to repeat multiple times, and for us that is done in automated regression tests. Yes, I would consider automation scripts to be test documentation. They record (document) the testing done – how could they not?
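To make that arithmetic concrete, here is a minimal sketch in Python of how the re-run backlog grows. The 20-hour figure comes from the example above; treating a week as 40 hours of testing capacity is my own simplifying assumption:

    # Sketch of the re-run backlog, assuming 20 hours of new testing per
    # week and a hypothetical 40-hour weekly testing capacity.
    NEW_HOURS_PER_WEEK = 20
    CAPACITY = 40

    backlog = 0  # hours of previously recorded tests awaiting a re-run
    for week in range(1, 6):
        needed = NEW_HOURS_PER_WEEK + backlog  # new testing plus every old test
        print(f"Week {week}: need {needed} hours, have {CAPACITY}")
        backlog += NEW_HOURS_PER_WEEK  # this week's tests join the re-run pile

By week 3 you already need 60 hours against a 40-hour week, and the gap only widens from there.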
Demonstrating Coverage
So what about the other part of the equation? What do I do to show and record what work was done? I think I’ve mentioned it before, but I primarily use lightweight documentation of the work I’ve done. A few bullet points that show what I hit on and why it was interesting to do so. Some checklists of ideas that were considered. Notes from discussions with teammates on what the feature does. The documentation doesn’t need to include a lot. It needs to include enough for me to have an intelligent conversation about it in the future if I’m asked, and enough to convince myself and others that I have sufficiently tested the product.
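For illustration, here is the sort of lightweight note I might produce – the feature and every detail below are made up:

    Feature: CSV export
    - Pushed large exports (~100k rows) since the whole file is buffered in memory – no issues seen
    - Tried commas and quotes in column headers – found an escaping bug, logged it
    - Chatted with the developer: export runs asynchronously, so UI timeouts aren’t a concern
    - Not covered: localized date formats (out of scope for this release)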
Improvements
Nobody is perfect, and my test documentation is no exception. What could I change to make it better? I like the way I’m currently doing things, and it seems to fit well with the context I’m working in, so I wouldn’t make any major changes at this point. If I were to make any tweaks, they would be towards rolling things up. How do I better roll up and summarize what I have done so that it is more accessible to others? Sometimes my notes are written in a way that I understand but that is cryptic to others, so I will continue to experiment with small changes to improve on what I’m doing.
How about you? How do you document your testing?