When Automation Gets Boring

It has been said that a good heuristic for when to automate is when you are bored.  So then, I have a question: what do you do when your automation bores you?  I’ve spent years working on and tweaking our automated regression system to the point that it runs very smoothly.  There are occasional glitches and breakages, but in general it is a very reliable system.  It fails when there are problems (most of the time) and it passes when things are ok (most of the time).  Adding new tests is easy.  Sometimes I create one-off tests that are not intended to be re-run in the automation framework, just because of the useful debug and analysis tools we have built into it.

There is very little interesting work to do anymore, so what do we do?  Do we keep adding tests to the system and let ourselves settle into a comfortable rut?  Add tests.  Occasionally remove an out-of-date test.  Clean up some test scripts after a development change.  Debug a failing test.  Rinse and repeat.  It sounds easy and comfortable, but it is also boring.  Where is the challenge?  If boredom is a good heuristic for automating, maybe I need to automate some of this.  There are a few ways I can think of to automate the automation:

  1. I could improve the workflows around automating test script cleanup and debugging.  However, most of the low-hanging fruit in this area has already been picked, so that is probably not a great use of my time.
  2. I could tweak the automation framework to help discover new issues, instead of keeping it largely focused on checking for regressions.  This does sound interesting, but it has some significant challenges and might require forking the system to create a separate framework for this type of testing.
  3. I could focus on new automation initiatives that replace or radically enhance the framework so that it is doing something totally new and different.  For example, I could try to push testing down the stack by replacing these tests with lower-level unit tests, or I could add traceability so that we automatically know which sets of tests to run against which code changes.  I would love to do this, but my skill level isn’t high enough yet to do it on my own.  I could, of course, partner with developers on this, so it might still be an option.
  4. I could create ways to measure the value of the tests we have and/or the new tests we want to add.  A smooth-running system can hide the real cost of things (because it’s just a half hour here and a few minutes there) and in the long run lead to more and more time spent on the boring work of maintaining the tests and the system.  We do try to remove tests that are no longer useful, but it is hard to know which tests are in that category.  Having a way to measure the cost and value of a test would be helpful.  This is a subjective and difficult thing to do, but the very exercise of thinking about what makes an automated test valuable, and what automated tests cost, would be useful in itself.

I think the way forward for me is to focus in on point 4.  I’m somewhat leery of metrics, but it will be useful to spend time thinking of heuristics that can indicate the cost and value of automated tests.  Hopefully by having these heuristics explicitly defined I will be able to use them to start conversations that will lead to more radical changes in the way we approach automation on our team.  

Generating Test Ideas

Testing is both easy and hard.  When I start testing a new build or feature, I often find 4 or 5 defects very quickly and without much effort (whether it should happen like this or not is an article for another day).  However, once I have flushed out the easy-to-find bugs, I tend to get stuck and stop finding bugs, and once I stop finding bugs I get bored and move on.  This isn’t always a bad strategy, as boredom can tell you that there isn’t anything interesting left to find.  But sometimes others will come and look at something I’ve tested and quickly find issues that I should have been able to find, or we will mark the story as done and move on, only to discover some fundamental issues a few weeks or months later (of course, usually just as we are getting ready to release).  Often such an issue will open my mind to whole new avenues of testing, and I will end up finding several more issues based on the insights it provides.

Finding defects earlier in the process often saves a lot of time¹, and part of the point of testing is to find important defects early (at least before the customers find them), so I want to look at ways to get the inspiration to think about the product in new ways.  I have been reading through Elisabeth Hendrickson’s book Explore It! and have found some good insights there that have been helpful.  However, there is still something more that I think I need to do.

One of the things I have realized is that reporting on the status of a feature or story isn’t just helpful for my manager or the product owner; it is helpful for me.  The structure and discipline of creating a test report makes me think of things I might have forgotten (i.e. I am more strict about running through the checklist of things to consider).  As part of a test report I also try to give feedback on areas of risk, which means that the very act of preparing it makes me consider what those areas are and helps generate ideas about further risks.  It has also helped me communicate with others as I follow up on the information needed to fill out the report.  I continue to tweak the report template to make it present useful and timely information, but I will also be considering tweaks that make it a more useful tool for generating good test ideas.  One of the hardest things in testing is establishing a ‘done’ criterion, and perhaps a report template is one of the tools that can help with this.

¹ I think