It has been said that a good heuristic for when to automate is when you are bored. So then, I have a question: what do you do when your automation bores you? I’ve spent years working on and tweaking our automated regression system to the point that it runs very smoothly. There are occasional glitches and breakages, but in general it is a very reliable system. It fails when there are problems (most of the time) and it passes when things are ok (most of the time). Adding new tests is easy. Sometimes I even create one-off tests that are not intended to be re-run in the automation framework, just because of the useful debug and analysis tools we have built into it.
There is very little interesting work to do anymore, so what do we do? Do we keep adding tests to the system and let ourselves get into a comfortable rut? Add tests. Occasionally remove an out-of-date test. Clean up some test scripts after a development change. Debug a failing test. Rinse and repeat. Sounds easy and comfortable, but it is also boring. Where is the challenge? If boredom is a good heuristic for automating, maybe I need to automate some of this. There are a few ways I can think of to automate the automation:
- I could improve the workflows around automatic test script cleanup and debugging. However, most of the low-hanging fruit in this area has already been picked, so that is probably not a great use of my time
- I could try to tweak the automation framework to help discover new issues instead of having it largely focused on checking for regressions. This does sound interesting, but has some significant challenges which might require forking the system or creating a separate framework for this type of testing
- I could focus on new automation initiatives that could replace or radically enhance the framework so that it is doing something totally new and different. For example, I could try to push testing down the stack by replacing these tests with lower-level unit tests. Or I could try to add in traceability so that we can automatically know which sets of tests to run against which code changes. I would love to do this, but my skill level isn’t high enough yet to do it on my own. I could of course partner with developers on this, so it might still be an option
- I could create ways to measure the value of the tests that we have and/or new tests that we want to add. A smooth-running system can sometimes hide the real cost of things (because it’s just a half hour here and a few minutes there) and in the long run lead to more and more time spent on the boring work of maintaining the tests and system. We do try to remove tests that are not useful anymore, but it is hard to know which tests are in this category. Having a way to measure the cost and value of a test would be helpful. This is a subjective and difficult thing to do, but the very exercise of thinking about what makes an automated test valuable and what the expenses are for automated tests would be useful in itself.
I think the way forward for me is to focus on point 4. I’m somewhat leery of metrics, but it will be useful to spend time thinking of heuristics that can indicate the cost and value of automated tests. Hopefully, by having these heuristics explicitly defined, I will be able to use them to start conversations that will lead to more radical changes in the way we approach automation on our team.