In an earlier post I wrote about the importance of learning from your automation, and in that post I mentioned some of the tools I have to help me do that. I was asked about these on Twitter, and while it is tough to answer in a general way since we use a custom test automation platform, I thought it would be worth trying to explain.
— Katrina Clokie (@katrina_tester) April 26, 2017
This post will probably be slightly more technical than most of my posts, but hopefully it will be helpful even to those who are less technically minded (any code here is just an illustrative sketch, not the real framework).
In this post I want to talk about my test variation tool: how it works and how I use it to gain new insights. To do that, let's start at the beginning with our test automation framework. The framework was custom built for the product we were testing, but when we were writing it we didn't want it to be too tightly coupled to the actual product. There were a couple of reasons for this, not least of which was a desire to not have to modify the testing framework when making changes to the product code. As a result the framework was built in a layered approach that ended up being very helpful in many ways. The testing framework itself merely required you to specify a config file for each test with a few items in it (test description, keywords etc.). The config file then needed to define a Run() function (using Python syntax) which could call any arbitrary code as long as it returned certain status codes (pass/fail/timeout etc.) and messages.
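To make the layering concrete, here is a minimal sketch of what such a per-test config file might look like. Everything in it (the metadata names, the status strings, the run_product helper) is my own illustrative assumption, not the real framework's API:

```python
# Hypothetical per-test config file. The framework only requires some
# metadata plus a Run() function returning a status code and a message.

TEST_DESCRIPTION = "Solver smoke test with default options"
KEYWORDS = ["solver", "smoke"]

def run_product(script_path):
    """Stand-in for a 'helper' that launches the product on a run script.

    In the real framework this would start the system under test; here it
    just pretends the run succeeded so the sketch is self-contained.
    """
    return {"ok": True, "message": "solve completed"}

def Run():
    """Entry point the framework calls for this test."""
    try:
        result = run_product("scripts/basic_solve.run")
    except TimeoutError:
        return ("TIMEOUT", "solver did not finish in time")
    if result["ok"]:
        return ("PASS", result["message"])
    return ("FAIL", result["message"])
```

The framework never needs to know what Run() does internally; it only inspects the returned status, which is what keeps the framework decoupled from the product.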
This meant that the actual work of running the product under test, pointing to the run scripts used, and so on was done in a set of ‘helper’ functions that we could import and use in any given test. This gave the tests a high degree of customization and allowed anyone to easily write their own additional functions to use in any particular set of tests.
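Such a helper layer might look something like the following sketch; the function names, the `.run` extension, and the `engine` executable are all assumptions for illustration:

```python
# Sketch of a 'helper' module: ordinary functions that any test's config
# file could import and combine. Names and conventions are hypothetical.

import subprocess
from pathlib import Path

def find_run_scripts(test_dir):
    """Collect the run scripts belonging to a test.

    Assumes (purely for illustration) that run scripts use a .run extension.
    """
    return sorted(Path(test_dir).glob("*.run"))

def launch_engine(script_path, extra_args=()):
    """Start the product under test on one script and capture its output.

    'engine' is a hypothetical executable name standing in for the real
    product binary.
    """
    cmd = ["engine", str(script_path), *extra_args]
    return subprocess.run(cmd, capture_output=True, text=True)
```

Because these are plain importable functions rather than framework internals, any test author can layer new helpers on top without touching the framework itself.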
You can probably figure out by now how I managed to make my test variation tool work. I wrote a function, usable from any of the tests, that modified the test scripts. It would find the scripts that were going to be run as part of that test, and also look for a predefined file that contained the commands we wanted to add. It would then parse the test scripts and modify them to insert the requested commands at a point immediately before we asked the engine to solve. After that, control was handed back to the functions used to start up and run the system under test, but now instead of running the original scripts we would be running modified copies of the scripts.
This allowed us to do a lot of interesting things. For example, we could force every test in the system to use one particular option. This let us see how the option would behave in a wide range of settings, and we could find potential feature-combination issues in any tests that crashed. We could also use it to evaluate what might happen if we changed the default on an option: just force that option to the new default in all the tests and see what happened. In many ways this allowed us to better explore our product, and in fact when the developers saw some of the cool things we could do, they started to pull some of the ideas into the development code itself, making it even easier to experiment with these things.
Now, my particular framework – and the fact that I had complete access to it, since I was one of the people who wrote it – made it pretty easy for me to do something like this, but can we do it in general? I've moved to another team now, so I guess I'll be able to find out how easy it is to generalize, but I'll close with a few thoughts on how you might implement something similar. Even if you don't have a nicely modular testing framework, you could still do something like this fairly easily. The nice part about my approach was that the test mutations were integrated right into the run itself: I only needed to define commands in a particular file and then turn on a flag to tell the tests to use the test variation tool. But the modifications could just as easily be done externally to the test system. You could write a script that traverses your tests and modifies your test scripts before you even start your test run. This might be slightly less convenient, but in theory it should be pretty straightforward to do.
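As a rough sketch of that external approach, a standalone pre-run pass might walk the test tree, back up each script, and rewrite it in place. The `.run` extension, the "solve" marker, and the function name are illustrative assumptions, as before:

```python
# Standalone pre-run variation pass: no framework hooks needed. Walk the
# test tree, keep a backup of each run script, and rewrite it in place
# with extra commands injected before the solve line.

import shutil
from pathlib import Path

def vary_tests_in_place(test_root, extra_commands, marker="solve"):
    """Rewrite every *.run script under test_root, keeping .orig backups."""
    changed = []
    for script in sorted(Path(test_root).rglob("*.run")):
        # Keep the original so the variation can be undone after the run.
        shutil.copy(script, script.with_name(script.name + ".orig"))
        out = []
        for line in script.read_text().splitlines():
            if line.strip().startswith(marker):
                out.extend(extra_commands)
            out.append(line)
        script.write_text("\n".join(out) + "\n")
        changed.append(script)
    return changed
```

You would run this once before kicking off the suite, then restore from the `.orig` backups (or a fresh checkout) afterwards.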
If I end up doing something like this on my new team I’ll post the results of that here as well, but in the meantime maybe you can give it a try with your tests. Who knows, your automation might just be able to teach you something!