In my last post I went onto my soapbox and shouted for a while about how testers need to consider increasing coding productivity as one of their goals in the work they do. It’s always fun to go on a rant for a while, and hopefully it was even helpful for some people, but I’ve jumped off the soapbox now, and here on the ground things look a bit different. I want to do this. I want to increase coding productivity – but how? What are some practical things I can do as a tester to increase the coding productivity of my team?
I want to share some of the ideas I have around this, but before I do that I need to insert a little caveat. These are ideas that I have for my team and my context. They may or may not make sense for your team and your context. I share them so that you can see some of my ideas and because ideas generate new ideas. Maybe you can take some of my ideas and mix them with some of your ideas and produce some cute little idea babies. But please don’t just blindly take what I’m talking about here and try to do the same thing in your context.
Ok that’s out of the way, so now what are some of the things I want to do to try and improve coding productivity on my team?
Fast Tests
I’ve mentioned this before and so won’t belabor the point here, but in general, faster tests equal more valuable tests. I want to have fast tests in place. In the short term this means that I’ll be going through the tests we have and shortening them where I can, but in the longer run it means pushing tests down to lower levels and running them on smaller units of code. There are so many things we do with high-level integration tests right now that we ought to be able to do with code-level or unit-level tests instead.
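To make the idea concrete, here is a minimal sketch of what "pushing a test down" can look like. The function and test names are hypothetical stand-ins, not from any real codebase: the point is that logic a slow end-to-end test covers indirectly can often be checked directly in milliseconds.

```python
def format_error_message(code: int) -> str:
    """Toy stand-in for app logic that a slow UI test might exercise."""
    messages = {404: "Not found", 500: "Server error"}
    return messages.get(code, "Unknown error")

# Instead of launching the app and navigating to the page that shows the
# message (seconds or minutes), we check the logic directly (milliseconds).
def test_known_codes_have_messages():
    assert format_error_message(404) == "Not found"
    assert format_error_message(500) == "Server error"

def test_unknown_code_falls_back():
    assert format_error_message(123) == "Unknown error"
```

The integration test still has a place for checking that the message actually appears on screen, but it no longer needs to cover every code path.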
Fast Run Times
This might seem related to faster tests (and it is in some ways), but there are many other things that also go into fast run times. We can run tests in parallel or on faster machines so that we can get through them more quickly. We can also automate more of the steps.
There are two parts to what I am talking about here. One is reducing the wall clock time it takes to get through a set of tests, which is a function of the speed of the tests and the ability to run them in parallel. The other is the effort it takes to get feedback from the tests. Right now we need to manually set up and run certain kinds of builds, and then once we are notified of their completion we can go back and parse through the results. Imagine instead that we automated those steps: set up and run all the tests, compare failures against a known ‘good’ build, and then provide a local run queue with just the test failures caused by this change. If we could do all of that without any human intervention, we would be a long way down the path of faster turnaround on the feedback.
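The "compare against a known good build" step is, at its core, a set difference. Here is a rough sketch with made-up test names showing the idea: surface only the failures this change introduced, so nobody has to eyeball a full results page.

```python
def new_failures(baseline_failures: set, current_failures: set) -> set:
    """Failures present in this run but not in the known 'good' baseline."""
    return current_failures - baseline_failures

# Hypothetical example data: the baseline build already had one known failure.
baseline = {"test_login_timeout"}
current = {"test_login_timeout", "test_message_location"}

# Only the failure introduced by this change lands in the local run queue.
queue = new_failures(baseline, current)
print(sorted(queue))  # ['test_message_location']
```

A real version would pull these sets out of your test runner's result files, but the core of the comparison is this simple.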
The point of having fast run times is so that the feedback can come in a timely manner. Having fast-running tests doesn’t solve the problem if it takes a long time to set up the actual builds and debug the failures. I want to reduce the amount of time from when a developer says ‘I want feedback on this change’ to the point where that feedback is actually available.
Relevant Feedback
Imagine you are making a change to the way error messages are handled in your app and you run some tests to check that nothing is broken. The run comes back with a number of tests that say that message X can’t be found in location Y. The point of your changes was to have message X show up in location Z instead so this information isn’t really news to you as a developer. It lets you know that indeed the messaging has moved away from location Y as you expected, but not much else. This is an example of feedback that is not relevant. What you really wanted to know was if there were any other messages unintentionally affected by this, or if there were still messages showing up in location Y.
Irrelevant feedback tends to be a big problem in tests, and it comes in different forms. Sometimes it is like the example above, where we are told things we already know or where we don’t get answers to the questions we wanted to ask. But it can come in more subtle forms as well. Sometimes a passing test run is a form of irrelevant feedback. This might seem counter-intuitive, so let’s turn to another thought experiment. Imagine that you made a minor change and you run a full suite of tests on it. However, in that suite only 1% of the tests actually execute any code remotely related to your change. The tests all pass, but how relevant is that feedback? Think about all those tests you ran that had no chance of failing. The fact that they passed is in no way relevant to you. Of course they passed. Nothing they checked was affected by your change.
So how do you make tests more relevant? There are a couple of things I want to do with the tests I’m currently working with. One is consolidation of things that are common. For example, if we have several tests checking a certain message, we would consolidate those together so that if we change something related to that messaging we would just update it in one spot (ahead of the test run) and be able to quickly and easily see if the changes occurred as expected. Another change I would like to make is to leverage tools that better show which areas of the code particular tests exercise. This way we can make more intelligent decisions about which tests to run for particular changes. Getting this to an automated state will take us quite a while, so in the meantime we are working on manually defining standard run queues that are more specifically targeted to certain kinds of changes. Having a better understanding of which tests map to which areas of the code will help us get more relevant results from our test runs.
I could go on with this, but it seems I’ve been quite long-winded already, so I’ll draw it to a conclusion here. I have a lot of ideas on how to improve my test automation. Maybe that is because there are a lot of problems with my system, or maybe some of these tweaks are the kinds of things you can change on your system as well. Hopefully this article has given you some ideas for changes you can make to help your automation increase your coding productivity.