It has been said that a good heuristic for when to automate is when you are bored. So then, I have a question: what do you do when your automation bores you? I’ve spent years working on and tweaking our automated regression system to the point that it runs very smoothly. There are occasional glitches and breakages, but in general it is a very reliable system. It fails when there are problems (most of the time) and it passes when things are ok (most of the time). Adding new tests is easy. Sometimes I even create one-off tests that are not intended to be re-run in the automation framework, just because of the useful debug and analysis tools we have built into it.
There is very little interesting work to do anymore, so what do we do? Do we keep adding tests to the system and let ourselves get into a comfortable rut? Add tests. Occasionally remove an out-of-date test. Clean up some test scripts after a development change. Debug a failing test. Rinse and repeat. It sounds easy and comfortable, but it is also boring. Where is the challenge? If boredom is a good heuristic for automating, maybe I need to automate some of this. There are a couple of ways I can think of to automate the automation:
- I could improve the workflows around automating test script cleanup and debugging. However, most of the low-hanging fruit in this area has already been picked, so that is probably not a great use of my time
- I could try to tweak the automation framework to help discover new issues instead of having it largely focused on checking for regressions. This does sound interesting, but it has some significant challenges and might require forking the system to create a separate framework for this type of testing
- I could focus on new automation initiatives that could replace or radically enhance the framework so that it is doing something totally new and different. For example, I could try to push testing down by replacing these tests with lower-level unit tests. Or I could try to add traceability so that we automatically know which sets of tests to run against which code changes. I would love to do this, but my skill level isn’t high enough yet to do it on my own. I could of course partner with developers on this, so it might still be an option
- I could create ways to measure the value of the tests that we have and/or new tests that we want to add. A smooth-running system can sometimes hide the real cost of things (because it’s just a half hour here and a few minutes there) and in the long run lead to more and more time spent on the boring work of maintaining the tests and the system. We do try to remove tests that are no longer useful, but it is hard to know which tests are in this category. Having a way to measure the cost and value of a test would be helpful. This is a subjective and difficult thing to do, but the very exercise of thinking about what makes an automated test valuable and what automated tests cost would be useful in itself.
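To make the last idea a little more concrete, here is a minimal sketch of what a cost/value heuristic for a single test might look like. Everything in it is a hypothetical assumption on my part: the bookkeeping fields, the 10x weighting of human time over machine time, and the penalty for false alarms are all illustrative numbers a team would want to debate and tune, not an existing tool or a recommended formula.

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    # Rough bookkeeping for one automated test over some time window.
    runs: int                  # how many times it ran
    failures: int              # how many times it failed
    real_bugs_found: int       # failures that turned out to be product bugs
    minutes_per_run: float     # machine time per run
    maintenance_minutes: float # human time spent debugging/updating it

def cost_value_score(t: TestRecord) -> float:
    """Higher is better: roughly, bugs found per hour of total cost.

    Assumed weights: human time counts 10x machine time, and each
    false alarm costs a tenth of a real bug's value in eroded trust.
    """
    machine_hours = t.runs * t.minutes_per_run / 60
    human_hours = t.maintenance_minutes / 60
    total_cost = machine_hours + 10 * human_hours
    false_alarms = t.failures - t.real_bugs_found
    return (t.real_bugs_found - 0.1 * false_alarms) / max(total_cost, 0.1)

# A flaky, high-maintenance test vs. a quiet, reliable one:
flaky = TestRecord(runs=500, failures=40, real_bugs_found=1,
                   minutes_per_run=2.0, maintenance_minutes=600)
solid = TestRecord(runs=500, failures=3, real_bugs_found=3,
                   minutes_per_run=2.0, maintenance_minutes=30)
```

Even a crude score like this makes the comparison visible: the flaky test comes out negative (its false alarms and maintenance outweigh the one bug it caught), while the quiet test scores well despite failing rarely. The exercise of arguing about the weights is probably worth more than the number itself.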
I think the way forward for me is to focus on point 4. I’m somewhat leery of metrics, but it will be useful to spend time thinking of heuristics that can indicate the cost and value of automated tests. Hopefully, by having these heuristics explicitly defined, I will be able to use them to start conversations that lead to more radical changes in the way we approach automation on our team.
Testing is both easy and hard. When I start testing a new build or feature, I often find 4 or 5 defects very quickly and without much effort (whether it should happen like this or not is an article for another day). However, once I have flushed out the easy-to-find bugs, I tend to get stuck and stop finding bugs, and once I stop finding bugs I get bored and move on. This isn’t always a bad strategy, as boredom can tell you that there isn’t anything interesting left to find. But sometimes others will come and look at something I’ve tested and quickly find issues that I could have found, or we will mark the story as done and move on, only to hit some fundamental issue a few weeks or months later (of course, usually just as we are getting ready to release). Often that issue will open my mind to whole new avenues of testing, and I will end up finding several more issues based on the insights it gives me.
Finding defects earlier in the process often saves a lot of time¹, and part of the point of testing is to find important defects early in the process (at least before the customers find them), so I want to look at ways to get the inspiration to think about the product in new ways. I have been reading through Elisabeth Hendrickson’s book Explore It! and have found some good insights in there that have been helpful. However, there is still something more that I think I need to do.
One of the things I have realized is that reporting on the status of a feature or story isn’t just something that is helpful for my manager or the product owner; it is something that is helpful for me. By having the structure and discipline of creating a test report, I think of things I might otherwise have forgotten (i.e., I am more strict about running through the checklist of things to consider). As part of a test report I also try to give feedback on areas of risk, which means the very act of preparing it makes me consider what those areas are and helps generate ideas about further risks. It has also helped me communicate with others as I follow up on the information needed to fill out the report. I continue to tweak the report template so that it presents useful and timely information, but I will also be considering tweaks that make it a more useful tool for generating good test ideas. One of the hardest things in testing is establishing a ‘done’ criterion, and perhaps a report template is one of the tools that can help with this.
¹ I think
What is testing? There are almost as many definitions of that as there are software testers, and I don’t think it can be contained in just one definition (at least not a succinct one). However, today I’m going to put forward a definition that points out and emphasizes some of what testing is. This is not intended to be the definitive definition of testing; it is merely something I’ve been thinking about recently.
Testing is figuring out how to align our judgement with the judgement of the customer and advocating for the product to change in ways customers judge as valuable
I think much of what we do as testers boils down to this. We are trying to figure out what customers will want and care about and trying to find any areas in the product where it doesn’t match these expectations. One of the hardest parts of our job is figuring out how to do this. How do we know what the mythical ‘customer’ wants? What do we do when two different customers want conflicting things? How do we distinguish between what I want and what a customer would want? How do we figure out the importance of the things customers don’t even know that they want, or those things that have an indirect impact on customers (clean code or testability features, etc.)?
I’ve just raised a lot of questions that I have no intention of answering in this blog post, but come to think of it, these same questions are also among the hardest parts of a coder’s job. Perhaps test and dev really are in the same game. Maybe as testers a big part of our job is to take what we have learned about what customers want (and the ways we have managed to figure that out) and share it with developers so that they can better write code that meets customer needs the first time around. Maybe being a ‘tester’ or a ‘developer’ isn’t what is important so much as being part of a team that can produce software that meets the customer’s needs.
Maybe we don’t need to sweat too much over what testing is, but should instead focus on what effective software development is. If effective software development efficiently produces software that is valuable to customers, where does testing fit in? I guess it depends on your team, how you work, and what your skills are.
For me, I think it means learning how to pass on much of what I have learned to the developers. I have skills that could help my team become more efficient, but sometimes those skills are wasted on things like finding bugs. Finding bugs is helpful, of course, but wouldn’t it be far better to work with developers on preventing those bugs? When I start testing a new feature and find 5 serious bugs in the first hour of testing, is that because I am a skilled tester? Maybe it is, but does this use my skills in a way that maximizes their usefulness to the team? Could I have helped the developers gain a perspective on the feature such that 4 of those bugs were fixed before the developer marked the coding as done?
Food for my thoughts. Let me know yours.
There are so many good blogs out there on software testing so why start another one? What value do I think I can really add here that isn’t covered by others? Why bother with this?
I used to blog, back before Facebook and Twitter. In those days it was what we used as social media. Things have changed (of course! – we are talking about software here) and I have changed as well. I’ve been working as a software tester for 8 years now and hopefully I have learned a few things along the way. However, I am also discovering that I’m getting to the point in my career where the learning curve has slowed down. The easy stuff has been covered. What is left now is the stuff that takes a lot more work and effort to learn. This is the stuff that really stretches me beyond my natural talents and that requires digging deep on different topics.
In high school I was homeschooled, which meant that when I was stuck on something I had to figure out my own way out of it. Sometimes that meant finding someone who could help me (for example, a family friend who was an engineer and could help me with my algebra), but one of the most useful skills I found was explaining things to my younger siblings. I learned very early that one of the best ways to learn something difficult is to explain it to others. Doing this revealed the flaws in my own thinking and showed me where I wasn’t fully understanding things.
I think it is time for me to get back into that mode. It’s time to start explaining things to others so that I can come to clarity for myself. It’s time to put into words what I am trying to do so I can see where I’m missing things and where I don’t understand concepts that are crucial for me to get if I want to continue to advance in my abilities as a tester. I’ve been reading and taking in a lot of information on how to be effective at software testing in the past few years, and now it is time to see how well I really know it.
So come on and join me. If you’re confused, let me know. If you disagree, let me know. If you like it, let me know. Maybe we can learn a few things together!