Boredom

One of my favorite heuristics for when to automate something is boredom: if I’m doing the same thing over and over again, perhaps it is something I can or should automate. Recently, though, I was struck by the thought that boredom can be a heuristic for something else as well.

I was testing something in the UI and getting kind of bored, so my heuristic kicked in and I started thinking about what I could automate. The work I was doing was tricky to automate, and after a while I started to feel like I might be wasting my time, so I stopped to reflect on what I was doing.

I was trying to check various combinations of inputs in a particular area, but when I stopped to think about why I was doing it, I realized that I might be going down the wrong track. The point of my testing was to make sure that the UI would work with these kinds of inputs. Once the UI passed them on, though, it really had nothing more to do with them; the lower-level engine would take over. I knew that we had tested the engine to make sure it properly dealt with various combinations of these inputs, so why was I doing that again here? When I stopped to think about it, I realized that the risk of the UI treating one combination of inputs differently from the others was tiny, and there was not much value in the kind of coverage I was trying to get.

In this case, my boredom didn’t point to the need to automate; it pointed to the need to stop what I was doing. Sometimes when you are bored, you might be doing something useless. If you are testing and you are entering the same thing 50 times with a slight variation each time, that is a prime candidate for automation. Don’t instantly start automating, though. Stop and ask yourself: “if these changes are so small that they bore me like this, are they really significant enough that I need to check each one?” The answer might be yes, but it might also be that the risk is so low that even the effort spent on automating the checks isn’t worth it.

My new heuristic around boredom is this: if you are feeling bored, take a step back and consider whether there is something more valuable to do with your time. Perhaps that means automating what you are doing, or perhaps that means stopping. Boredom means you are not engaging your mind, and as a tester that really isn’t the place you want to be. When you are mindlessly typing, you probably aren’t adding a lot of value to your team. Boredom is a good reminder that we probably need to change something; just take a minute to make sure you are changing the right thing.

Changing a Culture

I recently read the book Hillbilly Elegy. It was a sad book in many ways, as the author shared stories about the working-class American ‘hillbilly’ culture he had grown up in. As with every culture there were good things, but there were also many damaging and destructive elements. It has been said that this book helps you understand why many working-class Americans voted for President Trump, and I think that is probably true, but I don’t want to get into a political discussion here. What I would like to talk about is how you might go about changing the damaging elements of a culture like this.

In the book, J.D. Vance talks about how many of the issues that play out in these lives are systemic cultural issues, and how social assistance programs don’t address them – things like it being considered normal to scream and throw things at your spouse when you are arguing. Having more money and education doesn’t really fix these kinds of cultural problems on its own, and the book is difficult to read in many ways because of this. It is discouraging to think that many of our go-to solutions as a society might not be as effective as we would hope – but there is light in this book as well. Vance also talks about how he has seen change happen for some people, and it was always through relationships. Every person he knew who was able to break free from the damaging parts of that culture did so through relationships with people who had experience of healthier cultures.

Now, that was all a very long preamble to say that I think there is a general principle here that we can apply to testing as well. Some of us work in corporate cultures with things that we want to see changed. Things that we don’t like and don’t see the usefulness of. Perhaps even things that are damaging for the company. If we care, we want to see these things change. However, sometimes our go-to solutions (we need management backing – or we need to educate people on this – or we need to nag people about this – or…) don’t seem to be working.

Maybe we need to take a page from J.D. Vance’s book and focus on relationships. It’s a longer, slower, sometimes discouraging path, but if we focus on building open relationships where we can model some of the things we are talking about, perhaps we will be more effective at seeing lasting cultural change happen in our companies. Instead of trying to get management to buy into things so that we have authority behind us, or trying to educate people into the right way of thinking, perhaps we need to focus on strong relationships and change the culture one person at a time. Those other approaches have their place, and we can certainly take a multi-pronged approach to changing a culture, but let’s not forget about one of the most powerful tools in our toolbox – sincere relationship building.

Asking Questions – A Tester’s Superpower

“Why are we planning to work on this feature?”

It had seemed like an innocent enough question, but there was a moment of silence, followed by “Umm, the objective owner thinks that it will help performance.”

Another innocent question:  “Will it?”

Another silence.

“Well, we’re not sure that it will. Maybe a little bit…”

We had recently made a similar change in another area of the code (where we knew it would help performance), and that change had been quite costly in terms of both development and testing time, and had also introduced a limitation in that area. I brought up these points and asked if there was any way we could answer the question of how much this change would actually help performance without investing days or even weeks of work.

This led to a discussion of which profiling tools we had available. We pulled someone from outside the team into this discussion and found out that we had licenses to tools our team didn’t know about. In the course of that discussion he also asked why we wanted profiling tools, and the limitation we had introduced earlier came up. He then said that his team could make some changes to support what we needed so that we could remove that limitation.

After 30 minutes of discussion, the planned work looked very different than it had before I asked my innocent questions. Instead of going ahead with this change, we are instead going to clean up the previous limitation and profile some parts of the finished code that are similar to the new code we would want to add, so that we can have a somewhat data-driven answer to the question of how much this would help performance.

One innocent question led to a lot of change. If you are a tester, asking questions is one of your superpowers.  Don’t be afraid to speak up if something is confusing – you never know how one question might change things!

Practical Steps to Increase Coding Productivity

In my last post I got onto my soapbox and shouted for a while about how testers need to consider increasing coding productivity as one of the goals of their work. It’s always fun to go on a rant for a while, and hopefully it was even helpful for some people, but I’ve jumped off the soapbox now, and here on the ground things look a bit different. I want to do this. I want to increase coding productivity – but how? What are some practical things I can do as a tester to increase the coding productivity of my team?

I want to share some of the ideas I have around this, but before I do that I need to insert a little caveat. These are ideas that I have for my team and my context.  They may or may not make sense for your team and your context.  I share them so that you can see some of my ideas and because ideas generate new ideas.  Maybe you can take some of my ideas and mix them with some of your ideas and produce some cute little idea babies.  But please don’t just blindly take what I’m talking about here and try to do the same thing in your context.

OK, that’s out of the way, so what are some of the things I want to do to try to improve coding productivity on my team?

Fast Tests

I’ve mentioned this before and so won’t belabor the point here, but in general, faster tests are more valuable tests. I want to have fast tests in place. In the short term this means I’ll be going through the tests we have and shortening them where I can, but in the longer run it means pushing tests down to lower levels and running them on smaller units of code. There are so many things we do with high-level integration tests right now that we ought to be able to do with code-level or unit-level tests instead.
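As a rough illustration of what ‘pushing down’ could look like, a check that today might require driving the whole UI can often be expressed as a small unit test against the underlying logic instead. The function and tests below are hypothetical stand-ins, not anything from our actual suite:

```python
# Hypothetical example: instead of driving the UI to confirm that an
# invalid date range is rejected, test the validation logic directly.
import unittest

def is_valid_date_range(start, end):
    """Toy stand-in for the lower-level logic a slow UI test might exercise."""
    return start <= end

class DateRangeValidationTest(unittest.TestCase):
    def test_rejects_reversed_range(self):
        self.assertFalse(is_valid_date_range("2021-02-01", "2021-01-01"))

    def test_accepts_ordered_range(self):
        self.assertTrue(is_valid_date_range("2021-01-01", "2021-02-01"))

if __name__ == "__main__":
    unittest.main()
```

A test like this runs in milliseconds and points straight at the failing logic, which is exactly the kind of speed and focus I’m after.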

Fast Run Times

This might seem related to faster tests (and it is in some ways), but there are many other things that also go into fast run times.  We can run tests in parallel or on faster machines so that we can get through them more quickly.  We can also automate more of the steps.

There are two parts to what I am talking about here. One is reducing the wall-clock time it takes to get through a set of tests, which is a function of the speed of the tests and the ability to run them in parallel. The other part is the effort it takes to get feedback from the tests. Right now we need to manually set up and run certain kinds of builds, and then, once we are notified of their completion, go back and parse through the results. If we could automate those steps – set up and run all the tests, compare failures against a known ‘good’ build, and provide a local run queue with the test failures caused by this change, all without any human intervention – we would be a long way down the path to faster turnaround on feedback.
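To make that a little more concrete, here is a minimal sketch of the ‘compare against a known good build’ step. It assumes, purely for illustration, that each run dumps its failed test names into a JSON file; the file names and formats are made up:

```python
# Sketch: diff a change's failures against a known 'good' baseline run and
# emit a local run queue containing only the newly introduced failures.
import json

def load_failures(path):
    """Assumes each results file is a JSON list of failed test names."""
    with open(path) as f:
        return set(json.load(f))

def new_failures(baseline_path, current_path):
    """Failures present in this run but not in the known good baseline."""
    return load_failures(current_path) - load_failures(baseline_path)

def write_run_queue(tests, path="local_run_queue.txt"):
    with open(path, "w") as f:
        f.write("\n".join(sorted(tests)))

if __name__ == "__main__":
    queue = new_failures("baseline_failures.json", "current_failures.json")
    write_run_queue(queue)
    print(f"{len(queue)} new failures written to local_run_queue.txt")
```

In a real pipeline this would be triggered by the build system after every run, but even a simple diff like this removes one of the manual result-parsing steps.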

The point of having fast run times is so that the feedback can come in a timely manner. Having fast-running tests doesn’t solve the problem if it takes a long time to set up the actual builds and debug the failures. I want to reduce the amount of time it takes from when a developer says ‘I want feedback on this change’ to the point where that feedback is actually available.

Relevant Feedback

Imagine you are making a change to the way error messages are handled in your app and you run some tests to check that nothing is broken. The run comes back with a number of tests that say that message X can’t be found in location Y.  The point of your changes was to have message X show up in location Z instead so this information isn’t really news to you as a developer.  It lets you know that indeed the messaging has moved away from location Y as you expected, but not much else.  This is an example of feedback that is not relevant.  What you really wanted to know was if there were any other messages unintentionally affected by this, or if there were still messages showing up in location Y.

Irrelevant feedback tends to be a big problem in tests, and it comes in different forms. Sometimes it is like the example above, where we are told things we already know or don’t get answers to the questions we wanted to ask. But it can also come in more subtle forms. Sometimes a passing test run is a form of irrelevant feedback. This might seem counter-intuitive, so let’s turn to another thought experiment. Imagine that you made a minor change and ran a full suite of tests on it. However, in that suite only 1% of tests actually execute any code remotely related to your change. The tests all pass, but how relevant is that feedback? Think about all those tests you ran that had no chance of failing. The fact that they passed is in no way relevant to you. Of course they passed. Nothing they checked was affected by your change.

So how do you make tests more relevant? There are a couple of things I want to do with the tests I’m currently working with. One is consolidating things that are common. For example, if we have several tests checking a certain message, we would consolidate them so that if we change something related to that messaging we only need to update it in one spot (ahead of the test run) and can quickly and easily see whether the changes occurred as expected. Another change I would like to make is to leverage tools that show which areas of the code particular tests exercise. This way we can make more intelligent decisions about which tests to run for particular changes. Getting this to an automated state will take us quite a while, so in the meantime we are working on manually defining standard run queues that are more specifically targeted to certain kinds of changes. Having a better understanding of which tests map to which areas of the code will help us get more relevant results from our test runs.
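For those manually defined run queues, even a simple lookup table from code areas to test groups can be a starting point. This is a sketch with made-up area and path names, not our actual mapping:

```python
# Sketch: map the areas of code touched by a change to the test queues most
# likely to give relevant feedback. Area and path names are hypothetical.
AREA_TO_TESTS = {
    "messaging/": ["tests/messages/"],
    "engine/": ["tests/engine/", "tests/integration/engine_smoke/"],
    "ui/": ["tests/ui_smoke/"],
}

def select_test_queues(changed_files):
    queues = set()
    for path in changed_files:
        for area, tests in AREA_TO_TESTS.items():
            if path.startswith(area):
                queues.update(tests)
    # Fall back to running everything if the change doesn't map to an area.
    return sorted(queues) if queues else ["tests/"]

if __name__ == "__main__":
    print(select_test_queues(["messaging/error_codes.py", "ui/dialog.py"]))
```

A coverage-based tool would eventually replace the hand-maintained table, but a mapping like this is cheap to start with and already cuts down on runs that have no chance of telling us anything new.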

I could go on with this, but it seems I’ve been quite long-winded already, so I’ll draw it to a conclusion here. I have a lot of ideas on how to improve my test automation. Maybe that is because there are a lot of problems with my system, or maybe some of these tweaks are the kinds of things you can change in your system as well. Hopefully this article has given you some ideas for changes you can make to help your automation increase your coding productivity.

Improve Coding Productivity

In my last post I talked about coding productivity being one of the considerations for deciding what automation to keep.  I want to dig into this a little more as I’m not sure that it’s something we as testers think about a lot.  We think about things like making sure we don’t let bad builds get out or about “finding all the bugs.” We think about making sure we have good coverage or large automated regression testing suites.  We think about a lot of different things, but in many cases it seems that the stuff we think about is related to ‘test only’ activities.

But aren’t we supposed to be systems thinkers? Aren’t we supposed to be looking at things from a holistic point of view? Think about it. Someone needs to do that, right? In most companies testers are hired to help mitigate risk.1 Companies pay good money to testers because they want some assurance that the code being released is going to be valuable to their customers. However, this isn’t something testers can do on their own. Creating valuable code is a team activity. Are we thinking about things in a holistic, team-based way?

One of the most important things we as testers can do is help improve coding productivity. I’ll repeat myself here: the reality is, creating valuable software is a team activity. Testers need to think about how their actions impact and influence the team. If we spend too much time finding bugs and no time thinking about how to work with the team to introduce fewer bugs, are we really helping the team? If we think about getting good coverage of the features and not about figuring out which interactions and issues actually matter to our customers, are we really helping the team? If we think about preventing bad builds from getting through the pipeline and not about how to change our build and deployment process so that builds don’t get mangled in the first place, are we really helping the team?

We need to think big picture.  Don’t get so caught up in the official tester activities that you forget to look at what is going on. Where are the pain points in getting the code out the door?  What things can you do to help make developers more productive?  How can we as testers reduce the time it takes to get good quality code shipped?

Hint: it probably isn’t by running more regression tests.

Look for things that are slowing down the release of code. There are many things that need fixing.  Some of them you can do something about and some of them you can’t.  Find one that you can and start chipping away at it.  Don’t get so caught up on the hamster wheel of running and maintaining tests and flinging bugs over the cubicle wall that you forget to come up for air and look around. There are pearls to be found, but if the current has pulled you away from the reef, you might need to spend some time swimming back into position. Be a systems thinker.  Zoom out a bit and tackle the problems that are slowing down the system.

Footnotes

1. Whether you agree with this being what testing is about or not, I think the reality is that for many companies this is why they hire testers.

ROI of Less Automation

I want to reduce the size of my automated regression testing suite. It takes too long to run and it takes too much work to maintain. It’s a big, clunky beast and I want to shrink it. The problem is I don’t want to just arbitrarily delete tests. I want to reduce the size in an intelligent way, since there is still value to be had in some of the checks we have in there. Doing this requires work – a fair bit of work at that – and since we are paying people to do that work, it requires money as well.

So how do I know if it is worth it?  What is the return on investment the business will get out of the work it will take to reduce the size of my automated test suite?

It seems like we don’t often ask the question from this angle. Google search queries related to the ROI of test automation and almost everything that comes up will be about how to justify adding tests and how to calculate the return on investment your business will get from adding test automation. However, the reality is that not all tests are created equal. Some tests give you a positive return on your investment and some give a negative return. In addition, the return can change over time: a test that was very valuable when it was made might now be costing you more than it gives you. To satisfy yourself that not all tests are equal, simply turn back to our good friend Google and ask him (her? it? – I dunno) about test automation problems. The millions of hits you get speak to the fact that there are many ways for test automation to not add value.

We have a disconnect here. Many people talk about the ROI of using test automation, and many also talk about how to deal with problematic tests, but are we connecting the two? How can we figure out (even in just a rough way) when there is a positive return on removing test automation?
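One rough way to frame it is to compare the one-time cost of removing tests against the recurring costs those tests impose (the maintenance and machine costs I dig into below). The sketch below is a back-of-the-envelope model only; every number in it is a placeholder you would replace with your own estimates:

```python
# Back-of-the-envelope model: is removing a set of tests worth the effort?
# All inputs are assumptions to be replaced with your own numbers.
def roi_of_removal(hours_to_remove, hourly_rate,
                   annual_maintenance_hours_saved, annual_machine_savings):
    """Return (annual savings, one-time cost, payback period in years)."""
    cost = hours_to_remove * hourly_rate
    savings = annual_maintenance_hours_saved * hourly_rate + annual_machine_savings
    payback_years = cost / savings if savings else float("inf")
    return savings, cost, payback_years

if __name__ == "__main__":
    savings, cost, payback = roi_of_removal(
        hours_to_remove=40, hourly_rate=75,
        annual_maintenance_hours_saved=120, annual_machine_savings=2000)
    print(f"Savings/yr: ${savings}, one-time cost: ${cost}, payback: {payback:.2f} yrs")
```

With these made-up numbers the cleanup pays for itself in a few months; the point is the shape of the calculation, not the specific figures.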

Well, let’s talk about the return side of things. What benefits could your company see from having less automation?

Reduced Test Maintenance

Anyone who has worked in test automation will be able to pretty quickly tell you one of the main benefits – less test maintenance time. Having fewer tests to run means less time spent figuring out failures and updating or changing tests. This seems pretty obvious, but let’s not skip over this point too quickly. There is more to test maintenance than just the time spent reviewing tests. The more tests you have, the more machines you need to run those tests on. Those machines also need to be maintained. You need to apply patches and reboot them after power failures, or you need to buy licenses and time in the cloud. You also need some way to run those tests, which probably means some kind of build system like TeamCity or Jenkins. More tests and more machines mean more complexity in your build system as well. Those setups and scripts need maintenance too.

Most of us already know that test maintenance can be pretty expensive, but have you ever stopped to think about how much it really costs you? You might be surprised!

Reduced Machine Costs

Machine costs have been driving downward since computers made their debut, so we might think this one doesn’t matter as much in today’s world, but once again let’s not skip too quickly over it. There are a number of factors that play into machine costs. In addition to the hardware purchases, we have the ongoing electricity costs of running each machine and the licensing costs for its operating system. We also often have other, more hidden costs like insurance and the time spent setting up the machines in the first place: installing the OS and virus scanners, setting up the machine on the proper domain controllers, etc. All of these costs add up and mean that there are significant savings to the company for each machine we don’t have to dedicate to running automation.

Increased Coding Productivity

This one requires a little more nuance and thought, but I think there is an argument to be made that in many cases reducing your test automation will make developers more effective. This is the opposite of most of the sales pitches you’ll hear about automation, but hey, you should probably use your critical thinking skills when it comes to sales pitches anyway.

Snarky comments about sales pitches aside, this probably isn’t as obvious as some of the other things I’ve talked about so I want to dig into it a bit more. Let’s say you have a test suite that takes a long time (definition:  more than 1 hour) to run. Let’s also say it needs to run before code gets merged into the release branch.

At this point we can think through a simple workflow: a coder submits code for merging and the test suite starts running. An hour later, the build fails for some reason. The coder takes a look at it, and it ends up being due to a missing edge case that a test caught. She makes the fix, re-submits the build, and things get in.

This is exactly why we run automated tests, right? To catch those sneaky issues. However, if we extend this thought experiment a bit more we can see where things start to go wrong. What if you add some more tests that add a couple more minutes to that test run? And now some more. And some more. Keep going with that thought experiment until it takes 5 hours to run the test suite. Now when the developer runs the tests she hits the same issue, but she doesn’t find out about it until the next day because the run took so long. Does waiting this long help or hinder her productivity?

Of course there are many ways we try to mitigate this kind of problem (parallel test runs, smoke tests vs. daily tests, etc.), but at the root they are all attempts to solve the same problem: increasing the run time of a test suite slows down developer productivity. The converse is that by reducing the run time of your test suite, you might just be increasing developer productivity.
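A toy version of this thought experiment makes the effect easy to see. Assuming, just for illustration, that a fix takes about half an hour and the suite must pass before merging, the number of find-fix-rerun cycles that fit in a working day drops off quickly as the suite grows:

```python
# Toy model (all numbers are assumptions): how many find-fix-rerun cycles
# fit into an 8-hour day as the test suite's run time grows?
def cycles_per_day(suite_hours, fix_hours=0.5, workday_hours=8):
    """Complete suite-run-plus-fix cycles that fit in one working day."""
    return int(workday_hours // (suite_hours + fix_hours))

for suite_hours in (0.25, 1, 2, 5):
    print(f"{suite_hours}h suite -> {cycles_per_day(suite_hours)} cycles/day")
```

With a five-hour suite the developer gets at most one shot per day, which is exactly the ‘find out the next day’ scenario described above.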

So how is your test suite? Is it worth investing in making it smaller? Could you help the business by running less test automation? Count the costs of your test automation – it just might surprise you how expensive it really is!