How to Get Rid of Tests (Smartly)

I wasn’t feeling very well.

It had seemed like a good idea at the time, but now I was regretting it. That morning I had woken up early while everyone else in the family was still asleep and quietly crept down the stairs and headed to the kitchen.  I opened the refrigerator door and there it was: a full carton of eggnog.  I pulled it out of the fridge, pushed a chair over to the cupboard, climbed up and found a cup, and poured myself a glass of delicious, creamy eggnog. After the glass was finished I had another, and then another until the entire carton was gone. It had been so delicious, but now my stomach was telling me I shouldn’t have done it. I will spare you the details, but that day I quickly came to realize that you can have too much of a good thing.

Kind of like automated tests.

They are a good thing. They are very helpful, but have you ever been in a situation where you just have too much of a good thing? What happens when you have hundreds or even thousands of end-to-end or integration tests?

You start to realize that automated tests aren’t free.

Too Many Tests

How much time are we spending on these automated tests? Well, let's add it up. We spend time fixing tests that fail for expected reasons. We spend time debugging intermittent failures. We spend time patching and updating VMs. We spend time creating and maintaining test builds. We spend time figuring out why things have broken. We spend time dealing with requisitions to get new machines in place. We spend money on buying those machines and operating them.

There are a lot of costs that go into tests, and the above list is certainly not exhaustive. A lot of those costs can be hidden, but when you have too much of a good thing you can't ignore them anymore. Your team is overloaded with work, and that forces you to realize that automated tests are not free.

So what can you do about it?

You drank the Kool-Aid eggnog, and now you have too many tests. How do you go about fixing the problem? Those tests were initially added for a reason – somebody at some point thought it would be a good idea to add each of them – so how do we reduce the number of tests? We don't want to just arbitrarily delete tests, but we are already overloaded, so we don't really have time to go through and carefully evaluate them to figure out which ones to get rid of.

What we are face to face with here is something known as ‘the real world.’  There may be some perfect level of automation coverage and there may be some way to get there, but the reality is you can’t get to perfection.  You just need to get better. Tackle the problem one bite at a time.

Find the most expensive tests

We outlined above some of the expenses that go into tests. All you need to do is find out which tests are the most expensive. Easy, right? Well, maybe not, but there are some strategies that can help you focus your attention when cleaning up tests:

Long running tests. That test that runs for 20 minutes – it's expensive. The time it takes to run and debug that test is high. One really quick way to pick which tests to clean up or remove is to sort your tests by run time (see the sketch after this list).

Flaky tests. That test that fails off and on for no apparent reason – also expensive. You have to look into the failure again and again, and eventually you learn to ignore it. Do you really need that test? Can you isolate some of what it is doing so that you only check the thing you are interested in? These tests are also easy to identify (just look through the test reports for how often each test passes or fails), and they are high-value tests to clean up or remove.

High maintenance tests. Tests that constantly need to be updated are another expensive class. These can also be reasonably easy to detect if you version control your tests: just look through the version control logs and see which tests are changed the most. Could you remove some of these, or change what they are checking so they don't fail for things you don't care about? Or could you perhaps check the things these tests are looking at more easily with a quick manual check of the product (gasp!)? Not everything needs to be automated.
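To make this concrete, here is a rough scripting sketch of how you might pull these three signals together. It assumes JUnit-style XML reports collected under a reports/ folder, test files under tests/, and a Git repository; the paths, layout, and naming are illustrative, not a prescription.

```python
# Rough sketch: rank tests by the cost signals above.
# Assumes JUnit-style XML reports saved under reports/ (one per run),
# test files under tests/, and a Git repo. Paths are illustrative only.
import glob
import subprocess
import xml.etree.ElementTree as ET
from collections import defaultdict

durations = defaultdict(list)  # test id -> run times seen across reports
failures = defaultdict(int)    # test id -> number of failed runs
runs = defaultdict(int)        # test id -> number of recorded runs

for report in glob.glob("reports/**/*.xml", recursive=True):
    for case in ET.parse(report).getroot().iter("testcase"):
        test_id = f"{case.get('classname', '')}.{case.get('name', '')}"
        durations[test_id].append(float(case.get("time") or 0))
        runs[test_id] += 1
        if case.find("failure") is not None or case.find("error") is not None:
            failures[test_id] += 1

# Long running tests: sort by average run time.
slowest = sorted(durations, reverse=True,
                 key=lambda t: sum(durations[t]) / len(durations[t]))
print("Slowest tests:", slowest[:10])

# Flaky tests: sort by failure rate across recorded runs.
flakiest = sorted(runs, key=lambda t: failures[t] / runs[t], reverse=True)
print("Most failure-prone tests:", flakiest[:10])

# High maintenance tests: count commits touching each test file.
def churn(path):
    log = subprocess.run(["git", "log", "--oneline", "--", path],
                         capture_output=True, text=True)
    return len(log.stdout.splitlines())

test_files = glob.glob("tests/**/*.py", recursive=True)
busiest = sorted(test_files, key=churn, reverse=True)
print("Most frequently edited test files:", busiest[:10])
```

None of this tells you which tests to delete, but it gives you a short, data-backed list of candidates to start the conversation with.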

There are more strategies that you can use to help you prioritize your test cleanup activities, but we'll stop here for now. What about you? Do you have too many tests? Have you started to realize that you need to do something about it? What strategies do you use to approach test cleanup?

The Need for Creativity

The other day I was watching a show on TV about how food is made and I started thinking about software testing (because that's what normal people do, right?).

So much of what we have in our society is made on assembly lines, and food is no exception. That candy you bought? Produced in an assembly-line environment. The loaf of bread and container of milk? Same thing. We still have artisanal food, but the reality is that most of what we eat is at least partially made in a factory somewhere.

How do you find meaning in what you do in the assembly line world? How do you find meaning in just being a step in a process? Do you actually feel like you have made something when, at the end of the day, you've rolled out 240 candy canes per hour? I think that we as humans have an innate desire to create, but does a process like this allow you to create? Can we find meaning and fulfillment in a job like this? Can we meet our need to create things when most of what we build as a society requires the input of many different people?

I think we need to think about these questions, and not just in the assembly line context. These kinds of questions matter in the software world as well. It isn’t just physical goods that are too complex for most of us to make on our own.  The software systems we build require teams as well.  We may call this creative work, but very few of us create entire products. We work on one small piece that fits in with other pieces. Can we fulfill our need to create in this context?

What about those of us whose primary role on the team is to test? What do we create? How do we scratch that creative itch?

I think this is why a common complaint among testers is that they get left out of decisions about what the product will do or look like. It’s why we talk about the need to shift left. It isn’t just because those things are helpful (although they usually are). It’s because we have a need to create something and to be included in the creative process. If all we are doing is checking someone else’s work to see if they did it correctly, we have no job fulfillment.

Before I started in the testing job I have now, I worked in a couple of QA jobs in factories. I hated them. Take this batch of paint. Measure it for various properties.  Compare to the specification table the customer had provided and accept or reject. Boring work with nothing to point to. At least if I worked at a factory that made candy canes, I could look at a candy cane in the store and be proud of the fact that I had twisted two colors together perfectly. If I looked at a can of paint, all I could say was that I had told everyone they did their job correctly. Not only was the job boring, it felt like it wasn’t tied to something I was creating.

Now back to software testing. If my job as a tester is to compare the work someone is doing to a specification and accept or reject it, I'm not going to feel like I am producing something of value. We want to be involved in making something. We want to be able to point to the system proudly and say "I helped make that!" The human need for creativity is why we testers want to be involved in more than just checking if the product does x or y, but – and this is important – it is also why your team needs a great tester. Every member of the team wants to be able to point to the software and be proud of the fact that we made something great. If we can't do that we will be demoralized as a team, and we aren't going to produce a lot of value in the long run. Testers help with this! Value your testers. Listen to them. They share the same desire you do: the desire to make something we can all be proud of. Let's do this together!

What Hume Can Teach Us About Automation

David Hume’s Guillotine tells us that we cannot derive an ought from an is.

Abstract philosophical musings, you might think, but then again, you work in software, so you know that philosophy matters, right? I want to muse on this for a while, and not in an abstract realm. I want to take this down into the real world. I'm going to take the law at face value: if it really is true that an ought cannot be derived from an is, what should we do differently?

“How can I automate this?”

“Should I automate this?”

Two similar sounding questions, but which one you start with makes a world of difference. How often do we start with the is? We start with the fact that we have a framework in front of us that lets us easily add tests, or the fact that we are told to automate everything, or even the 'fact' that manual testing is too slow or too cumbersome for the modern software shop. But aren't we forgetting something? Aren't we forgetting the law our friend Hume told us about? Just because we can do something doesn't mean that we should.

This applies at every level of software development (because what is software development except automation?).

Should you make this product?

A number of years back the company I work at started a new initiative. We had acquired two previously competing products, and after the acquisitions we decided to try to combine the best of both products and re-architect everything into a new product. It seemed like a great idea and we put a lot of effort into it for several years, but recently the product was canceled. Why? Because we had decided that this product did not need to be made. Some people were interested in it, but market analysis and other factors indicated that it was not worth further effort. We might have been right or wrong about the market forces at play and the strategy employed, but this does illustrate something. Could we make this product? Sure. Should we? Not if it won't help people. Not if it will lose us money.

Just because you can do something doesn’t mean you should.

Should you automate this?

What about an example that is a little more related to test automation?

At one point I worked with a team that did workflow tests. These were scripts that we would run through every morning to check if the nightly builds were OK. As you can imagine, they were boring to do, and so we decided to automate them. Makes sense, right? Automate away the boring stuff. We don't want to spend half an hour every morning working through the same boring scripts. So we went ahead and did it. We automated them, but should we have?

I’ve talked before about how boredom can be an indicator that you should automate something, but that it can also be an indicator that you are doing something useless.  So which is it here?  I think a bit of both actually.  We automated them all at that time, but since then we have removed or changed many of them.  There are still a few that have stayed around because they are valuable, but we certainly didn’t need as many as we had.

This once again illustrates the point that just because you can automate something doesn’t mean you should.  If we start from the is – is this possible?  Does a framework exist that allows me to do this? – we never get to the ought –  does this make sense?  Will this be valuable?  If we had started with ought questions we would have just gotten rid of half the scripts we had and automated the few that were actually valuable. Instead we started with the is questions and ended up spending a lot of time creating and maintaining tests that we really didn’t need.

We need to start with the ought.  In the software world we (understandably) love using software to do things.  Sometimes that is the wrong answer.  Sometimes there is a non-software solution to something. Sometimes we are just doing the wrong thing and so setting up systems to do it faster isn’t helping anyone.

Start with ought. Know why you do what you do. Take a minute to think things through before you plunge in and start doing. If you know the ought, figuring out what to actually do becomes much easier.

Book Review – Systemantics

I recently read through Systemantics by John Gall (or the Systems Bible as the newer editions are called). Although presented in a very humorous and entertaining way, this book is packed with ideas that make you stop and think.

Why don't things work the way you expect them to? Well, this book will tell you. It might seem discouraging to learn that a "Complex System cannot be 'made' to work. It either works or it doesn't," but when you think about it, it is easier to (principle 31) "align your system with human motivational vectors" than it is to keep banging your head against fundamental systems laws.

And never forget that “systems will display antics.” Don’t be surprised when the system doesn’t do what it is designed to do.

This book might make you a little cynical, but you'll probably get further ahead in the world if you understand why you are feeling frustrated by the systems you are in. Sometimes a dose of realism is good for us, right? I would highly recommend this book to anyone who works with systems (i.e. all of us).

Writing Automation is Easy

“It’s easy to write automation”

As the thought jumped through my mind, I had to tip my chair back, look up at the ceiling and ponder.  Is that true?  Is it easy to write automation? As I stared at the grey painted ceiling and chewed on the thought, some examples came to mind that illustrated the truth of this.  It really is easy to write automation.

I had put together a new test in half an hour, and even while I was doing it, I was thinking about how much of the test creation work could itself be automated. With a simple template and a few shared functions we should be able to make it even easier to create an automated test (a rough sketch of what I mean follows these examples). Creating a test is easy.

I had a new co-op student and after a bit of training on our automation system he was able to add several new tests in a day.  Creating a test is easy.

We have thousands of tests in our automation system.  How did we get so many? Well, creating tests is easy.
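To give a concrete flavour of what I mean by a template plus shared functions, here is a minimal, hypothetical pytest-style sketch. None of the helper or product names come from our actual framework, and the fake app objects exist only so the example runs on its own.

```python
# Hypothetical sketch of a test "template" built on a few shared functions.
# FakeApp/FakeProject stand in for the real application so the sketch runs.
import csv
import pytest


class FakeApp:
    """Stand-in for the application under test."""
    def __init__(self):
        self.user = None

    def close(self):
        pass


class FakeProject:
    def __init__(self, name):
        self.name = name

    def export(self, path):
        with open(path, "w", newline="") as f:
            csv.writer(f).writerow(["project", self.name])


# --- shared functions: every new test reuses these instead of rewriting setup ---
def launch_app():
    return FakeApp()


def login(app, user="test-user"):
    app.user = user


def open_project(app, name):
    return FakeProject(name)


# --- the template: a new test only fills in the arrange/act/assert part ---
@pytest.fixture
def app():
    app = launch_app()
    login(app)
    yield app
    app.close()


def test_export_creates_a_file(app, tmp_path):
    project = open_project(app, "demo")
    project.export(tmp_path / "out.csv")
    assert (tmp_path / "out.csv").exists()
```

The point isn't the specific helpers. It's that once the shared pieces exist, a new test is a handful of lines, which is exactly why adding tests feels so easy and why the hard part (deciding what is worth checking) is so easy to skip.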

The Struggle

But, if it is so easy, why do so many people struggle with test automation? Why is it an ever popular subject of books, blogs and conference talks? Why do we spend so much time and effort on this?

Well, the answer is simple.  Test automation is really hard.

Wait, what? What kind of weird world do I live in? Didn't I just say it was easy? Well, not quite. Adding a new test is pretty straightforward. Adding a test that is going to be both valuable and low cost – that's a whole different story. Adding tests is easy. Adding good tests? Not so much.

Remember that test I added in half an hour? Easy, right? Well, I ended up making several major edits and changes to it based on review feedback. Why? Because adding a test is easy, but adding a good test is hard.

Those tests the co-op student put in? We spent a lot of time together figuring out what kinds of things to check in them and how to make them work well. Adding the tests was easy; making them good tests was hard.

Our automation system does indeed have thousands of tests and we are struggling with how to keep up with them. How do we change and tweak them so that they are more valuable? Once again, adding tests is easy, but adding good tests is hard.

How do we fix automation problems?  Do we need to hire more automation engineers? Do we need a new framework? Do we need to get more developers involved?  Maybe. But there is one thing that we really need.  We need skill. Changing tools and processes can help, but to get good automation you need skilled automators.

We need automation craftsmen and women.  We need people who know and understand how to write high quality automation.  People who know how to use automation to actually lower the costs of  testing.  People who know how to write the kinds of tests that actually add value.

You want to stand out from the crowd?  Don’t just write automation – write good automation. Study automation.  Learn how to leverage it.  Learn how to clean up automation messes. Learn how to move automated tests from mediocre to great. Figure out what makes for good automation and then find the tools you need to help you achieve that.  If you are a student of automation and you can craft useful and valuable tests, you’ll have nothing to fear. Machine learning and AI are just additional tools in your toolbox.  If all you do is turn test cases into automated scripts – well, I’m sorry, but that’s easy and your job security is low. Learn how to write good automation before the machines come for you.

Writing Automation is easy. Writing good automation is hard.  Do hard things!

30 Days of Agile Testing – RED BUILD!

Note that this post is part of a series where I am 'live blogging' my way through the Ministry of Testing's 30 Days of Agile Testing challenge.

What actions do we take when there is a red build? Well, what a timely question! I've just spent the last couple of days trying to figure out a few different build issues. The story illustrates one set of responses to a red build, but it also shows that there isn't just one answer to the question of what we do when there is a red build.

Thursday I noticed that one of the builds I use to check a lower-level package was red. It wasn't just one or two tests failing. Every single test was failing. Clearly something was going badly wrong. I spent some time digging into it and finally realized that part of the package wasn't getting extracted correctly during the setup. After some more time (and frustration), I finally figured out that the issue boiled down to the regression VM using an older version of 7zip. Apparently the build machine creating the package had been updated to a newer version, and so the old version on the machine I was using couldn't properly extract the package. I updated the version of 7zip and re-ran the build. Everything was passing, so I posted an artifact to get picked up in the final build process. Everything is good now, right?

Wrong.

Friday morning I came in to find that instead of the build picking up an artifact from Thursday (as it should have), it had picked one up from May?!?! Stop the presses! More sleuthing required! We stopped the build from progressing and started digging into it. The problem ended up being another machine that had the wrong version of 7zip installed. This machine had also not cleaned up properly at some point and so had an old file hanging around that it could (and did) use. We fixed the 7zip version and updated the scripts to make sure they were correctly cleaning things up, and now *touch wood* everything is running smoothly again.
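For what it's worth, the kind of guard we added to the setup scripts looks roughly like the sketch below. It is not our actual script: the minimum version, the workspace path, and the banner parsing are all illustrative. But it shows the two fixes, failing fast on an old 7zip and clearing out stale artifacts before extracting anything.

```python
# Illustrative setup guard: check the 7-Zip version and clean the workspace.
# The version threshold and paths are made up for this example.
import re
import shutil
import subprocess
from pathlib import Path

MIN_7ZIP = (19, 0)             # oldest 7-Zip version we trust
WORKSPACE = Path("workspace")  # staging area where packages get extracted


def check_7zip_version():
    # Running 7z with no arguments prints a banner that includes the version.
    result = subprocess.run(["7z"], capture_output=True, text=True)
    match = re.search(r"7-Zip[^\d]*(\d+)\.(\d+)", result.stdout)
    if not match or (int(match.group(1)), int(match.group(2))) < MIN_7ZIP:
        raise RuntimeError(f"7-Zip {MIN_7ZIP[0]}.{MIN_7ZIP[1]} or newer is required")


def clean_workspace():
    # Remove anything left over from a previous run so an old artifact
    # can't be picked up by mistake.
    if WORKSPACE.exists():
        shutil.rmtree(WORKSPACE)
    WORKSPACE.mkdir(parents=True)


if __name__ == "__main__":
    check_7zip_version()
    clean_workspace()
```

It's a small thing, but a check like this turns a confusing all-red build into an immediate and obvious failure message.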

The point of this story is to show that the things we do to deal with red builds vary. Normally we wouldn't stop all other work and focus all our energy on fixing the build, but in this case the red build was of the 'Nothing works!' category and so the steps taken were more drastic. For the 'normal' day-to-day red build, where a test or two is failing, our approach would be different. We would look into it and follow up, but if the issue was small enough we would let the build pipeline continue and just follow up with a fix. Or if we caught the issue early enough, we might just quickly revert a change and things could continue on as expected. The approach to a red build can't be strictly prescribed and often requires exploration to figure out.

The lesson? Even when it comes to red builds, the context matters!

30 Days of Agile Testing – Work Tracking

Note that this post is part of a series where I am 'live blogging' my way through the Ministry of Testing's 30 Days of Agile Testing challenge.

Today’s challenge asks what columns we use on our work tracker or kanban board.  To be honest we don’t use columns at all….

I know, I know, bad us, right? Perhaps so. This probably would be something worth trying, but for some reason we have never gone down this road. I'm not sure why, but it hasn't risen up as a high-priority thing to try. Perhaps those of you who do use a kanban style of work management could share: does this transform the way you work? Please leave a comment with your experiences! We are trying to move towards a more agile way of working from a process that is, frankly, quite waterfall in many ways. Is this something that would be helpful for us in this journey? Trying new things takes time and energy. Is this something that is worth the time and energy it would take?