30 Days of Agile Testing – Introduction

The Ministry of Testing has been running 30 Days of Testing challenges on various aspects of testing. This coming month I thought I’d try my hand at one. The way these challenges work is that you work through a list of 30 items and try to check one off each day of the month.

The challenge for September revolves around Agile testing.  The company I work at is, shall we say, nominally agile, but I could certainly stand to grow and learn in this area.  I hope to follow along and ‘live blog’ with my progress each day (although I can’t promise how things will go on weekends).  Some of the items on the list will be quite challenging in my context and some of them look like things that might take more than a day to do, but I’ll try my best to keep up.

If we want to get better we need to push ourselves, and since we live in a world where we are pulled in fifty different directions at once, having a goal like this to work on for the coming month will be good for me. It will give focus and direction to my learning and push me to try things I wouldn’t otherwise do. I hope to grow and learn a lot through this. Always be learning.

When Nice People Surprise you

“Oh wow, he’s actually a really nice guy.”

The thought flitted across my mind as I was chatting to him for the first time. I had heard things about this team and the way they worked. The teams I work with have their challenges and struggles, but from all accounts this team was even more dysfunctional.

As the thought went through my head, the stark harshness of it made me realize that I had been equating poor development practices and approaches with poor intelligence and even poor character. But as I talked to this guy, I realized that here was a highly intelligent, pleasant, likable person.  Why had I assumed that he would be unpleasant or annoying?  Simply because I couldn’t imagine how someone ‘like me’ could use the approaches his team was using.  I couldn’t imagine that a ‘nice guy’ could think that a process I saw as fundamentally flawed was a good and healthy process. I just couldn’t imagine that ‘wrongness’ (in my mind) and ‘niceness’ could go together.

And don’t we all do that in so many areas of life? Aren’t we so quick to demonize those that have different viewpoints and beliefs than us? Whether it is in the religious, political or social realms, or even if it is just in talking about different schools of testing, we are very quick to start demonizing those on the other side. It is incredibly hard to hold to the truth that someone can be both wrong and a good human being at the same time. Holding those two things in tension makes us uncomfortable. It is much easier to just dismiss out of hand those that are of a different stripe than us, but the reality is we can learn things from those that are different than us. We will also deal very differently with those we see as fully human, rather than some distorted one-sided image we have in our head. For healthy change to happen we need to be dealing with ‘real’ people, not caricatures we have invented.

So who are you demonizing? Who on your team, in your company, or elsewhere are you making out in your mind to be a horrible person not worth dealing with, simply because they believe different things than you? Reach out to them. You might just be surprised at how nice they are.

Automation Anti-Patterns

You are sitting at your desk staring at a long list of test failures. Some of them have been failing for a few days (at least you’re trying to convince yourself that it has only been that long), and some of them are new failures. You are trying to figure out if it is safe to merge in that bug fix and you just don’t know anymore.

What went wrong? Aren’t your tests supposed to help you move more quickly? Why are they slowing you down? Your job as a tester is to help quickly figure out if there are quality problems so that your team can keep producing high quality software. Your automation is supposed to help you with this, but it really isn’t. What went wrong? And more importantly – what can you do to fix it?

Want to find out more? Come to my talk on Automation Anti-Patterns and what to do about them. I’ll be giving the talk right here in my backyard at the KWSQA Targeting Quality Conference. This is a great conference (I have been several times) and if you are looking for one that won’t destroy your budget, this is the place to go (seriously, only $339!). I’m also giving this talk at the Better Software East conference in Orlando in November. If you are interested in signing up for that one, you can use my speaker discount code (BE17DW11) to let them know you signed up through me, and just like that they’ll take $400 off the conference price for you (and I get entered in a contest for a prize – win/win you know).

So come on out! Learn how to recognize those patterns in your automation that you really don’t want, and learn some strategies for dealing with these anti-patterns. I’ve written a lot about how good test automation is hard, and this talk comes from years of experience with it. I’ve seen a lot of test automation problems and have even been able to fix some of them, so I’ll be sharing some of the lessons I’ve learned. See you there!

When do you cut your Losses?

I was getting frustrated. I thought it would be a simple tweak to add this check to an existing test. All I wanted to do was duplicate an object, use it in two different places, make sure the display name was the same in both of them, and then delete it. It should have been quick and easy – 15 minutes at the most, so why was I still working on it four hours later?

It was time to retrospect (well ok – it was past time to retrospect). What had gone wrong? Why was I in this situation and how could I avoid being here again in the future?

What I had thought would be a simple tweak to an existing test turned out to be much more complicated. When I added it in, the application started crashing during the test run. “Oh, I’ve found a bug.” But no, when I tried to reproduce it outside of the test system, there was no crash to be seen. I tried various things and asked some questions, and we think we know why this is happening (an internal counter getting confused, plus a difference in the way certain types of errors are passed around in the test and live systems). I was able to figure out a way to add the check to the test, but it had taken four hours instead of 15 minutes. Was it worth it? Was it worth spending that much time on adding a simple little check to a test?

Good – I learned something about the system

Those numbers logged in that object? Yeah, they mean something, and yes they are magic numbers, and yes this area of code is not very well written. I did learn some important and useful things about this area of the code, including that we don’t understand it very well.

Bad – Four hours is a long time to do what I was doing

The actual check I put in is pretty low value.  It is unlikely that we will fail in this way again as we are checking something that was just fixed.  The only reason I was adding the check in the first place was that I had thought it would be very low cost to do so. Adding something that is low in value is ok if the cost is also low.  Adding something low in value when it takes half my day to do so is not the best use of my time.

On balance – I shouldn’t have spent this much time

Looking back on it, the time spent for the value gained was not worth it. Even if you add up the value of the check I added along with the value of the things I learned, I still don’t think it was worth it. You can always salvage something from a situation though, and here that means learning how not to repeat this. What could I have done to realize that it was time to cut my losses? What would have been a better way to approach this situation?

Two Possible Options

Option one is to set a time limit for ‘low value’ stuff like this. I could have said before starting that this check isn’t very high value and so I won’t spend more than half an hour or an hour working on it. That probably wouldn’t have worked very well in this case though, because I was running into what I thought was a defect (and to be honest it kind of is, just not one we will fix), and so the time I spent was mostly on trying to track down the bug.

Option two is to use this as an opportunity to leverage some high value work out of the time. Part of the reason it took so long was that the test I was adding this check to wasn’t the easiest test to debug, and I don’t have a system in place for easily figuring out what to do when there are differences between the test system and the ‘live’ system. I didn’t take this approach, but in the future if I run into a similar situation again I will try to use the opportunity to overload what I am doing.

Cut your Losses

So when do you cut your losses? I’m not exactly sure yet, but I think one other thing that needs to be kept in mind is that if something is taking much, much longer than originally planned or intended, it needs to trigger a response to stop and think about what is going on. Am I still solving the same problem? Is this problem still worth solving? Should I cut my losses and go? Should I double down and overload the process to still gain value out of what I’m doing?

We humans have a bias known as the sunk cost fallacy and so it is very easy to get sucked into the vortex of spending more time on something just because we have already put time into it, but sometimes the fastest way forwards is backwards. If we are digging ourselves into a hole, more digging isn’t going to get us out. It might be time to stop and start building ladders instead.

Lesson learned:  retrospection is helpful and if I find myself in a time overrun situation for a given task, I should stop to retrospect sooner rather than later!

Being In Control

As a parent of three young children, I spend a lot of time telling them what to do and not to do.

“Eat your breakfast”

“Don’t throw your crayons in the toilet”

“Say thank you”

“Don’t hit your sister”

And the list goes on and on.  I’m sure any of us with kids can relate to this.  We spend a lot of time as parents teaching and enforcing rules.  Parenting is hard and (kind of like testing) much of it must be learned on the job. You can read all the books you want, but it isn’t until you are interacting with the hearts and minds of real life humans that you realize just how little you know and how much you have to learn.  One of the things I’ve been thinking about recently is my desire for control and how much of my parenting revolves around that.  I want my kids to do certain things and act certain ways, not in the first place because it will be good for them, but primarily because I want to be able to know and predict what they’ll do.  I want to be able to know that I can go to the grocery store without a meltdown.  I want to be able to sit and read a book without needing to break up a fight.  I want to be able to take my kids out in public and never feel embarrassed by something they say or do.

These aren’t bad things in themselves, but when my primary way of dealing with my kids becomes about how I can control them so that they will do what I want them to when I want them to do it, am I building a healthy relationship?  Is that really the way I want to interact with my kids, and in the long run is that really going to work?  Right now I have the advantage that they are kind of in awe of me and will do what I say (sometimes). That won’t last forever.  If I want my kids to grow up to be kind, generous, contributors to the good of society, are they going to do that through a program of control?

I want to leave the parenting questions for a minute and switch gears. What about at work? How much of what we try to do revolves around control? I want that coder to build the API this way. I want that manager to agree to my way of seeing the product. I want that other tester to stop being so focused on detailed up-front test plans and get into exploratory testing. Why do we get so worked up about stuff like this? Isn’t it often because of a desire for control?

We don’t like not knowing what is going to happen and so we try our best to control those around us, but just like in parenting, is that the best way to build relationships and see people become the best they can be?  If we get better at playing power games and forcing people to do what we want are we really helping to build a team that will stand out in the long run?  There is a difference between control and influence.  We ought to use our influence for good, but I think we would do well to watch out for our tendency towards control.

When control becomes the focus, relationships can be hurt.  Keep the relationship at the center and try to let go of your need for control.  Not easy to do, I know – I struggle with it too, but let’s work on it together ok?

Getting Fast and Consistent Automation

In my previous post, I promised to expand on how to clean up your automation so that it could help you be both fast and consistent. In this post I want to share some principles that might help you turn your automation into something that combines the speed of a hare with the consistent pace of a tortoise.

We’ll start with the speed.  What can you do so that your automation helps you move more quickly?

Speed Principle 1 – Don’t Fail

Test failures slow you down.  They require debugging and understanding where the failures came from. Now, of course you want your tests to fail when you have introduced defects but realize that every test failure is going to slow you down.

So what do you do if you have a test suite that fails too often?

Could I suggest that step one is figuring out how to gather data? What data can you get from your test reporting system? Can you mine anything from git log that will tell you how often you need to change certain tests? Once you’ve figured out the data part, it should be pretty straightforward to figure out which tests are problematic and target those for refactoring. Also, if you’ve figured out an easy way to gather the data, you should be able to keep on top of this as the project progresses. Keeping your tests useful should be an ongoing job.
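To make the git log idea a bit more concrete, here is a minimal sketch in Python. It assumes your tests live under a tests/ directory and that a simple “how often has this file changed?” count is a good enough proxy for “problematic” – adjust both to fit your own repository.

```python
# A minimal sketch of mining git history for test churn. It assumes your
# tests live under a "tests/" directory -- adjust the path filter and the
# history window to match your own repository.
import subprocess
from collections import Counter

def churn_by_test_file(since="6 months ago", path_filter="tests/"):
    """Count how many commits have touched each test file since `since`."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    touched = [line for line in log.splitlines() if line.startswith(path_filter)]
    return Counter(touched)

if __name__ == "__main__":
    # Print the ten most frequently changed test files -- likely refactoring targets.
    for path, changes in churn_by_test_file().most_common(10):
        print(f"{changes:4d}  {path}")
```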

Speed Principle 2 – Fail sooner

If you really must let your tests fail, the sooner they do so after the offending changeset the better. This applies whether the failure is for a bug or not. If it is for a bug that was just introduced, the sooner we know about it the easier it is to fix, and if the failure just requires a test update, the sooner we know the easier it is to debug the test and figure out what needs updating.

So how to go about improving this? For tests to fail sooner they need to be run sooner. This means you need to figure out why they are not running right away.  Perhaps it is a deliberate thing where you gradually expose the code to wider audiences and run more tests at each stage. In that case you should also have tests that are less and less likely to fail as you progress through each stage.

However, in many cases I suspect the reason we don’t fail sooner is that our tests take too long to run. We don’t have the resources to run them close to the code changes and so they don’t fail early. There are a couple of solutions to this. One is to reduce the run time of your tests; another is to increase the machine resources so that you can run more tests.
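One way I could see approaching this (your mileage may vary) is to split the suite into stages with markers, so a small stable subset runs right after every change and the slower tests run later, possibly in parallel. Here is a sketch assuming a pytest suite – the marker names, the client fixture, and the pytest-xdist plugin are all assumptions, not things your project necessarily has:

```python
# A sketch of staging tests with pytest markers so the fast, stable subset
# runs right after every change and the slower tests run later. The marker
# names and the "client" fixture are assumptions for illustration; register
# the markers in pytest.ini (or pyproject.toml) to avoid warnings.
import pytest

@pytest.mark.smoke
def test_login_page_loads(client):
    # Quick, stable check that runs on every commit.
    assert client.get("/login").status_code == 200

@pytest.mark.slow
def test_full_checkout_flow(client):
    # Long end-to-end scenario that runs in a later pipeline stage.
    ...

# Fast stage, run close to the code change:
#   pytest -m smoke
# Slower stage, run later (in parallel if pytest-xdist is installed):
#   pytest -m slow -n auto
```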

Speed Principle 3 – Don’t have annoying tests

Tests that are difficult to run or understand are not going to help you move more quickly. If it takes a complex build setup to get your tests to run and only three testers on the team know how to run them, do you think they are going to be able to help you move quickly? No, of course not. We need to have tests that are easy for the whole team to run and use. The usability and accessibility of your test framework is very important.

To run quickly you need to have fast feedback, and this means that developers need to run the tests. If you have a separate team writing your automated tests, you will need to pay special attention to making sure these tests are not annoying for the developers to use and run. Let’s face it, human nature being what it is, they will be unlikely or reluctant to use something they don’t see value in. Having tests that are difficult to run, whose output they can’t understand, or that are very difficult to debug will make it much less likely that they get run early. Without early feedback on the impact of their changes, it will be hard to move quickly.
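One small thing that can lower the annoyance factor is hiding the setup behind a single, memorable command. Here is a hypothetical sketch – run_tests.py and the default options are my own choices, not something your framework requires:

```python
#!/usr/bin/env python3
# Hypothetical single entry point (run_tests.py) so anyone on the team can
# run the suite with one memorable command. The pytest options are just
# sensible defaults -- swap in whatever your own framework needs.
import subprocess
import sys

def main():
    # Default to the fast subset; pass any extra arguments through to pytest.
    args = sys.argv[1:] or ["-m", "smoke"]
    cmd = ["pytest", "-q", "-ra", *args]  # -ra prints a concise failure summary
    print("Running:", " ".join(cmd))
    return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(main())
```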

Consistency Principle 1 – Encourage good coding behavior

Integration tests can be somewhat dangerous. They can encourage making tests pass at the expense of good development practices. Let me explain. Imagine you have two components that need to talk to each other through an API. If all your tests are at the integration level, checking that these two components work together, you could encourage sloppy coding at the interface between the two components. For example, if a defect comes along that needs to be fixed quickly, the developers might be tempted to put in a fix where one component directly calls a class from the other, rather than changing the API. If you have tests that help define the interfaces between these different parts of the product, you will make it harder for people to ‘cheat’ in their coding, as the tests themselves help pin down the interfaces.
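To make that a little more concrete, here is a sketch of what an interface-level test might look like. The names are invented for illustration; the point is simply that the test exercises the declared contract between the components rather than the full integration:

```python
# An illustrative interface-level test between two hypothetical components.
# InventoryService and Reservation are invented names; the point is that the
# test pins the agreed contract, so a "quick fix" that bypasses the API breaks
# a cheap, obvious test instead of slipping through unnoticed.
from dataclasses import dataclass

@dataclass
class Reservation:
    sku: str
    quantity: int
    confirmed: bool

class InventoryService:
    """The only surface the ordering component is allowed to call."""

    def reserve(self, sku: str, quantity: int) -> Reservation:
        # The real implementation lives in the inventory component; this stub
        # just stands in for it in the sketch.
        return Reservation(sku=sku, quantity=quantity, confirmed=True)

def test_reserve_contract():
    result = InventoryService().reserve("ABC-123", quantity=2)
    # The contract: a Reservation comes back, echoing what was asked for.
    assert isinstance(result, Reservation)
    assert (result.sku, result.quantity) == ("ABC-123", 2)
```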

This is just one example of how tests can be used to encourage good coding behavior. There are other things you could do as well, but this article is already getting too long, so we’ll leave it there.

Consistency Principle 2 – Keep your tests clean

If you want to be consistent you can’t work in spurts. You can’t go from flying along adding new features one month to being bogged down in test maintenance the next. If you do this you’ll be like the hare in Aesop’s fable and you’ll end up at the finish line much later than if you had moved at a consistent pace. This means you can’t let test maintenance debt pile up. You can’t leave those failing tests alone for a week or two because you ‘don’t have time’ to update them. The longer you wait, the further behind you get.

There are also more subtle ways you need to be careful of this. Tests age and start to lose their value over time as the code base changes. You need to regularly spend time looking at how to keep your tests valuable and checking the important things. If you don’t do this, you will eventually get to the point where you realize your tests really aren’t that helpful, and then you’ll have a major job on your hands to clean them up. Scheduling time to review and make small changes to keep your tests clean will pay huge dividends in your team’s ability to consistently deliver.

Consistency Principle 3 – Keep your tests short

Here’s the thing with long-running tests. When they fail, it takes a long time to figure out what went wrong. If your test takes 15 minutes to run, it will take at least 15 minutes to debug when it fails, and probably much longer. How often do you only run a test once to figure out what went wrong? This might not seem like too big of a deal, but the more long-running tests you have, the more time you will end up spending on debugging tests when things go wrong. This leads to a lot of inconsistency in your delivery times, since things will move along smoothly for quite a while, until suddenly you have to spend a few hours debugging some long-running tests.
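If you happen to be using pytest, one cheap guardrail is to put an explicit time budget on tests so the slow ones get flagged before they pile up. A sketch, assuming the pytest-timeout plugin is installed (the 60-second budget and the report_builder fixture are arbitrary placeholders):

```python
# A sketch assuming pytest with the pytest-timeout plugin installed: give
# tests an explicit time budget so slow ones surface immediately instead of
# quietly accumulating. The 60-second budget and the report_builder fixture
# are arbitrary placeholders.
import pytest

@pytest.mark.timeout(60)
def test_report_generation_is_quick(report_builder):
    report = report_builder.build("monthly-summary")
    assert report.page_count > 0

# Apply a budget suite-wide from the command line instead:
#   pytest --timeout=60
# And ask pytest to show the slowest tests so you know what to split up:
#   pytest --durations=10
```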

So those are a few principles that could help you clean up your automation if it is in a bad state. I have been working on this kind of cleanup over the last couple of months, so these are principles that come from real-life experience. I will probably be sharing more on this in the coming weeks.

Fast Automation

Remember that fable about the tortoise and the hare?  The steady plodding tortoise beat out the speedy but inconsistent hare.  I think when we read that parable we respond with yes…but there is another option!  We get the point that being consistent is more important than being fast, but can’t we have our cake and eat it too?  Can’t we have both speed and consistency?

That’s the promise many test automation vendors give.  It’s the selling feature for why we should invest in test automation efforts.  It will let us have the speed without the burnout.  It will let us be consistent with our delivery, but not slow.  We believe that technology allows us to deny the very premise that you have to choose between speed and consistency.  But is it true?  Was Aesop wrong, or are we the ones deluding ourselves?

I think we can prove old Aesop wrong on this one, but when we set out to show up a master we had better not think it will be easy. This fable has held up for many years because it represents a fundamental reality that is very difficult to get around. It is very hard to be both fast and consistent. We try strapping some automation onto the tortoise so that he can go a bit faster, but we end up loading him down so much that he actually goes slower. Now what? Well, it would seem that we should just get rid of that automation, but no, we are eternal optimists, so we try to fix it. We get the tortoise to go faster, but now he gets tired more quickly. Wow, good job. We’ve managed to turn the tortoise into a hare. Not quite what we were after, is it?

Yes, we can be both fast and consistent. We know we can, because we have seen some companies do it, but let’s not fool ourselves here. Much more often than we would like to admit, Aesop is right. Our automation often ends up making us inconsistent or, sometimes even worse, slower.

So the moral of my little story is to just give up and not do automation, right? No! The point of this post is that doing hard things is hard, and pretending that hard things like test automation are simple is just silly. Combining the speed of a hare with the steady consistency of a tortoise is not an easy thing to do. Don’t pretend that it is. Be very careful about how you approach and think about automation. Make sure your automation solutions are doing what you need them to do.

And if you find yourself in a mess?  If that automation suite you tried to strap onto the back of your tortoise is starting to slow him down even more?

Well, stay tuned for a post about how to clean up your automation and turn it into a genetic hybrid that helps you move at a fast and consistent pace.