30 Days of Agile Testing – Introduction

The Ministry of Testing has been running 30 days of testing challenges on various aspects of testing.  This coming month I thought I’d try my hand at it.  The way these challenges work is that you work through a list of 30 different challenges and try to check off one challenge each day of the month.

The challenge for September revolves around Agile testing.  The company I work at is, shall we say, nominally agile, but I could certainly stand to grow and learn in this area.  I hope to follow along and ‘live blog’ with my progress each day (although I can’t promise how things will go on weekends).  Some of the items on the list will be quite challenging in my context and some of them look like things that might take more than a day to do, but I’ll try my best to keep up.

If we want to get better we need to push ourselves, and since we live in a world where we are pulled in fifty different directions at once, having a goal like this to work on for the coming month will be good for me. It will give focus and direction to my learning and push me to try things I wouldn’t otherwise do. I hope to grow and learn a lot through this.  Always be learning.

When Nice People Surprise you

“Oh wow, he’s actually a really nice guy.”

The thought flitted across my mind as I was chatting with him for the first time.  I had heard things about this team and the way they worked.  The teams I work with have their challenges and struggles, but from all accounts this team was even more dysfunctional.

As the thought went through my head, the stark harshness of it made me realize that I had been equating poor development practices and approaches with poor intelligence and even poor character. But as I talked to this guy, I realized that here was a highly intelligent, pleasant, likable person.  Why had I assumed that he would be unpleasant or annoying?  Simply because I couldn’t imagine how someone ‘like me’ could use the approaches his team was using.  I couldn’t imagine that a ‘nice guy’ could think that a process I saw as fundamentally flawed was a good and healthy process. I just couldn’t imagine that ‘wrongness’ (in my mind) and ‘niceness’ could go together.

And don’t we all do that in so many areas of life?  Aren’t we so quick to demonize those that have different viewpoints and beliefs than us?  Whether it is in the religious, political or social realms, or even if it is just in talking about different schools of testing, we are very quick to start demonizing those on the other side.  It is incredibly hard to hold to the truth that someone can be both wrong and a good human being at the same time. Holding those two things in tension makes us uncomfortable.  It is much easier to just dismiss out of hand those that are of a different stripe than us, but the reality is we can learn things from those that are different than us. We will also deal very differently with those we see as fully human, rather than some distorted one-sided image we have in our head.  For healthy change to happen we need to be dealing with ‘real’ people and not caricatures we have invented.

So who are you demonizing?  Who on your team, in your company, or elsewhere are you making out in your mind to be a horrible person not worth dealing with, simply because they believe different things than you?  Reach out to them.  You might just be surprised at how nice they are.

Automation Anti-Patterns

You are sitting at your desk staring at a long list of test failures.  Some of them have been failing for a few days (at least you’re trying to convince yourself that it has only been that long), and some of them are new failures. You are trying to figure out if it is safe to merge in that bug fix and you just don’t know anymore.

What went wrong?  Aren’t your tests supposed to help you move more quickly?  Why are they slowing you down? Your job as a tester is to help the team quickly figure out whether there are quality problems so that you can keep producing high quality software.  Your automation is supposed to help you with this, but it really isn’t.  What went wrong? And more importantly – what can you do to fix it?

Want to find out more?  Come to my talk on Automation Anti-Patterns and what to do about them.  I’ll be giving the talk right here in my backyard at the KWSQA Targeting Quality Conference.  This is a great conference (I have been several times) and if you are looking for a conference that won’t destroy your budget, this is the place to go (seriously, only $339!).  I’m also giving this talk at the Better Software East conference in Orlando in November.  If you are interested in signing up for that one, you can use my speaker discount code (BE17DW11) to let them know you signed up through me and, just like that, they’ll take $400 off the conference price for you (and I get entered in a contest for a prize – win/win, you know).

So come on out!  Learn how to recognize those patterns in your automation that you really don’t want and learn some strategies on how to deal with these anti-patterns.  I’ve written a lot about how good test automation is hard and this comes from years of experience with it. I’ve seen a lot of test automation problems and have even been able to fix some of them, and so I’ll be sharing some of the lessons I’ve learned.  See you there!

When do you cut your Losses?

I was getting frustrated. I thought it would be a simple tweak to add this check to an existing test. All I wanted to do was duplicate an object, use it in two different places, make sure the display name was the same in both of them, and then delete it.  It should have been quick and easy – 15 minutes at the most, so why was I still working on it four hours later?

It was time to retrospect (well, ok – it was past time to retrospect).  What had gone wrong? Why was I in this situation and how could I avoid being here again in the future?

What I had thought would be a simple tweak to an existing test turned out to be much more complicated.  When I added it in, the application started crashing during the test run.  “Oh, I’ve found a bug.”  But no, when I tried to reproduce it outside of the test system, there was no crash to be seen.  I tried various things and asked some questions, and we think we know why this is happening (due to some internal counter getting confused and some differences in the way we pass around certain types of errors in the test and live systems).  I was able to figure out a way to add the check to the test, but it had taken four hours instead of 15 minutes.  Was it worth it?  Was it worth spending that much time on adding a simple little check to a test?

Good – I learned something about the system

Those numbers logged in that object?  Yeah, they mean something, and yes they are magic numbers, and yes this area of code is not very well written.  I did learn some important and useful things about this area of the code, including that we don’t understand it very well.

Bad – Four hours is a long time to do what I was doing

The actual check I put in is pretty low value.  It is unlikely that we will fail in this way again as we are checking something that was just fixed.  The only reason I was adding the check in the first place was that I had thought it would be very low cost to do so. Adding something that is low in value is ok if the cost is also low.  Adding something low in value when it takes half my day to do so is not the best use of my time.

On the balance – I shouldn’t have spent this much time

Looking back on it, the time spent for the value gained was not worth it.  Even if you add up the value of the check I added along with the value of the things I learned, I still don’t think it was worth it.  You can always salvage something from a situation though, and perhaps that can be done here by learning how not to repeat this.  What could I have done to realize that it was time to cut my losses? What would have been a better way to approach this situation?

Two Possible Options

Option one is to set a time limit for ‘low value’ stuff like this.  I could have said before starting that this check isn’t very high value and so I won’t spend more than half an hour or an hour working on it.  That probably wouldn’t have worked very well in this case though, because I was running into what I thought was a defect (and to be honest it kind of is, just not one we will fix), and so the time I spent was mostly on trying to track down the bug.

Option two is to use this as an opportunity to leverage some high value work out of the time.  Part of the reason it took so long was that the test I was adding this check to wasn’t the easiest test to debug, and I don’t have a system in place for easily figuring out what to do when there are differences between the test system and the ‘live’ system. I didn’t take this approach, but in the future if I run into a similar situation again I will try to use the opportunity to overload what I am doing.

Cut your Losses

So when do you cut your losses? I’m not exactly sure yet, but I think one other thing that needs to be kept in mind is that if something is taking much, much longer than originally planned or intended, it needs to trigger a response to stop and think about what is going on.  Am I still solving the same problem?  Is this problem still worth solving?  Should I cut my losses and go?  Should I double down and overload the process to still gain value out of what I’m doing?

We humans have a bias known as the sunk cost fallacy, and so it is very easy to get sucked into the vortex of spending more time on something just because we have already put time into it, but sometimes the fastest way forwards is backwards.  If we are digging ourselves into a hole, more digging isn’t going to get us out.  It might be time to stop and start building ladders instead.

Lesson learned:  retrospection is helpful and if I find myself in a time overrun situation for a given task, I should stop to retrospect sooner rather than later!

Being In Control

As a parent of three young children, I spend a lot of time telling them what to do and not to do.

“Eat your breakfast”

“Don’t throw your crayons in the toilet”

“Say thank you”

“Don’t hit your sister”

And the list goes on and on.  I’m sure any of us with kids can relate to this.  We spend a lot of time as parents teaching and enforcing rules.  Parenting is hard and (kind of like testing) much of it must be learned on the job. You can read all the books you want, but it isn’t until you are interacting with the hearts and minds of real life humans that you realize just how little you know and how much you have to learn.  One of the things I’ve been thinking about recently is my desire for control and how much of my parenting revolves around that.  I want my kids to do certain things and act certain ways, not in the first place because it will be good for them, but primarily because I want to be able to know and predict what they’ll do.  I want to be able to know that I can go to the grocery store without a meltdown.  I want to be able to sit and read a book without needing to break up a fight.  I want to be able to take my kids out in public and never feel embarrassed by something they say or do.

These aren’t bad things in themselves, but when my primary way of dealing with my kids becomes about how I can control them so that they will do what I want them to when I want them to do it, am I building a healthy relationship?  Is that really the way I want to interact with my kids, and in the long run is that really going to work?  Right now I have the advantage that they are kind of in awe of me and will do what I say (sometimes). That won’t last forever.  If I want my kids to grow up to be kind, generous, contributors to the good of society, are they going to do that through a program of control?

I want to leave the parenting questions for a minute and switch gears.  What about at work?  How much of what we try to do revolves around control?  I want that coder to build the API in this way.  I want that manager to agree to my way of seeing the product. I want that other tester to stop being so focused on detailed up-front test plans and get into exploratory testing.  Why do we get so worked up about stuff like this?  Isn’t it often because of a desire for control?

We don’t like not knowing what is going to happen and so we try our best to control those around us, but just like in parenting, is that the best way to build relationships and see people become the best they can be?  If we get better at playing power games and forcing people to do what we want are we really helping to build a team that will stand out in the long run?  There is a difference between control and influence.  We ought to use our influence for good, but I think we would do well to watch out for our tendency towards control.

When control becomes the focus, relationships can be hurt.  Keep the relationship at the center and try to let go of your need for control.  Not easy to do, I know – I struggle with it too, but let’s work on it together ok?

Getting Fast and Consistent Automation

In my previous post, I promised to expand on how to clean up your automation so that it could help you be both fast and consistent.  In this post I want to share some principles that might help you turn your automation into something that combines the speed of a hare with the consistent pace of a tortoise.

We’ll start with the speed.  What can you do so that your automation helps you move more quickly?

Speed Principle 1 – Don’t Fail

Test failures slow you down.  They require debugging and understanding where the failures came from. Now, of course you want your tests to fail when you have introduced defects, but realize that every test failure is going to slow you down.

So what do you do if you have a test suite that fails too often?

Could I suggest that step one is figuring out how to gather data? What data can you get from your test reporting system? Can you mine anything from the git log that will tell you how often you need to change certain tests? Once you’ve figured out the data part, it should be pretty straightforward to figure out which tests are problematic and target those for refactoring.  Also, if you’ve figured out an easy way to gather the data, you should be able to keep on top of this as the project progresses.  Keeping your tests useful should be an ongoing job.
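For what it’s worth, here is a minimal sketch (in Python, assuming your automated tests live under a `tests/` directory) of mining the git log for the test files that get touched most often – one rough signal for which tests are costing the most maintenance:

```python
import subprocess
from collections import Counter

# List every file touched in the last six months, one path per line.
log = subprocess.run(
    ["git", "log", "--since=6 months ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# Count how often each test file appears; adjust the prefix to wherever
# your automated tests actually live.
changes = Counter(path for path in log.splitlines() if path.startswith("tests/"))

for path, count in changes.most_common(10):
    print(f"{count:4d}  {path}")
```

Files that churn constantly are good candidates for refactoring or retirement, and your test reporting system can give you the complementary view of which tests fail most often.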

Speed Principle 2 – Fail sooner

If you really must let your tests fail, the sooner they do so after the offending changeset the better. This applies whether the failure is for a bug or not.  If it is for a bug that was just introduced, the sooner we know about it the easier it is to fix, and if the failure just requires a test update, the sooner we know the easier it is to debug the test and figure out why it is failing and what to update on it.

So how to go about improving this? For tests to fail sooner they need to be run sooner. This means you need to figure out why they are not running right away.  Perhaps it is a deliberate thing where you gradually expose the code to wider audiences and run more tests at each stage. In that case you should also have tests that are less and less likely to fail as you progress through each stage.
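If your stack happens to include pytest, one lightweight way to set up that kind of staging is with markers; the marker names and the toy tests below are purely illustrative:

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for real application logic."""
    return price * (1 - percent / 100)

# Fast, high-signal checks tagged to run on every commit.
@pytest.mark.smoke
def test_discount_is_applied():
    assert apply_discount(price=100.0, percent=10) == pytest.approx(90.0)

# Slower scenarios deliberately left for a later (e.g. nightly) stage.
@pytest.mark.slow
def test_full_checkout_flow():
    ...
```

Running `pytest -m smoke` on each commit and the full suite in a later stage gives the early stages a set of tests that should almost never fail (registering the `smoke` and `slow` markers in your pytest configuration keeps pytest from warning about unknown marks).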

However, in many cases I suspect the reason we don’t fail sooner is that our tests take too long to run. We don’t have the resources to run them close to the code changes and so they don’t fail early.  There are a couple of solutions to this.  One is to reduce the run time of your tests; another is to increase the machine resources so that you can run more tests.

Speed Principle 3 – Don’t have annoying tests

Tests that are difficult to run or understand are not going to help you move more quickly. If it takes a complex build setup to get your tests to run and only three testers on the team know how to run them, do you think they are going to be able to help you move quickly? No, of course not.  We need to have tests that are easy for the whole team to run and use. The usability and accessibility of your test framework are very important.

To run quickly you need to have fast feedback, and this means that developers need to run the tests.  If you have a separate team writing your automated tests, you will need to pay special attention to making sure these tests are not annoying for the developers to use and run.  Let’s face it, human nature being what it is, they will be unlikely or reluctant to use something they don’t see value in.  Tests that are difficult to run, whose output is hard to understand, or that are very difficult to debug are much less likely to get run early.  Without early feedback on the impact of their changes, it will be hard to move quickly.

Consistency Principle 1 – Encourage good coding behavior

Integration tests can be somewhat dangerous. They can encourage making tests pass at the expense of good development practices.  Let me explain. Imagine you have two components that need to talk to each other through an API.  If all your tests are at the integration level checking that these two components work together, you could encourage sloppy coding at the interface between the two components.  For example, if a defect comes along that needs to be fixed quickly, the developers might be tempted to put in a fix where one component directly calls a class from the other, rather than changing the APIs.  If you have tests that help define the interfaces between these different parts of the product, you will make it harder for people to ‘cheat’ in their coding, as the tests themselves document and enforce those interfaces.
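As a concrete sketch of what such a test might look like – the component and field names here are invented for illustration, not taken from any particular product:

```python
# A contract-style test that pins the interface one component exposes to
# another, rather than only checking end-to-end behaviour.
class PricingService:
    """Stand-in for the real component on one side of the API."""

    def get_quote(self, sku: str, quantity: int) -> dict:
        unit_price = 10.0  # stand-in for real pricing logic
        return {"sku": sku, "unit_price": unit_price, "total": unit_price * quantity}

def test_get_quote_contract():
    quote = PricingService().get_quote(sku="ABC-123", quantity=2)
    # The consuming component relies on exactly these fields and semantics,
    # so a "quick fix" that bypasses the agreed API shows up as a failure here.
    assert set(quote) >= {"sku", "unit_price", "total"}
    assert quote["total"] == quote["unit_price"] * 2
```

Because the test encodes the agreed interface, a change that sidesteps the API breaks a small, fast test long before it breaks a big integration run.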

This is just one example of how tests can be used to encourage good coding behaviour. There are other things you could do as well, but this article is already getting too long, so we’ll leave it there.

Consistency Principle 2 – Keep your tests clean

If you want to be consistent you can’t work in spurts.  You can’t go from flying along adding new features one month to being bogged down in test maintenance the next.  If you do this you’ll be like the hare in Aesop’s fable and you’ll end up at the finish line much later than if you had moved at a consistent pace.  This means you can’t let test maintenance debt pile up.  You can’t leave those failing tests alone for a week or two because you ‘don’t have time’ to update them.  The longer you wait, the further behind you get.

There are also more subtle ways you need to be careful of this.  Tests age and start to lose their value over time as the code base changes. You need to regularly spend time looking at how to keep your tests valuable and checking the important things.  If you don’t do this, you will eventually get to the point where you realize your tests really aren’t that helpful, and then you’ll have a major job on your hands to clean them up.  Scheduling time to review and make small changes to keep your tests clean will pay huge dividends in your team’s ability to consistently produce.

Consistency Principle 3 – Keep your tests short

Here’s the thing with long running tests.  When they fail, it takes a long time to figure out what went wrong.  If your test takes 15 minutes to run, it will take at least 15 minutes to debug when it fails, and probably much longer.  How often do you only run a test once to figure out what went wrong?  This might not seem like too big of a deal, but the more long running tests you have, the more time you will end up spending on debugging tests when things go wrong.  This leads to a lot of inconsistency in your delivery times, since things will move along smoothly for quite a while, until suddenly you have to spend a few hours debugging some long running tests.

So those are a few principles that could help you clean up your automation if it is in a bad state.  I have been working on this kind of cleanup over the last couple months and so these are principles that come from my real life experiences.  I will probably be sharing more on this in the coming weeks.

Fast Automation

Remember that fable about the tortoise and the hare?  The steady, plodding tortoise beat out the speedy but inconsistent hare.  I think when we read that fable we respond with yes…but there is another option!  We get the point that being consistent is more important than being fast, but can’t we have our cake and eat it too?  Can’t we have both speed and consistency?

That’s the promise many test automation vendors give.  It’s the selling feature for why we should invest in test automation efforts.  It will let us have the speed without the burnout.  It will let us be consistent with our delivery, but not slow.  We believe that technology allows us to deny the very premise that you have to choose between speed and consistency.  But is it true?  Was Aesop wrong, or are we the ones deluding ourselves?

I think we can prove old Aesop wrong on this one, but if we want to show up the old master we had better not go in thinking it will be easy.  This fable has held up for many years because it represents a fundamental reality that is very difficult to get around.  It is very hard to be both fast and consistent.  We try strapping some automation onto the tortoise so that he can go a bit faster, but we end up loading him down so much that he actually goes slower.  Now what?  Well, it would seem that we should just get rid of that automation, but no, we are eternal optimists, so we try to fix it. We get the tortoise to go faster, but now he gets tired more quickly.  Wow, good job.  We’ve managed to turn the tortoise into a hare.  Not quite what we were after, is it?

Yes, we can be both fast and consistent.  We know we can, because we have seen some companies do it, but let’s not fool ourselves here.  Much more often than we would like to admit, Aesop is right.  Our automation often ends up making us inconsistent or, sometimes even worse, slower.

So the moral of my little story is to just give up and not do automation, right?  No!  The point of this post is that doing hard things is hard, and pretending that hard things like test automation are simple is just silly. Combining the speed of a hare with the steady consistency of a tortoise is not an easy thing to do.  Don’t pretend that it is.  Be very careful about how you approach and think about automation.  Make sure your automation solutions are doing what you need them to do.

And if you find yourself in a mess?  If that automation suite you tried to strap onto the back of your tortoise is starting to slow him down even more?

Well, stay tuned for a post about how to create genetically hybrid automation and clean up your automation so that it can help you move at a fast and consistent pace.

The Challenge of Coverage

The idea of test coverage is a bit of a holy grail in the software testing world.  When unit testing you’ll often hear about a certain percentage of the code being covered, and with higher level testing you will often hear questions about how well we have covered a feature. As with most things, coverage is a fuzzy idea, but one of the most important lessons I’ve learned about it (from Dorothy Graham in this talk) is to ask the question ‘of what?’ Whenever we realize we are talking about coverage we should be thinking about what we are trying to cover.

It is very helpful to realize that there are many, many forms of coverage. We can never cover all the coverages, which is why complete testing of any non-trivial software is impossible, but in some cases we can theoretically cover all (or most) of one type of coverage.  We could, for example, theoretically get complete line coverage of a code base.
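A tiny, contrived illustration of why even that sort of ‘complete’ coverage is not complete testing:

```python
def divide(a, b):
    result = a / b
    return result

def test_divide():
    # This one test executes every line of divide(), giving 100% line
    # coverage, yet the division-by-zero behaviour is never exercised.
    assert divide(10, 2) == 5
```

Every line is covered, but an entire class of inputs – and one obvious failure mode – has not been tested at all.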

I realized recently though that knowing how to get complete coverage of a particular area can be a bit of a double-edged sword.  Just because we know how to cover something completely doesn’t mean we ought to.  In fact, sometimes even using sampling mechanisms like combinatorial testing doesn’t make sense.

I was recently trying to test something that involved the ability to create ‘expressions’ according to certain known rules and inputs.  It was seductive.  I started to make a table of the different ways things could combine together to create different combinations.  I quickly realized that in an n x m matrix like the one I was dealing with there were many millions of possible combinations, so I started putting in some sampling heuristics to try to reduce the problem space.  As I kept going down this path of creating possible expressions, I eventually realized that I might not be using my time effectively.  Sure, I was using a script to help me power through these combinations, but there were little tweaks and changes that needed to be made, and then for each combination I would have to run the software for a few seconds.
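For what it’s worth, the enumeration and sampling side of this can stay very simple. A sketch along the lines of what I was doing, with made-up operands and operators standing in for the real rules and inputs:

```python
import itertools
import random

# Illustrative inputs; the real ones came from the feature's known rules.
operands = ["price", "quantity", "discount", "tax_rate"]
operators = ["+", "-", "*", "/"]

# The full space of two-operand expressions.
all_expressions = [
    f"{left} {op} {right}"
    for left, right in itertools.permutations(operands, 2)
    for op in operators
]

# Sampling heuristic: take a small, reproducible random slice rather than
# walking the entire space.
random.seed(42)
for expression in random.sample(all_expressions, k=10):
    print(expression)
```

The enumeration is cheap; it’s the per-combination tweaking and the few seconds in the application for each one that isn’t.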

It was going to take days to get through this all.  Was it worth it?  When I stopped to think about the idea of ‘coverage of what?’ I realized that perhaps I was focusing in on an area where the value of my coverage was low.  There were many other aspects of coverage of this feature that I was not considering because I was so focused on this one area of coverage.  The reality was that the ability to get a high level of coverage in a certain area had seduced me into spending too much time in that area.  Just because I can do something and I know exactly how to do it, doesn’t mean it is the most valuable thing to spend my time on.  I had to leave that area with lower coverage and focus in on other areas instead because the reality was there was a much higher risk of finding problems in those areas.

This is one of the challenges of having measurable coverage.  Some types of coverage are much harder to measure, but that doesn’t mean they are less important.  When we have a particular area we can measure, it can give us goals to work towards. This can be helpful, but if we let it drive our thinking too much we can easily end up doing low value work to meet a coverage goal in place of much higher value work on other aspects of coverage.  I think we all tend to bias towards what we know and understand, but don’t forget that it is often in the less explored areas that the nuggets of gold are to be found.

Predicting the Future

Testers are kind of like fortune tellers.  We need to be able to predict the future, or at least it ought to seem that way.

One of the things people joke about is how mean we testers can be to the product.  We find ways to break things that can be quite surprising to others on the team.  I like to think of that not as being a dream wrecker, but as being a fortune teller.  How did I find that issue?  Well, I went into the future and thought about what kinds of things the users might do.  The kinds of things that require understanding the way humans think and interact with software.  Humans use software to help us accomplish things, but we don’t always do so in a linear fashion or in the ways that those of us who design it think we will.

Like a good fortune teller, we testers understand how humans tick and what biases and flaws we have.  We know how to size up a user and anticipate how they will react to the system we are working with.  We know how to make reasonable inferences from small amounts of information.  We know where users are going to stumble and where they are going to be frustrated.  We know all this because we pay attention to both the human element and the technical element.  Software testing sits squarely at the intersection of humans and technology and so as testers we study both.  We understand the technology and we understand the humans, but most of all we understand how they interact with and influence each other.

It may seem like what we do is magic, but much like a fortune teller, it comes from years of practice and study.  We have experimented and honed our skills.  We have made predictions, seen where they were wrong, and learned from that.  We have been students of our craft, and so it can seem like what we do is easy or magic, but the reality is, it is experience, study and practice that has brought us to this place.

In a data driven world, it may seem like we don’t need these skills anymore.  Who needs to be able to predict the future when we can react to it in real time?  But who is going to ask the questions that need to be asked? Who is going to figure out what data to gather? Who is going to be able to look at that data and understand the thinking of the humans behind it?  The data driven future is not a place where there is no need for these fortune telling testers.  It is a place that will see their skills leveraged in ways that will allow for astounding and amazing things to happen.  It is a world in which testers will be able to move from the fortune teller’s booth at the fair to the big stages of Penn and Teller.  A world in which testers will have resources and data that will allow them to use their skills to bring new value and insights to projects in unanticipated ways.  A world that will open up new vistas and opportunities as these skills are partnered with new technologies and insights.

It’s a world I look forward to.

Consistency and Testability

We were recently discussing this article at a team meeting, and as part of that discussion we were talking about some of the inconsistencies in our product.  One area where we have inconsistencies is in how different parts of the product handle the data coming from the UI.  Depending on what kind of problem you are looking at, we have radically different paradigms for how we manage that data before sending it down to the low level engines.  At the UI level the product looks fairly consistent, although once in a while these under-the-hood differences do show up, but in the data management layer it’s a whole different story.

There are clearly inconsistencies in our product, but is it inconsistent in a way that matters?  From the end user perspective it is fairly consistent, but once you get into the data management layer there are some very big inconsistencies.  Does this matter?  Should we worry about making it consistent? Well, one of the things that struck me during this discussion was that we were in a group of testers who worked on different areas of the product, and we would each struggle to do deep testing if we were to switch areas of focus.  I think one of the main reasons it would be difficult for us to move effectively from one area of the product to another is the inconsistencies in the data management layer.  So does this inconsistency matter?  I would argue that yes, it does.  In this case it is affecting the testability of the product.

There are many ways in which this kind of inconsistency in the product hurts us.  Let me just rattle off a few of them. The automated tests look very different as you move from one area of the product to another.  The testers end up somewhat tied to a particular area of the product, leading to less cross pollination of ideas (although we are making deliberate moves to learn new areas).  When we add new features that are used by multiple areas of the product, the testing effort is greatly increased because we have to check how they work with each of those areas.  It is much more difficult to test shared features like this than it would be if we had a common data management layer. The inconsistencies under the hood of our product certainly affect its testability.

There are initiatives under way to help consolidate some of the data management layer, and hopefully this will help with some of the inconsistencies, but in the meantime I wonder what we as testers can do about it.  I think one of the main things we can do is to learn how these various areas work and how they are inconsistent.  We can then use this information in our areas of expertise to talk with developers about the kinds of things that other groups do.  We can be the stitching that pulls the various areas together. Another thing we can do is ask questions.  How do other groups handle this?  What have other teams done to deal with this problem?  By asking questions like this we can help people to think about consistency as we move forward.

Testers need to be advocates for testability in the products we test and sometimes that also means being an advocate for consistency.  How do inconsistencies in your product affect the testability?