30 Days of Agile Testing – RED BUILD!

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

What actions do we take when there is a red build?  Well, what a timely question!  I’ve just spent the last couple of days trying to figure out a few different build issues.  The story illustrates one set of responses to a red build, but it also shows that there isn’t just one answer to the question of what we do when there is a red build.

Thursday I noticed that one of the builds we use to check a lower-level package was red. It wasn’t just one or two tests failing.  Every single test was failing. Clearly something was going badly wrong.  I spent some time digging into it and finally realized that part of the package wasn’t getting extracted correctly during the setup.  After some more time (and frustration), I finally figured out that the issue boiled down to the regression VM using an older version of 7zip.  Apparently the build machine creating the package had been updated to a newer version, and so the old version on the machine I was using couldn’t properly extract the package.  I updated the version of 7zip and re-ran the build. Everything was passing, so I posted an artifact to get picked up in the final build process. Everything is good now, right?

Wrong.

Friday morning I came in to find that instead of the build picking up an artifact from Thursday (as it should have), it had picked one up from May?!?! Stop the presses! More sleuthing required!  We stopped the build from progressing and started digging into it. The problem ended up being another machine that had the wrong version of 7zip installed. This machine had also not cleaned up properly at some point and so had an old file hanging around that it could (and did) use.  We fixed the 7zip version and updated the scripts to make sure they were correctly cleaning things up, and now *touch wood* everything is running smoothly again.
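The script fix we landed on boils down to two guards: check the tool version before extracting, and wipe any stale artifacts so an old file can never be picked up again. A minimal sketch of that kind of guard step (the directory name, minimum version, and the hard-coded installed version are all hypothetical stand-ins; a real script would parse the output of the `7z` banner instead):

```shell
#!/bin/sh
# Hypothetical pre-extraction guard: fail fast if the local 7-Zip is
# older than what the build machines use, then clear stale artifacts.
set -eu

ARTIFACT_DIR="./extracted"   # hypothetical extraction target
MIN_7ZIP_MAJOR=16            # hypothetical minimum major version

# Stand-in for parsing the installed version from the `7z` banner.
installed_major=16

if [ "$installed_major" -lt "$MIN_7ZIP_MAJOR" ]; then
    echo "7-Zip too old: need >= $MIN_7ZIP_MAJOR, found $installed_major" >&2
    exit 1
fi

# Remove anything left over from earlier runs so an old artifact
# can never be picked up by mistake, then recreate a clean dir.
rm -rf "$ARTIFACT_DIR"
mkdir -p "$ARTIFACT_DIR"
echo "clean extraction dir ready at $ARTIFACT_DIR"
```

The key design point is that the cleanup runs unconditionally before extraction, rather than trusting a previous run to have tidied up after itself.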

The point of this story is to show that the things we do to deal with red builds vary. Normally we wouldn’t stop all other work and focus all energy on fixing the build, but in this case the red build was of the ‘Nothing Works!’ category and so the steps taken were more drastic.  In the ‘normal’ day-to-day red build where a test or two is failing, our approach would be different.  We would look into it and follow up, but if the issue was small enough we would let the build pipeline continue and just follow up with a fix. Or if we caught the issue early enough, we might just quickly revert a change and things could continue on as expected.  The approach to a red build can’t be strictly prescribed and often requires exploration to figure out.

The lesson? Even when it comes to red builds, the context matters!

30 Days of Agile Testing – Work Tracking

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

Today’s challenge asks what columns we use on our work tracker or kanban board.  To be honest we don’t use columns at all….

I know, I know, bad us, right? Perhaps so. This probably would be something worth trying, but for some reason we have never gone down this road.  I’m not sure why, but it hasn’t risen up as a high-priority thing to try.  Perhaps those of you who do use a kanban style of work management could share: does this transform the way you work? Please leave a comment with your experiences!  We are trying to move towards a more agile way of working from a process that is, frankly, quite waterfall in many ways.  Is this something that would be helpful for us in this journey? Trying new things takes time and energy.  Is this something that is worth the time and energy it would take?

30 Days of Agile Testing – Learning Culture

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

Today’s challenge is about contributing to the learning culture in my company.  I work at a pretty big company that is split into a lot of different divisions, so I don’t know that I can speak to the learning culture of the company as a whole.  Instead I will focus in on the learning culture of the testing team I am a part of.

Testing Team Learning Culture

On our testing team, we approach learning in a few different ways.  We are currently a distributed team spread across 4 offices, and so we try to foster learning in ways that accommodate this distributed nature. For example, occasionally during team meetings we will discuss an article that we have all read.  This helps us think about things that might be outside of what we usually do and lets us discuss different viewpoints and approaches to testing.

We also use retrospectives as a learning opportunity to try and see how we can grow and learn from shared problems.  Looking back on problems we have faced as individuals and discussing together ways to address them or think about them is a very helpful learning tactic.

Another thing we have tried recently is sharing ‘tips and tricks’ at our weekly team meeting.  This is a way to share a quick little tip or tool that you have come across that might be helpful to other testers.

One other, very important way we foster a learning culture is through group testing sessions.  We use these sessions as an opportunity to learn new things about the product and to help each other get better at interacting with and testing the product in various ways.  This also gives us the opportunity to observe other people testing (we use screen sharing during these sessions) and thus to learn from their actions in that way as well.

As a testing team we realize that an ongoing commitment to learning is an important part of becoming an ever-better tester, and so we invest time into this.  Don’t get complacent with where you are.  Keep on learning!

30 Days of Agile Testing – Zero Bug Tolerance

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

Could we do it?  Could we get to a zero bug tolerance on my team?  Well, anything is possible.  I’m sure we could, if we wanted to badly enough. I have even toyed with bringing the idea up in the past, but the problem is that there are many things I want to do and change, but there is only time for so many things. Change is hard and overloading on change will just make you fail at all the changes.  We live in an information saturated world, and one of the biggest challenges anyone faces is the challenge of filtering.  How do we filter out the less important information to find the more important?  There are so many good things to try and to do, but there are only so many hours in a day.

For now I am focused on other things and spending the time on advocating for a zero bug policy just isn’t something I see as being valuable enough (at this time) for me to put the energy it would require into it.  I am fascinated by the idea though and hope that someday we’ll get to the point where we can consider this, but for now – bigger fish to fry.

30 Days of Testing – Test Plan

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

What does my test plan look like?  Well, I’ll keep this simple.  It looks like this at the beginning of testing a new feature:

Test Idea
Initial exploration
Does it do thing A?
What about B?

And this is what it looks like part of the way through testing:

Test Idea
Initial exploration
Does it do thing A?
What about B?
Are there any issues with these types of interactions and inputs?
I wonder what happens if I X
Do E,F and G interact with this?

And then as I get close to the end it looks like…well, you get the picture, right?  The list keeps expanding as I learn more about the feature and as I think of things and try things. I don’t spend a lot of time planning up front. Instead I usually just start with what I have and then pause to plan and re-plan frequently along the way.  Sometimes I will take more time to plan out and think through a particular type of coverage if it is really important, and other times I’ll just merrily go along my way letting my interactions with the product lead me. In any case, I try to keep it as simple and lightweight as possible. The plan itself has very little end value.  It is a tool to help achieve something, and I like light and lean tools!

30 Days of Agile Testing – What Can’t be Automated?

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

I enjoy automation, and I like to talk about how to make it better.  I do that a lot on this blog, and in fact, I’m giving a talk on that very subject tomorrow. However, today’s challenge is to think about what can’t be automated so let’s dig into that.

Usability

Yes, there are many aspects of usability that automation can help us with.  For example, gathering data on your actual customer usage patterns is very helpful for this, but with current technology we still need a lot of human involvement.  If you aren’t using automation to help you with this, you are really missing out, but I think it will be a long time before we see this kind of work being fully automated.

Intuition

Sometimes I just know there will be bugs when I try certain things.  How?  I’m not quite sure.  It’s probably a combination of knowing the product, past bad experiences with certain things, knowing how the developer tends to write his code, what other things have recently gone into the code, etc. Whatever it is, there is some intuition going on that I can’t easily automate.  This intuition helps guide my testing and often makes me much faster and more efficient than automation at finding problems.

Bug Advocacy

Finding a bug can be the easy part.  Demonstrating why it matters?  That’s often difficult. One of the things I notice with new testers is that often their bug reports are too factual. Do a, b and c and you get d.  The problem is there is no information about why d is undesired or wrong and why we should go about fixing it.  Depending on your team dynamics, this can be an important skill to have.

Figuring out what to do

I can automate a lot of things, but notice that there is an active agent at the start of this sentence.

I.

I can automate a lot of things.  I have to decide what to automate and what to test and what other things to work on. I need to set up the goals we are trying to accomplish in the first place. Automating that job away will take some serious effort.

Communication

Again, tools help us here, but we still need a lot of human-to-human interaction to build software, and we can’t just automate that away. Sometimes it feels like we try to do that, but my ability to pull together people and resources and ideas from different areas and synthesize them into something that we as a team can use to get better at doing our jobs is not something that can be automated.

30 Days of Testing – We Aren’t the Only Ones

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

What kind of testing do others on the team do?  Sure, we as testers are tasked with the testing, but we know that others test too, and in fact, we actively work on helping others improve their testing.  What testing do others do?

We’ve actually had some discussions on our team recently about how we all communicate with each other about what testing has been done.  The developers do some testing of their code, both via adding or running unit tests and by running integration tests against new changes. Developers and product owners also do additional interactive testing at times, and others, like our customer engagement team, also use and test the product.  With so many different people testing the product, how do we coordinate the testing so that we are being efficient? This falls into our test management strategy, and so we use things like simple shared spreadsheets that can track, at a very basic level, some of the things that have been done.

We also do group testing sessions where we can help and observe each other testing. This helps to teach those with less testing experience things they can do to improve their testing and also helps us generate new testing ideas. The more collaboration we have with other members of the team, the more testing we can see being done in various ways and areas.  Developers do a lot of testing.  Product owners do a lot of testing. Documentation writers do a lot of testing.  Many people are testing our products, and that is a wonderful thing.  As someone who spends a lot of time thinking about good testing and how to be effective at it, I can help and encourage this.  Quality is a team sport!