30 Days of Agile Testing – Test Documentation

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

Want to start an argument in the testing community?  There are plenty of ways to do it, but one is to start talking about test cases.  What are they? Should we use them? How should we use them?  The arguments can go on and on.  The reality is there is a lot to think about when it comes to effective test documentation, and the discussions around test cases play into this a lot.  Often the purposes of test cases are seen as showing what work was done and enabling us to go back and check that the code has not regressed.  When thinking about test documentation we need to think about what we are trying to achieve with it, and both of those purposes are important.

Re-running Tests

The question is how do we most effectively achieve them?  Do we need to go back and be able to repeat every test we’ve done?  Heck no.  Think about it.  Let’s say you spend 20 hours one week testing the product, and let’s say those tests are recorded in a way that lets you go back and re-run them.  What happens next week?  You do 20 more hours of new testing and you run the previous 20 hours of testing – your week is full.  OK, so what about the next week?  Now you have 40 hours of old testing to get through.  Clearly you will not do it all, and the more new testing you add, the less you will be able to do.  Taking the thought process to its extreme conclusion shows us that you cannot reasonably expect to repeat every test you do.  If this is the case, does it make sense to take on the overhead of detailing tests in a way that makes them repeatable?  Nope.  So when it comes down to it, we try to record in a detailed manner only those tests that we know we will want to repeat multiple times, and for us that is done in automated regression tests.  Yes, I would consider automation scripts to be test documentation.  They record (document) the testing done – how could they not?
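The arithmetic behind that thought experiment is easy to sketch.  Here’s a toy back-of-envelope calculation (the 20 hours of new testing comes from the example above; the 40-hour weekly capacity is my assumption):

```python
# Back-of-envelope: if every test must be recorded so it can be re-run,
# the re-run backlog grows every week while capacity stays fixed.
WEEKLY_CAPACITY = 40   # hours available for testing per week (assumption)
NEW_TESTING = 20       # hours of new testing added each week (from the example)

def demand_for_week(week):
    """Total testing hours needed in the given week if every prior
    week's tests are re-run in full."""
    rerun_backlog = NEW_TESTING * (week - 1)  # all previous weeks' tests
    return NEW_TESTING + rerun_backlog

for week in range(1, 5):
    print(f"Week {week}: need {demand_for_week(week)}h, capacity {WEEKLY_CAPACITY}h")
```

By week three the demand already exceeds the week, and it only grows from there – which is the whole argument for only detailing the tests you truly intend to repeat.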

Demonstrating Coverage

So what about the other part of the equation?  What do I do to show and record what work was done?  I think I’ve mentioned it before, but I primarily use lightweight documentation of the work I did.  A few bullet points that show what I hit on and why it was interesting to do that.  Some checklists of ideas that were considered.  Notes from discussions with teammates on what the feature does.  The documentation doesn’t need to include a lot.  It needs to include enough for me to have an intelligent conversation about it in the future if I’m asked, and enough to convince myself and others that I have sufficiently tested the product.

Improvements

Nobody is perfect and my test documentation is no exception.  What changes could I make to improve it?  I like the way I’m currently doing things and it seems to fit well with the context I’m working in, so I wouldn’t make any major changes at this point.  I think if I were to make any tweaks it would be towards rolling things up.  How do I better roll up and summarize what I have done so that it is more accessible to others?  Sometimes my notes are written in a way that I understand but are cryptic to others, so I will continue to experiment with small changes to improve on what I’m doing.

How about you?  How do you document your testing?

30 Days of Agile Testing – What are the Customers Saying?

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

I spent a bit of time today researching what customers are saying about our product.  It was a very interesting exercise and there are a couple of general insights I gained.

In the first place, it seems that in most of the comments I could find, customers are looking for more features from our product.  There were a lot of comments along the lines of ‘can it do X?’  This kind of makes sense, as we are working on one of the newest products our company has made, and so customers that have experience with our other products know that there are a lot of other things that can be done.  Some of them may want the product for its ease of use and integration capabilities, but they also want to be able to do some of the stuff they could in our older products.

The other thing I noticed was that there was a lot of positive press around our product (good work, marketing!), and overall those that use it seem to like the way it works and to be happy with the general paradigm.

So what did I learn?  I think one of the key take-aways from this is that we need to make sure we are investing in code-level quality and other initiatives that will allow us to move quickly on adding new features to the code.  But we need to be careful here.  We don’t want to be pulling customers from ourselves.  In other words, our target market isn’t people that already buy other versions of our company’s software.  It is those that use our competitors’ software.  Where are these feature requests coming from and what kind of customers do they help attract?  We clearly need to keep building out the feature set of our product in a quick and responsive way, but we also need to keep focused on delivering features for the kinds of clients we are trying to attract.

It was fun to dig around a bit and see what customers are saying about us.  We get some of this kind of feedback through product update meetings, but it really was fun to go out and find some of it on my own.

30 Days of Agile Testing – Application Logs

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

I’ll be honest, this one is going to be pretty short.  When it comes to application logs, I know my way around them in general.  Our product has its shortcomings, but we do a pretty good job of producing the logs that we need, and for the areas of the product that I test, I know quite well where to find them and how to read them.  Often I have had developers asking me where to find information.  Many developers only know the one specific log they write to, and sometimes issues require figuring out what other areas of the system are doing.  I got this one.  Want to know where the logs are?  Give me a shout. 🙂

30 Days of Agile Testing – Code Review

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

For today’s challenge, we were to pair with a developer on a code review. I’m obviously not writing about my experiences of today (I don’t know about you, but I don’t go into work on Saturdays), but I have done this in the recent past.

In a code review I participated in recently, there were some things I didn’t understand.  The developer had commented that I could test the method with a particular call, but I didn’t see that caller anywhere in the code.  I did a video call with the developer and we had a good chat about some of the things I was confused about (his comment had referenced the wrong method), and we also talked about how the particular workflow we were trying to achieve only required a part of the code that he had in place.  He ended up removing a part of the code (less testing work – yay!).  Overall the experience was a very positive one and, I’ll be honest, I was quite surprised by how effective I was able to be in the review.

I had never looked at C# code before a few months ago, and I still have very little idea of how it works and certainly could not actually write any of it without a good bit of research and work on my part.  Yet I was able to be an effective, active participant in the code review.  By understanding basic software engineering constructs like loops and conditionals, and by reading the variable and method names, I was able to figure out enough of what was going on in the product to ask moderately intelligent questions.  Those questions were enough to open up a discussion and lead to a better shared understanding of what was going on.

Personally, I have found reading code reviews to be very helpful, and I have also found that I am, on occasion, able to participate in them in a useful way.  If you have the chance, take a few minutes out of your day now and then to participate in a code review.  You might just be surprised at how useful it is and how effective you are at it.

30 Days of Agile Testing – Talking About Bugs

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

Talk to a developer about a bug instead of logging it in the tracking system.  If I am working on a feature that is under active development, I usually do this in some form.  Most of the time, if I go straight to the bug tracking system it’s because I found a bug in ‘completed’ features, and so it will need to be prioritized against other work.  If I’m testing something that is still being developed, I will usually communicate with the developer through other means.  Some of the ways we do this are chatting with them, doing a hangout, or using a spreadsheet template.

In fact, using spreadsheets is one of the most common ways we communicate potential issues.  We have a template that includes categories like ‘question’ or ‘UI feedback’ in addition to ‘defects.’  Doing this through a spreadsheet allows us to take a lightweight approach and also makes cross-role collaboration much easier.  We can easily tag the product owner or the documentation person for input as needed.  In the bug tracking system, the issue usually boils down to something just between the tester and developer, as the bug can really only be assigned to one person at a time, but in Google Sheets we can easily tag multiple people on one issue and have a discussion about it.
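The structural difference is simple but real.  Here’s a hypothetical sketch of the two shapes (the field names and values are illustrative, not our actual template):

```python
# Hypothetical shape of one row in a lightweight feedback spreadsheet.
# Unlike a bug tracker, a single row can tag several people at once.
feedback_row = {
    "category": "UI feedback",   # e.g. "defect", "question", "UI feedback"
    "description": "Save button label is truncated at narrow widths",
    "tagged": ["developer", "product owner", "docs"],  # multiple roles at once
    "status": "open",
}

# A typical tracker issue, by contrast, funnels everything through one assignee.
tracker_bug = {
    "summary": "Save button label is truncated at narrow widths",
    "assignee": "developer",     # only one person can own the issue at a time
}
```

That one-to-many `tagged` field is what makes the spreadsheet approach feel like a conversation rather than a hand-off.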

In summary, I have long prioritized conversation over documentation when it comes to bugs and I would highly recommend that approach for anyone!

30 Days of Agile Testing – Visualizing Tests

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

Today’s challenge asks me to find a visual way of representing my tests.  There are many, many different ways to do this, and I often use drawings of some sort to help me figure out things I don’t understand.  Here is one I did when trying to understand the different types of integration tests we had and what they did and covered.  It is obviously a massive over-simplification of our software, but it was helpful for figuring out what we were actually able to check with the different types of tests, and for seeing where we might be able to change our testing approaches.  By visualizing what was going on, I was able to understand what kind of changes could make sense in our system.

[Image: hand-drawn sketch of our different types of integration tests and what each covers]

30 Days of Agile Testing – Exploratory Testing

Note that this post is part of a series where I am ‘live blogging’ my way through the Ministry of Testing’s 30 Days of Agile Testing challenge.

What does my exploratory testing look like?  I have tried a few different approaches to it and my ‘process’ around it continues to develop and change as I try new things, but right now it looks something like this.

Starting

When I start testing a new feature, I first do a reconnaissance session where I just try it out to see what it does, how it works, and what ways I can gather more information about it (logs generated, etc.).  By this point I have hopefully had a conversation with the developer, and I have a pretty good idea of what is involved in this feature.

Often during this first session I will find a few issues, and many times while chasing down those bugs I end up in a ‘bug rabbit hole’ where I find new issues while exploring around an issue to reproduce it.  To help me find my way back out of the hole, I leave myself little signposts along the way in the form of notes jotted down in my notebook about where I branched.  Basically these are very short reminders to myself that there was a goal I was after which I had been distracted from.  This way I can make sure to come back later and continue on to that original goal.

Tracking

At this point, I’ll have a decent idea of what is involved in the feature, and I’ll make up a list in a spreadsheet of test ideas that I want to consider.  For me, this list is made up of short phrases that range from a couple of words to a sentence or two, which serve as indicators of the kinds of things we need to dig into or think about as we test.  I’ve started doing this in a spreadsheet rather than a mind map or some other format, as this seems to be the easiest way to collaborate on the testing.  Often the developer and other testers will be pulled in to work on or discuss the testing, and using an online spreadsheet makes it easy to track who is looking at what and what kinds of things have been found and discussed.

Documenting

In terms of tracking or recording my exploratory testing, I usually write down a few notes and comments on the kinds of things I’ve tested, and those together with the test ideas make up the documentation of the testing performed.  I then add this information as a test case in the user story for this feature and go through the required QA procedures from there.

Comparisons

I know other people will take a more rigorous session-based test management approach to exploratory testing, and some will approach it with mind maps or do more planning up front.  I have tried different approaches over the years, but I don’t worry too much about what other people do for their testing except as things I might experiment with, because at the end of the day my approach has to be something that works well for me in making me an effective tester.

What works for you?  How do you explore the product?