Online testing conference

Yesterday and today, I had the privilege of attending some of the talks of the first ever 100% online testing conference – the online test conf.  It was an interesting experience.  I was unable to fully commit to the entire experience as I was not given the day off, and so had to work between (and sometimes during) the talks I attended.  Overall, while I can see the appeal of something like this since it is much cheaper not to have many attendees (and speakers) travel to one location, there are some downsides as well.

One of my favorite aspects of conferences is meeting testers from other contexts and getting excited about testing together, and it seems that the online experience does not do this as well as an in-person conference (although the slack channel helped with that).  Another difference is the ease of dropping in and out of a talk. This may seem like a good thing, as you can easily (without social pressure) drop out of a talk that isn’t relevant to you, but it also means you can easily get distracted or have something ‘important’ come up, causing you to drop out of talks that could be very helpful.

In terms of the actual talks, I learned some really great things.  My favorite talks were Rob Lambert’s talk on the traits of employable people – I really got a lot of actionable items out of that one – and Adam Knight’s talk on risk.  Both gave me a lot to chew on and apply in my life as a tester.

I’m glad I could (kind of) attend this conference, and I’m thankful to Joel Montvelisky and his group for putting it together.  It’s exciting to see and try new things, but I’ll still be asking my boss for money to attend some ‘in person’ conferences as well 🙂

Dealing with losing a team member

Our team was already tight.  Several members had been shifted to other projects, and so we were pretty well down to a bare-bones team.  And then it happened.  One of the testers took a job in another city since his wife had a job there.  At this point it does not look likely that we will get approval to replace him, and so we are faced with the question of what to do.  We have lost one tester on a team that had four full-time testers and one part-time tester.  At first glance this seems like a pretty serious problem.  We just lost almost 25% of the team.  How are we going to keep up with the work that needs to be done?  However, after some reflection, I think this is actually an opportunity in disguise.  It is a new constraint for us, and constraints lead to creativity.  We have already seen some helpful and healthy shifts: developers will be taking on more of the regression maintenance work, and we are being forced to look carefully at our process from a full-team perspective to see what we can spread around the team more.

We all agree that high quality is important, and this constraint is forcing us to ask how we are going to get it.  It is making us as a team face the fact that quality has to belong to the team as a whole, not something we can just rely on testing to take care of.  So while it is certainly hard to lose a team member, and while it can be painful to transition to different ways of thinking, there are benefits to be had here as well if we are willing to grab them. We need to let the creative juices flow on how we can deal with this constraint and work together as a team to achieve our goals.

Harnessing Serendipity through Shape Shifting

How do you harness serendipity in your testing?  Recently I have been migrating a number of tests from one regression system to another, and in the process I have found several defects that highlighted areas we were not testing at all.  When I started this migration, I was not really expecting to find any issues, as we should have been just ‘copying’ tests from one system into the other.  This exercise has highlighted some of the limitations of automated testing as compared to more human-involved testing – which is a topic for another day – but one of the biggest takeaways for me is the serendipity of finding defects.

I wasn’t looking for bugs in particular, and yet I found many important ones.  This seems to be a common pattern in my testing: often important bugs are found while trying to do something else.  Sometimes that means finding bugs unrelated to the feature or area I’m testing.  Sometimes it means finding bugs while doing a demo or walking someone else through the product.  Sometimes it means finding bugs while doing something like this migration that seems incidental to bug hunting.

In thinking about this, I wonder: are there ways to harness this serendipity?  Are there things I can do as a tester to make it more likely that I find these things?  Can I deliberately put myself in contexts where I am more likely to run into this stuff?

I think the answer is yes, I can!  I’m reading through Elisabeth Hendrickson’s book Explore It!, and she has lots of ideas in there about changing your context or your way of thinking about the system so as to make it more likely that you run into these serendipitous bugs.  Exploring things from different angles and with deliberate context shifts will certainly help.  One thing I wonder, though, is whether I can do this for a particular feature.  Many of the approaches and tactics for exposing ourselves to this type of serendipity seem to apply better to the product holistically than to checking a particular feature.  We do of course want the holistic approach, but when I’ve been assigned to test a particular feature, how do I expose myself to finding totally unexpected insights about it?

At this point I’m just thinking out loud about ideas I will need to try in the coming weeks and months, but perhaps one thing I can try is situating the feature inside workflows.  This would mean starting by working through several ‘likely’ workflows in which users might exercise the feature, then using those workflows to work out the states the feature might go through, and finally using those states to help me dive deep into the feature in those particular areas.
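To make this concrete for myself, here is a rough sketch of what that exercise could look like in code.  Everything here is invented for illustration – the ‘export’ feature, the workflows, and the states are all hypothetical:

```python
# A minimal sketch (all names hypothetical) of using 'likely' user
# workflows to derive the states a feature passes through, and then
# turning each state into a deep-dive exploration charter.

# Workflows in which users might plausibly exercise an "export" feature.
workflows = {
    "quick export": ["open report", "export with defaults", "view file"],
    "tuned export": ["open report", "change format", "change range",
                     "export", "view file"],
    "repeat export": ["open report", "export", "edit report",
                      "export again", "view file"],
}

# States the feature appears to pass through while walking those
# workflows by hand (this list grows as more workflows are explored).
states = ["idle", "configuring", "exporting", "done", "re-exporting"]

def charters(states):
    """Turn each observed state into an exploratory test charter."""
    for state in states:
        yield f"Dive deep into the export feature while it is '{state}'."

for name, steps in workflows.items():
    print(f"{name}: {' -> '.join(steps)}")
print()
for charter in charters(states):
    print(charter)
```

The code itself is trivial – the point is the shape of the exercise: workflows first, states second, deep dives last.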

I usually ask developers for their thoughts on how to test a feature before I get started, so perhaps I could flesh that out a bit more: instead of just asking them an open-ended question, I could come to them and say ‘this is my understanding of how the feature works.  Does that match yours?’  By exposing my ignorance I will hopefully get better information about areas I don’t understand well, and I might also gain insight into areas that are less clear for the developers.

One final idea (for today) is that I could create artificial constraints on the feature.  For example, only allowing myself to interact with it using the underlying command-line interface, or only interacting with it via right-click options in the UI.  By creating these constraints I’ll force myself to come up with creative ways of doing things that I would not otherwise think of.
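As a sketch of what the CLI-only constraint might look like in practice – the ‘mytool’ command and its flags are made up, so substitute whatever command-line interface your product actually exposes:

```python
# A minimal sketch of the 'CLI only' constraint: exercising a feature
# strictly through a hypothetical command-line tool instead of the UI.
import subprocess

def run_cli(*args):
    """Run the (hypothetical) 'mytool' CLI and capture its result."""
    result = subprocess.run(
        ["mytool", *args],
        capture_output=True,
        text=True,
    )
    return result.returncode, result.stdout, result.stderr

# Every step goes through the CLI, even ones that would be trivial in
# the UI - the constraint forces unfamiliar paths through the product.
code, out, err = run_cli("export", "--format", "csv", "--range", "last-30-days")
print("exit code:", code)
print(out or err)
```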

I think there are many ways to increase exposure to this kind of serendipitous discovery, and I guess it mostly boils down to breaking out of the ruts and habits in how I approach testing the product.  Thinking about it now, even the ideas given here won’t work in the long run if they become habits.  The only way to keep stumbling onto things like this is to keep changing what I do and how I approach testing.  I can, of course, have my favorite approaches, but sticking exclusively to them will mean failing to find many interesting things about the product.  The way to harness serendipity is to become a shape shifter.