Using Scripting to Explore

I have often talked about the perils of test automation. It is a seductive thing, and we fall into those seductive arms far too easily, only to get hurt in the end. There is also a lot of talk about whether testers should be able to code. I think you should give it serious consideration, but so often the “testers being able to code” conversation centers on test automation. Here’s the problem: writing good test automation, especially at the UI level, is one of the hardest types of coding to do well. I’ve been learning Selenium lately as I’m looking at automating some things for my team. Getting that first test to work wasn’t that hard. Until.

Until I went in and tried to remove all those manual sleeps, and until I tried to clean it up so that it wouldn’t fail later, and until I started to think about how to make it work cross-browser. There is so much more to writing good automation than just getting some Selenium commands to work. After working on it for a while, I realized that it probably isn’t worth it for this project. It is pretty small and we can get through the regression tests pretty quickly by hand. However, that doesn’t mean there is no place for automation. Why does our automation have to be fully and completely hands-off? There are some regression tests we do that could be done much more quickly with the help of some automation. There are also some areas where we know we have niggly bugs that we should be able to pin down with the help of some scripting.
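Those manual sleeps are a good example of where a little scripting knowledge pays off. The fix Selenium offers is the explicit wait: poll a condition until it holds or a timeout expires, instead of sleeping for a fixed time that is either too short (flaky) or too long (slow). The pattern itself is simple enough to sketch in plain Python (the function and parameter names here are my own, not Selenium’s):

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    This is the idea behind Selenium's WebDriverWait: re-check as often
    as needed and move on the moment the page is ready, rather than
    guessing a sleep duration up front.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout} seconds")
        time.sleep(poll_interval)
```

With real Selenium you would pass something like `lambda: driver.find_element(By.ID, "submit").is_displayed()` as the condition (or just use `WebDriverWait` directly); the point is that the waiting logic is ordinary code you can read, tune, and reuse.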

You see, I think one of the best places for testers to use automation is for helping with exploration. There are so many questions that take a long time or aren’t possible to answer without the help of some scripting. I think we do ourselves a disservice as testers when we only think of our scripting skills as applying to regression test automation. In light of that, I think it is time for more testers to think about how to leverage programming skills to get better at exploring.

I’ve been working on a course called Scripting for Testers that will help demonstrate the value of this and show some examples of what it could look like. As I have material that makes sense to share, I will put it up here as part of a series I will call Exploring with Code. Keep an eye out for it. I hope it will be helpful!



Start With What You Know

The blank screen stared back at me. Blank screens are intimidating. Where do you start? What do you do first? What if you go the wrong way? Sometimes getting started is the hardest part.

I recently moved from desktop testing to testing websites. I needed some automation, but – I’ll be honest – I had no idea where to start. I’d heard of things like Selenium and WebDriver and many tool vendors, but which tools were going to do what I needed? I was only just starting to get a good feeling for how the product itself worked; how was I supposed to pick a tool to use?

Learn to do by doing and start with what you know.  Sometimes simple proverbs can give you the answers.

I know Python quite well, so I started there. Sure enough, there are some great, easy-to-use libraries that let me drive Selenium and WebDriver. Are these going to be what I use in the long run? I don’t know. But I do know that I have been learning a ton as I’ve started to ramp up on the Python bindings for Selenium. I’ve been learning a lot about Selenium, but I have also been learning a lot about how webpages work, how the DOM gets populated, how iframes work, and about JavaScript and HTML elements – the list goes on and on.

If I had tried to make a plan for all the things I needed to learn, I would have missed so much.  By starting with the little that I did know and finding something that intersected with it, I was able to start creating a test. As I did that I ran into issues and started learning about the things I needed to know in order to solve those issues.

Do you remember playing Age of Empires?


You would start out with most of the screen dark and only one little spot of light, and as you explored the game, more and more would become light. This is where I am: a little spot of light and a whole huge map to explore. There is only one way to find out what is on the map.

Time to explore!



How Should Testers use AI?

As a tester I am excited about the possibilities of AI and machine learning.[1] I hope that there will be many ways to leverage this technology to level up our testing powers. As with any tool, though, we need to recognize that powerful and magical are two different things. As hardware and machine learning capabilities get more powerful we need to leverage them, but just using them won’t magically produce good results.

If you were smashing up some concrete, a jackhammer would be much better than a sledgehammer, right? What if you started using the jackhammer like it was a sledgehammer, swinging it over your head and hitting the concrete with it? You have a better tool. Are you going to be more effective? When we approach new tools we need to use them effectively. We can’t just use them as if they were the same as the old tools we had.

I recently read an article about how testers could use AI to improve the stability of UI tests. One idea presented was that UI tests could be more robust if we used AI to figure out what a button was.  By using machine learning and image recognition we can figure out if a button is the submit button or the back button even if the way it looks has changed.


That might sound dismissive, but the reality is that if all AI is going to do is make UI automation more robust, I have better things to do than learn how to use it in my testing. There are a lot of easier ways to make UI automation more robust (not least of which is not doing so much of it). We don’t need AI to help us out here as much as we just need a bit more common sense. Throwing more technology at a problem that is, at its heart, a problem of misunderstanding how to effectively use UI automation won’t help. It will just allow people to add more UI automation without thinking about other effective ways to test their apps. To return to the jackhammer metaphor, if someone is smashing up the wrong part of the sidewalk, giving them a better tool won’t help with what matters. They will just smash the wrong thing more effectively.

If you want to stand out from the crowd, you’ll need to dig a little deeper and find some uses for AI that are a little more interesting. I’ve just started poking my nose into some of the AI libraries and trying them out, so these are just some brainstorm-style ideas at this point. I want to think about this ahead of time and see if it is worth further investigation. I’m always on the lookout for new tools to add to my testing toolbox – could machine learning be one of them?

Ideas for using AI in my testing

Data Analytics

“The developers need to implement that for me,” you might object. This is true in some areas, but think about it a little longer. What data do most testers have access to? What happens when you run your test automation? Does it generate any log files? Could it? Can you cross-correlate data in those files with some simple system information? We generate a lot of data during test sessions. Can we get some information out of that data? Do you need the developers to implement advanced telemetry in your app to do this? I think there are a lot of ways machine learning could be used to generate insights into the behavior of your application that do not involve advanced telemetry.

Writing Bug Reports

We all know that bugs like company. Where there is one bug there are often more. What about using machine learning to parse through the bug reports and see if there are patterns to be discerned? Where do the bugs cluster? What kinds of steps or actions frequently show up in bug reports? What phrases are common to bug reports? We have bots that can write realistic news articles; why shouldn’t we use them to write plausible bug reports? Will those reports show actual defects? Probably not, but they could generate some great test ideas and areas of focus. One of the biggest challenges in testing is reducing the scope of infinite possibilities in a way that is valuable. Could we use AI to help us with this?
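The pattern-finding half of this idea does not need to start with anything fancy. As a crude stand-in for real machine learning, here is a sketch that counts two-word phrases across bug report titles; frequent phrases hint at where bugs cluster and what actions trigger them (the report titles below are made up):

```python
from collections import Counter

def common_phrases(reports, top=3):
    """Count two-word phrases across bug report titles.

    Returns the `top` most frequent (phrase, count) pairs -- a toy
    version of the pattern mining a learning algorithm would do.
    """
    counts = Counter()
    for title in reports:
        words = title.lower().split()
        counts.update(f"{a} {b}" for a, b in zip(words, words[1:]))
    return counts.most_common(top)
```

Swap the word-pair counting for TF-IDF and clustering and you have the grown-up version, but even this toy surfaces “where do the bugs cluster?” answers from a bug tracker export.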

Writing New Scripts

Our current ideas around regression testing involve writing scripts that do the same darn thing every time they are run. There are reasons we do this, but there are also a lot of problems with this approach. What if we gave a machine learning algorithm the pass/fail data for our tests and used it to figure out which ones are likely to find bugs? What if we took it further and let the AI suggest some new tests? I think there are a lot of possibilities for automating the finding of regressions in a much more efficient way.
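You can get a feel for the first half of that idea without any ML at all: pass/fail history alone can rank tests. This sketch (data shape invented for illustration) orders tests by how often they have failed, which is the simplest possible version of “which tests are likely to find bugs?” and the kind of signal a learning algorithm would build on:

```python
def rank_by_failure_rate(history):
    """Rank tests by failure rate across their recorded runs.

    `history` maps a test name to a list of booleans (True = passed).
    Tests that fail most often come first, so they can be run earliest
    in the next regression pass.
    """
    def failure_rate(results):
        return sum(1 for passed in results if not passed) / len(results)

    return sorted(history, key=lambda name: failure_rate(history[name]), reverse=True)
```

A real system would weight recent runs more heavily and fold in things like code churn near each test’s coverage, but the ranking idea is the same.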


In looking at these things, I think there is a lot of potential for machine learning to help with testing in the future. However, it seems like most of these things are still too advanced for the individual tester to do. We will need better machine learning tools before we see a payoff on investments like this. For now, I intend to learn a bit more about machine learning, but I don’t think it is going to transform much in my testing in the short term. I guess we will see in a year or two where the tools are at.

I really do hope that we see development of creative machine learning tools for testing that break into entirely new and innovative areas. So far much of what I see people talking about using ML for in testing is to do the same things we did in the past, but better – because AI. I’m sure there are some gains to be had in those areas, but I really think we will see AI become a powerful testing tool when we start to use it to do new things that we can’t do at all with our current set of tools.

What do you think? Where is AI going?  How will testers use it in the future?


[1] Note that I am being sloppy with my terms in this article and using machine learning and AI interchangeably.

Do Tools Matter?

Life is different.

I’ve moved from the world of desktop software into the world of web software.  I was working on a team with thousands of integration tests – now I’m on a team that has very few. I was one of the experts on the team – now I’m asking all the newbie questions.  Things have changed a lot. In a new environment there are many new things to learn and it raises a question: is being a good tester about the tools you use, or not?


Being a good tester is not about the tools

I’ve noticed that I was able to very quickly get up to speed on the product and how it worked.  I was able to file several bugs in my first week.  I was able to feel comfortable talking about the different areas of the product. I was able to understand much of what the team was talking about very quickly.

How so?

I have been a tester for almost 10 years. My experience might be in a different world, but there are still many things that are common to software testing regardless of the type of application you are testing. Software, for example. And programmers. You see, both desktop and web developers are humans. Desktop and web developers also both use this thing called code to help them accomplish things. Programmers and programming languages share things in common that go beyond the boundaries of the kind of application you are building. The ways in which we create and interact with data structures and control flows are constrained by the way programming languages work. The ways that we create and maintain code are constrained by the ways that humans think and work.

I was actually surprised at how much of my skill and experience as a tester was easily transferable to an entirely new domain. There are many things about being a good tester that go far beyond the tool set being used and the type of application being tested. There is a real sense in which being a good tester is not about the tools you use.

Being a good tester is about the tools

But that isn’t the full story. In the last couple of weeks I have not only been learning a new application, I have also been learning many new tools. Some of them are internal tools that help make things easier for me. Some of them are very well-known tools that I didn’t need to use before: developer tools in Chrome, for example, or JMeter and WebDriver and Postman – the list goes on. As I get a better understanding of the application, I get the urge to dig deeper and learn more, and for that I turn to tools. Could I quickly get up to speed and contribute without learning new tools? Yes! Am I content with staying there? No! I want to be a testing Batman. A skilled tester can be effective independent of the tools used, but a skilled tester will use tools to get even better.

I’m re-building my toolbelt now. I’m like an electrician who has moved from doing residential work to commercial buildings. The underlying skill of knowing and understanding how electricity and wiring work stays the same, but there are some things that need to be done differently. If the electrician refuses to learn and use new tools, she will be less effective than she could be, no matter how skilled an electrician she is.

I still use the same underlying skills, but there are some new tools I need to help me become more effective at my job. What tools do you use? Do you have any recommendations for a tester new to web apps? What tools should I add to my toolbelt?



We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.

I’ve been learning a new product, and so I have been in exploration mode. I’ve spent the last week poking around and learning many new things about the product: making maps of it to get a better understanding, trying things out in different ways, setting up little experiments, asking questions, and even on occasion reading some documentation.

The team I am a part of is responsible for one particular area of the product, and so I have kept circling back to that area. Every time I come back, I find that during my wanderings in other parts of the product I have learned something new. I’ve gained insights that shed new light on the area of interest. At first glance this part of the product seems pretty simple, but the more I learn and understand how the pieces fit together, the more I see that simplicity is hard to do. It takes a lot of work to make something look and feel simple. There is a lot of hidden complexity. As I understand that complexity better, I get better at understanding the risks. I get better at seeing where the problems might be and chasing down odd behaviors.

It can be easy to think that you know something, but it is only as you continue to return to it time after time that you really begin to understand it. Sometimes testers can feel like they are spinning in circles, returning to the same area over and over, but every time you come back to something, you are different. You’ve learned new things and had new experiences. There are biases you have (and you would do well to learn and understand them), but it is still a new you. Use the things you’ve learned. Make the old familiar areas of the product come alive by bringing in your new perspective. Perhaps T.S. Eliot was right: it is only when we return to something that we can know it for the first time.


New Job!

This isn’t the usual kind of post on this blog, but I want to take a couple of minutes to talk about changes in my life. I started my testing career at Ansys right out of university, and it has been a good 9.5 years. I have learned a lot and grown in many ways, but the time has finally come to try something new. I started today at D2L. It is a totally different area and will present many new challenges as I work with a different technology stack and in a very different domain. I’m looking forward to learning and growing in many ways, and I’m sure I will continue to share thoughts and experiences as I learn new things and work through new challenges.