How Should Testers use AI?

As a tester, I am excited about the possibilities of AI and machine learning.[1]
I hope there will be many ways to leverage this technology to level up our testing powers. As with any tool, though, we need to recognize that powerful and magical are two different things. As hardware and machine learning capabilities get more powerful we need to take advantage of them, but just using them won't magically produce good results.

If you were smashing up some concrete, a jackhammer would be much better than a sledgehammer, right? But what if you used the jackhammer like it was a sledgehammer, swinging it over your head and hitting the concrete with it? You would have a better tool, but would you be more effective? When we approach new tools we need to use them effectively. We can't just use them as if they were the same as the old tools we had.

I recently read an article about how testers could use AI to improve the stability of UI tests. One idea presented was that UI tests could be more robust if we used AI to figure out what a button is. By using machine learning and image recognition, we can tell whether a button is the submit button or the back button even if its appearance has changed.
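
In rough terms, the idea boils down to something like the following minimal sketch. I am assuming labelled button screenshots sorted into folders; the folder layout, image size, and classifier choice are all my own guesses for illustration:

```python
# A minimal sketch of the "recognize the button" idea, assuming you have
# labelled screenshots in folders like buttons/submit/ and buttons/back/.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def load_images(root="buttons"):
    """Flatten each labelled screenshot into a feature vector."""
    features, labels = [], []
    for path in Path(root).glob("*/*.png"):
        img = Image.open(path).convert("L").resize((32, 32))
        features.append(np.asarray(img).ravel() / 255.0)
        labels.append(path.parent.name)  # folder name is the label
    return np.array(features), np.array(labels)

X, y = load_images()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```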

Yawn.

Ok, that was a bit rude, but the reality is that if all AI is going to do is make UI automation more robust, I have better things to do than learn how to use it in my testing. There are a lot of easier ways to make UI automation more robust (not least of which is not doing so much of it). We don't need AI to help us out here as much as we just need a bit more common sense. Throwing more technology at a problem that is, at its heart, a problem of misunderstanding how to effectively use UI automation won't help. It will just allow people to add more UI automation without thinking about other effective ways to test their apps. To return to the jackhammer metaphor, if someone is smashing up the wrong part of the sidewalk, giving them a better tool won't help with what matters. They will just smash the wrong thing more effectively.

If you want to stand out from the crowd you'll need to dig a little deeper. You'll need to find some uses for AI that are a little more interesting. I've just started poking my nose into some of the AI libraries and trying them out, so these are just some brainstorm-style ideas I have at this point. I want to think about this ahead of time and see whether it is worth further investigation. I'm always on the lookout for new tools to add to my testing toolbox – could machine learning be one of them?

Ideas for using AI in my testing

Data Analytics

The developers need to implement that for me, you might object. This is true in some areas, but think about it a little longer. What data do most testers have access to? What happens when you run your test automation? Does it generate any log files? Could it? Can you cross-correlate data in those files with some simple system information? We generate a lot of data during test sessions. Can we get some information out of that data? Do you need the developers to implement advanced telemetry in your app to do this? I think there are a lot of ways machine learning could be used to generate insights into the behavior of your application that do not involve advanced telemetry.
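
As a quick example of the kind of thing I mean, here is a brainstorm-level sketch that mines a test run log for flaky timing behavior. The log format, file name, and threshold are all made up for illustration; adapt them to whatever your runner actually emits:

```python
# A sketch of mining your own automation logs, assuming a made-up log
# format like "2024-01-15 10:32:01 PASS test_login 1.42s".
import re
from collections import defaultdict
from statistics import mean, stdev

LINE = re.compile(r"(\S+ \S+) (PASS|FAIL) (\w+) ([\d.]+)s")

durations = defaultdict(list)
failures = defaultdict(int)

with open("test_run.log") as f:
    for line in f:
        m = LINE.search(line)
        if not m:
            continue
        _, status, test, secs = m.groups()
        durations[test].append(float(secs))
        if status == "FAIL":
            failures[test] += 1

# Flag tests whose runtime swings wildly; these are often worth a closer look.
for test, times in durations.items():
    if len(times) > 2 and stdev(times) > mean(times) * 0.5:
        print(f"{test}: mean {mean(times):.2f}s, stdev {stdev(times):.2f}s, "
              f"failures {failures[test]}")
```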

Writing Bug Reports

We all know that bugs like company. Where there is one bug there are often more. What about using machine learning to parse through the bug reports and see if there are patterns to be discerned? Where do the bugs cluster? What kinds of steps/actions frequently show up in bug reports? What phrases are common to bug reports? We have bots that can write realistic news articles, so why shouldn't we use them to write plausible bug reports? Will those reports show actual defects? Probably not, but they could generate some great test ideas and areas of focus. One of the biggest challenges in testing is reducing the scope of infinite possibilities in a way that is valuable. Could we use AI to help us with this?
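
As a starting point, even something as simple as clustering the text of existing reports might surface those patterns. Here is a sketch, assuming you can export report summaries as plain strings; the sample data and cluster count are invented:

```python
# A sketch of clustering bug report text to see where bugs keep company.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "Crash when saving profile with empty name",
    "Save button does nothing on profile page",
    "Login fails with special characters in password",
    "Password reset email never arrives",
    "Profile photo upload times out",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(reports)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Print the most characteristic words per cluster as candidate areas of focus.
terms = vec.get_feature_names_out()
for i, centre in enumerate(km.cluster_centers_):
    top = centre.argsort()[-3:][::-1]
    print(f"cluster {i}: {[terms[t] for t in top]}")
```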

Writing New Scripts

Our current ideas around regression testing involve writing scripts that do the same darn thing every time they are run. There are reasons we do this, but there are also a lot of problems with this approach. What if we gave a machine learning algorithm the pass/fail data for our tests and let it figure out which ones are most likely to find bugs? What if we took it further and let the AI suggest some new tests? I think there are a lot of possibilities for finding regressions in a much more efficient, automated way.
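
Even before letting an AI write new tests, you could start with a model that ranks existing tests by how likely they are to fail. Here is a sketch with invented per-test features (recent failure rate, days since last failure, and code churn near what the test covers); the feature set and data are assumptions, not a recipe:

```python
# A sketch of prioritizing tests based on their pass/fail history.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: [failure_rate_last_30_runs, days_since_last_fail, lines_changed_nearby]
history = np.array([
    [0.20, 2, 150],
    [0.00, 90, 5],
    [0.05, 30, 40],
    [0.33, 1, 300],
])
found_bug_next_run = np.array([1, 0, 0, 1])

model = RandomForestClassifier(random_state=0).fit(history, found_bug_next_run)

# Score this run's candidates and schedule the riskiest ones first.
candidates = np.array([[0.10, 5, 120], [0.01, 60, 10]])
risk = model.predict_proba(candidates)[:, 1]
print(risk)  # higher score = run earlier
```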

Conclusion

In looking at these things, I think there is a lot of potential for machine learning to help with testing in the future. However, it seems like most of these things are still too advanced for the individual tester to do. We will need better machine learning tools before we see a payoff on investments like this. For now, I intend to learn a bit more about machine learning, but I don't think it is going to transform much in my testing in the short term. I guess we will see in a year or two where the tools are at.

I really do hope that we see the development of creative machine learning tools for testing that break into entirely new and innovative areas. So far, much of what I see people talking about using ML for in testing is doing the same things we did in the past, but better – because AI. I'm sure there are some gains to be had in those areas, but I really think we will see AI become a powerful testing tool when we start to use it to do new things that we can't do at all with our current set of tools.

What do you think? Where is AI going?  How will testers use it in the future?

Footnotes

[1] Note that I am being sloppy with my terms in this article and using machine learning and AI interchangeably.
