How Should Testers Use AI?

As a tester I am excited about the possibilities of AI and machine learning.¹
I hope that there will be many ways to leverage this technology to level up on our testing powers. As with any tool though, we need to recognize that powerful and magical are two different things. As hardware and machine learning capabilities get more powerful we need to leverage them, but just using them won’t magically produce good results.

If you were smashing up some concrete, a jackhammer would be much better than a sledgehammer, right? But what if you started using the jackhammer like it was a sledgehammer, swinging it over your head and hitting the concrete with it? You have a better tool. Are you going to be more effective? When we approach new tools we need to use them effectively. We can’t just use them as if they were the same as the old tools we had.

I recently read an article about how testers could use AI to improve the stability of UI tests. One idea presented was that UI tests could be more robust if we used AI to figure out what a button was.  By using machine learning and image recognition we can figure out if a button is the submit button or the back button even if the way it looks has changed.

Yawn.

Ok, that was a bit rude, but the reality is that if all AI is going to do is make UI automation more robust, I have better things to do than learn how to use it in my testing. There are a lot of easier ways to make UI automation more robust (not least of which is not doing so much of it). We don’t need AI to help us out here as much as we just need a bit more common sense. Throwing more technology at a problem that is, at its heart, a problem of misunderstanding how to effectively use UI automation won’t help. It will just allow people to add more UI automation without thinking about some other effective ways to test their apps. To return to the jackhammer metaphor, if someone is smashing up the wrong part of the sidewalk, giving them a better tool won’t help with what matters. They will just smash the wrong thing more effectively.

If you want to stand out from the crowd you’ll need to dig a little deeper. You’ll need to find some uses for AI that are a little more interesting. I’ve just started poking my nose into some of the AI libraries and trying them out, so these are just some brainstorm-style ideas I have at this point. I want to think about this ahead of time and see if it is something worth further investigation. I’m always on the lookout for new tools to add to my testing toolbox – could machine learning be one of them?

Ideas for Using AI in My Testing

Data Analytics

“The developers need to implement that for me,” you might object. This is true in some areas, but think about it a little longer. What data do most testers have access to? What happens when you run your test automation? Does it generate any log files? Could it? Can you cross-correlate the data in those files with some simple system information? We generate a lot of data during test sessions. Can we get some information out of that data? Do you need the developers to implement advanced telemetry in your app to do this? I think there are a lot of ways machine learning could be used to generate insights into the behavior of your application that do not involve advanced telemetry.
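To make that concrete, here is a rough sketch of the kind of thing I have in mind. Suppose your automation run writes a results log and you capture some basic system stats alongside it – the file names and columns below are made up, but a few lines of pandas could already start surfacing correlations:

```python
import pandas as pd

# Hypothetical per-test results from an automation run:
# columns run_id, test, duration_ms, passed (1/0)
results = pd.read_csv("test_results.csv")

# Hypothetical system stats captured during each run:
# columns run_id, cpu_pct, mem_pct
system = pd.read_csv("system_stats.csv")

# Cross-correlate the two data sets on the run identifier
merged = results.merge(system, on="run_id")

# Do failures or slow tests line up with system load?
print(merged[["duration_ms", "passed", "cpu_pct", "mem_pct"]].corr())

# Tests that fail more often when the machine is busy are good
# candidates for environment-sensitive (flaky) behavior
under_load = merged[merged["cpu_pct"] > 80]
print(under_load.groupby("test")["passed"].mean().sort_values().head(10))
```

None of this needs advanced telemetry – it is just data your test runs are probably already producing.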

Writing Bug Reports

We all know that bugs like company. Where there is one bug there are often more. What about using machine learning to parse through the bug reports and see if there are patterns to be discerned? Where do the bugs cluster? What kinds of steps/actions frequently show up in bug reports? What phrases are common to bug reports? We have bots that can write realistic news articles, why shouldn’t we use them to write plausible bug reports? Will those reports show actual defects? Probably not, but they could generate some great test ideas and areas of focus. One of the biggest challenges in testing is reducing the scope of infinite possibilities in a way that is valuable. Could we use AI to help us with this?
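As a toy example of the pattern-finding half of this idea (the bug report text below is invented), scikit-learn can cluster similarly worded reports and pull out the recurring phrases in a handful of lines:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented bug report summaries standing in for a real export
bug_reports = [
    "Crash when submitting the order form with an empty cart",
    "Order form loses data after a session timeout",
    "Back button on the checkout page returns a 500 error",
    "500 error when navigating back from the payment page",
]

# Turn the free-text reports into TF-IDF vectors
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(bug_reports)

# Cluster the reports; similar wording lands in the same cluster
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Print the most characteristic terms per cluster – these hint at
# where the bugs cluster and which phrases keep showing up
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(kmeans.cluster_centers_):
    top = center.argsort()[-5:][::-1]
    print(f"cluster {i}:", ", ".join(terms[t] for t in top))
```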

Writing New Scripts

Our current ideas around regression testing involve writing scripts that do the same darn thing every time they are run. There are reasons we do this, but there are also a lot of problems with this approach. What if we gave a machine learning algorithm the pass/fail data for our tests and let it figure out which ones are likely to find bugs? What if we took it further and let the AI suggest some new tests? I think there are a lot of possibilities for automating the finding of regressions in a much more efficient way.
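Even a very simple model could be a starting point here. As a sketch (the features and numbers below are invented), you could train a classifier on per-test history and then rank candidate tests by how likely they are to catch something:

```python
from sklearn.ensemble import RandomForestClassifier

# Invented per-test features: [number of steps, UI interactions,
# recent code churn in the area under test], plus whether the test
# has caught a bug before (1) or not (0)
X = [
    [12, 3, 40],
    [5, 1, 2],
    [20, 8, 55],
    [7, 2, 10],
    [15, 6, 30],
    [4, 0, 1],
]
y = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank candidate tests by predicted probability of finding a bug,
# and run the most promising ones first
candidates = [[10, 4, 25], [3, 1, 0]]
for features, prob in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(features, f"-> {prob:.0%} chance of catching a bug")
```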

Conclusion

In looking at these things, I think there is a lot of potential for machine learning to help with testing in the future. However, it seems like most of these things are still too advanced for the individual tester to do. We will need better machine learning tools before we will see a payoff on investments like this. For now, I intend to learn a bit more about machine learning, but I don’t think it is going to transform too much in my testing in the short term. I guess we will see in a year or two where the tools are at.

I really do hope that we see development of creative machine learning tools for testing that break into entirely new and innovative areas. So far, most of what I see people proposing for ML in testing is doing the same things we did in the past, but better – because AI. I’m sure there are some gains to be had in those areas, but I really think we will see AI become a powerful testing tool when we start to use it to do new things that we can’t do at all with our current set of tools.

What do you think? Where is AI going?  How will testers use it in the future?

Footnotes

1. Note that I am being sloppy with my terminology in this article and using machine learning and AI interchangeably.

Comments

  1. roesslerj says:

    Hi!

    Just asking for clarification: What are you referring to when talking about regression testing scripts? Can you give some concrete examples, also of what the AI would possibly generate?

    Thanks,
    Jeremy


    1. offbeattesting says:

      Hey Jeremy,

      Good questions. A concrete example of a regression testing script could be a Selenium script or a JMeter script, etc. What could the AI generate? It’s all pretty theoretical in this article 🙂 but one could imagine a few possibilities. For example, it could give you suggestions when writing new scripts – “Tests with similar concepts to the one you are creating are more likely to reveal bugs if they include these kinds of commands (which you don’t currently have in this test).” Or another example: “80% of tests with this sequence of commands have demonstrated flaky behavior.”

      In terms of actually generating new scripts you could have the AI use your existing test scripts (along with pass/fail data) to predict some new scripts that might be likely to find issues. Or if you were willing to invest the time in training it, you could let the AI generate new scripts and see how they pass/fail over time (If you did this you would start to get a lot of scripts and so you would need a good strategy for removing scripts as well).

      Really there are a lot of ways to think about it that go beyond just improving the robustness of current regression runs. As to how feasible/likely these are – I don’t know. I think using ML will have to get a lot easier before we can really try out some of these things. It would be interesting to me though to see some companies thinking about smart ways to do this.

      I guess my thesis is: give me AI that helps me explore, not just AI that makes it easier to do what I already do 🙂


      1. roesslerj says:

        Hello Dave,

        I was just curious, as your statement “It will just allow people to add more UI automation without thinking about some other effective ways to test their apps.” implied you are not fond of UI automation…

        Actually, we are doing what you suggest:
        1. We are creating UI automation that is robust without AI, by using a different approach (we call it difference testing). And
        2. We are using AI to generate new UI automation “scripts”. It doesn’t try to generate scripts that will find errors (as we don’t have that information), but scripts that increase coverage.

        The tool is currently only implemented for Java Swing, but we are working on that. If you find the time, we’d love to get your feedback: http://retest.ai.

        Cheers,
        Jeremy


      2. offbeattesting says:

        Yeah, my concern with UI automation is that we try to use it to do things that can be done more effectively in other ways (e.g. through unit tests). Even if we add in ML we won’t be able to do certain things as well as we can with other tools. We need to be thinking about how to do things differently, not do the same things other tools can already do.

        Thanks for sharing your work! I’m just starting to poke into the AI world and I’m very much interested to see where things are at with testing tools for it 🙂 I’ll have to check out retest and see how it works!

