Hello? Hello?

I’m sure we are all busy.  It seems to be a hallmark of our culture.  I’ve certainly been busy over the last couple of months! In addition to my regular activities with work, family and church, I’ve been working on a course about scripting for testers.  This course will be done in partnership with LinkedIn Learning and will be published on their platform.

Needless to say, getting this course together has been a lot of work, but I am very excited to be heading out to California to record it!  I’m also excited for the course itself.  I’ve talked a lot on this blog about the idea that testing is a technical job and that as testers we need to think about how we use technology and automation to do more than just run regression tests. This course is meant to help testers get started on the path to being technical testers.  Maybe you’ve been in a more traditional or manual testing role and need to learn some basic skills, but don’t know where to start.  The idea of this course is to give you that basic grounding and help you get started.

There is so much to do and learn that sometimes we just need some ideas and examples that show how others use these tools.  My hope with this course is to help testers realize what a powerful and useful tool scripting is and how we can become better testers as we use it.  I also hope to show that we don’t need to be scared of it.  There is a lot we can do with a limited amount of information.  The best way to learn is to do. Take some time today to learn and do something new!

When I started this blog, one of the reasons I wanted to do it was that I knew the best way to learn is to teach.  I have certainly found that to be true in putting together this course.  It has been a huge learning experience for me. I’m very thankful and excited to have had this opportunity, and I’m sure I will be sharing some of my learning and experiences on this blog over the next couple of months. Stay tuned!


Expect Crashes

Dying in a car crash is one of the leading causes of death for those of us living in developed countries. It’s not surprising then that we spend a lot of time as a society trying to mitigate that risk. We implement things like speed limits and safety standards for vehicles and education programs for drivers to try and prevent crashes. Prevention is the best cure, and all that.

We don’t stop there though, do we? We know that despite our best efforts, crashes are still going to happen, and so we put in place things like seatbelts and airbags and safety rails. We also have tools to help us deal with the problems that arise after the crash: ambulances, paramedics, and laws about moving over for emergency vehicles. We don’t just try to prevent crashes; we also try to mitigate the effects of crashes.

What I’ve been describing here is an approach to injury prevention that can be summarized with the Haddon Matrix.  We have a pre-event phase, a during-event phase, and a post-event phase, and we have strategies to help mitigate the impact in each phase.

I like to take ideas from other fields and think about how they relate to testing, so let’s do that for a minute here.  What phase do we spend most of our time in as testers?

Traditionally it has been the pre-event phase.  We are trying to find the bugs before they ever make it to the customer.  We are trying to find the crashes and errors ahead of time.  We work primarily in the prevention realm. But shouldn’t we consider that despite our best efforts, some crashes will still happen? We will have issues that customers face, so what is our strategy at that point? What are our during-event and post-event strategies for bugs that get exposed to customers?

Think about filling out something like the table below. I simplified the Haddon matrix by taking out environmental factors, but just the process of going through this could be a helpful way to see where you can invest as a company.  The ability to prevent problems is important and helpful, but as applications grow in size we will never be able to do that completely.  We need to have strategies in place to deal with what happens when things go south.  What are your strategies?

Pre-Event
  Human Factors:
  • Testing
  • Dogfooding
  • Code Review
  System Factors:
  • Feature Flags
  • Build Processes
  • Realistic Test Environments
During-Event
  Human Factors:
  • Dynamic response to failures
  • Ability to debug in production
  • Immediate access to live production data
  System Factors:
  • Logging & Alerts
  • Automatic fail safes
  • Self-healing capabilities
  • Flighting and rollback ability
Post-Event
  Human Factors:
  • Root cause analysis
  • Customer follow up
  System Factors:
  • Quick build pipelines
  • Ability to get fixes to production in a timely manner

Selenium or TestCafe?

I’ve been looking into automation tools.  I was messing around with Selenium a bit and made some scripts to help us do some stuff more quickly.  Before investing too much in a particular tool, though, I wanted to look around a bit at what else might be out there.  I came across TestCafe, heard some good things about it, and thought I’d give it a try.  I’m new to both tools, so I thought: as a newbie, why not compare the two? So here goes:

Looks

We need to start with the important thing first: colors.  More specifically, are there pretty things and do the colors make me happy?  Selenium/webdriver? Not really. TestCafe? Well, it has enough good looks to make a beauty queen jealous.

Joking aside, one of the things I like about TestCafe is that it gives me some info about what it is doing during the run with a status bar at the bottom.  This gives a peek into the mind of the system and makes debugging easier.  TestCafe also gives nice debug output in the console for failed tests.

Winner: TestCafe

Installation and Setup

What about setup?  How hard is it to get started?  For TestCafe, all I had to do was

npm install -g testcafe

and about 30 seconds later it was done.  My first test was running about 15 minutes later.  Selenium wasn’t too bad either, but I did have to install webdriver for a few browsers as well as pull in some selenium packages into python.  Since I was driving things through python for my test the selenium part was pretty easy:

python -m pip install selenium

but there was still some added complexity in getting webdriver to work for all browsers, and setting up the first test was a little more complex as well.  All in all, it probably took about an hour to get my first test running with Selenium/webdriver.
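
For comparison, the skeleton of that first test looked something like this (a minimal sketch, with a placeholder URL and a trivial title check rather than our actual app):

from selenium import webdriver
# Assumes chromedriver is installed and on the PATH
driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    # A trivial check to prove the page actually loaded
    assert "Example" in driver.title
finally:
    driver.quit()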

Winner: TestCafe

Cross Browser

The whole purpose of this is to be able to more easily check things across different browsers, right?  So how easy is that to do?  With both tools I first ran the test in Chrome, because, well, that is the browser all sane people use, right? Once I had my test working in Chrome I tried running it in other browsers. In both cases the test didn’t work in any other browser. It took me a while with the Selenium test to work through the issues (mostly involving timing and waits).

With TestCafe, I couldn’t get the test to work on any other browser. As far as I can figure out, it has to do with Javascript errors related to using Polymer components on our login page.  TestCafe has an option to skip Javascript errors, and this let me get a little bit further, but I was still unable to complete the test.  My suspicion is that we are doing something a little off in the timing of loading our Polymer web components. There does seem to be a fix coming in TestCafe that will let me work around this, but at the end of the day I was unable to get TestCafe to work on our app with any browser other than Chrome. I poked at it for an hour or two, and I’m sure there is a solution for it, but at this point I have not been able to test in other browsers.
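
For reference, the option I mean is TestCafe’s skip-js-errors flag; a cross-browser run looks something like this (the test file name here is just a placeholder):

testcafe chrome,firefox login_test.js --skip-js-errors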

Winner: Selenium/Webdriver

Waits

Much like renewing your driver’s license, the most annoying part of using Selenium is dealing with waiting.  The trick is to make sure that what I want is there without letting the script have a nap every few seconds. I probably spent more time trying to figure out waits than anything else (and to be honest, in the script I made there were still some explicit sleeps in place).  With TestCafe, this just worked. It has implicit waits built into the async calls, and it worked out of the box. This is actually the primary reason I was able to get the first test working so quickly.  I didn’t have to worry about waits.
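
To give a sense of what this looks like on the Selenium side, here is a minimal sketch of an explicit wait in the python bindings (the URL and element ID are placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get("https://example.com/login")
# Poll for up to 10 seconds until the button is clickable,
# instead of sleeping for a fixed amount of time
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit"))
)
button.click()
driver.quit()

In TestCafe, that waiting happens automatically inside each awaited action, which is why my first test just worked.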

Winner: TestCafe

Language

Selenium has a lot of support in various languages.  For me that meant I could use python and feel that joy that comes from coding in python.  It also means that you can write your tests in the same language as your app, or in your favorite language (which, of course, is python).

TestCafe uses, um, Javascript. I don’t like writing code in Javascript.  Probably mostly because I haven’t done it much and don’t fully understand how things work, but there you have it.  On the plus side, it does give you a lot of power and flexibility in hooking into your app in some interesting ways.

Winner: Selenium

Maturity

Webdriver and selenium have been around for a long time. They have grey hair. They might even have considered dyeing it.  TestCafe, however, is fresh out of college and ready to take on the world.  Full of wide-eyed wonder, it’s exciting to use and has all the energy of youthful optimism.

With age and maturity comes experience, and webdriver has that in droves.  When you google around for answers to questions and problems you have, you find answers.  Lots of answers.  Answers from people who have been through what you’re going through and who have the scars to prove it.

TestCafe has seen the problems of webdriver and, with all the enthusiasm of youth, has decided to fix them out of the box.  This is really nice (see the waits section above), but when you do run into problems it’s a lot harder to find answers.  There just aren’t as many examples of people hitting the problems you have, and so you rely much more on the documentation (which is really good, by the way).  Unfortunately, documentation and well designed code still can’t anticipate every problem you will run into in the wild, and having a large community around a tool is really helpful for figuring things out.

Winner: Selenium

Overall

I was really impressed with TestCafe and I really want it to be the winner, but unfortunately, if I can’t figure out the cross-browser issues I’m having, I won’t be able to pick it. Maybe (hopefully) those are just some weird issues in our app, and for most people this won’t be a problem.  I think that if you don’t see the weird issues I’m seeing on non-Chrome browsers, the overall winner would be TestCafe.

Winner: Selenium (For me, for now), TestCafe (If it works cross-browser on your app)

Using Scripting to Explore

I have often talked about the perils of test automation. I think it is a seductive thing, and we fall into those seductive arms far too easily, only to get hurt in the end. There is also a lot of talk about whether testers should be able to code or not. I think you should give it serious consideration, but so often the “testers being able to code” conversation centers on test automation. Here’s the problem: writing good test automation, especially at the UI level, is one of the hardest types of coding to do well. I’ve been learning Selenium lately as I’m looking at automating some things for my team.  Getting that first test to work wasn’t that hard. Until.

Until I went in and tried to remove all those manual sleeps I had in there, and until I tried to clean it up so that it wouldn’t fail later, and until I started to think about how to make it work cross-browser.  There is so much more to writing good automation than just getting some selenium commands to work. After working on it for a while, I realized that it probably isn’t worth it for this project. It is pretty small, and we can get through the regression tests pretty quickly by hand.  However, that doesn’t mean there is no place for automation.  Why does our automation have to be fully and completely hands off? There are some regression tests we do that could be done much more quickly with the help of some automation.  There are also some areas where we know we have niggly bugs that we should be able to pin down with the help of some automation.

You see, I think one of the best places for testers to use automation is for helping with exploration. There are so many questions that take a long time to answer, or that aren’t possible to answer at all, without the help of some scripting. I think we do ourselves a disservice as testers when we only think of our scripting skills as applying to regression test automation. In light of that, I think it is time for more testers to think about how to leverage programming skills to get better at exploring.
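
To make this concrete, here is a minimal sketch of the kind of throwaway exploration script I have in mind: it grabs every link on a page and flags the ones that respond with an error (the URL is a placeholder, and it assumes the requests package is installed):

import requests
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("https://example.com")
# Pull the href off every anchor on the page
links = {a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a")}
driver.quit()
# Report any link that comes back with an error status
for url in sorted(filter(None, links)):
    status = requests.get(url, timeout=10).status_code
    if status >= 400:
        print(status, url)

Nothing here is production-grade automation; it’s a question-answering tool you run once and throw away, which is exactly the point.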

I’ve been working on a course called scripting for testers that will help demonstrate the value of this and show some examples of what this could look like. As I have material that makes sense to share I will put it up here as part of a series I will call exploring with code. Keep an eye out for it. I hope it will be helpful!


Start With What You Know

The blank screen stared back at me.  Blank screens are intimidating. Where do you start?  What do you do first?  What if you go the wrong way? Sometimes getting started is the hardest part.

I recently moved from desktop testing to testing websites. I needed some automation, but – I’ll be honest – I had no idea where to start.  I’d heard of things like selenium and webdriver and many tool vendors, but which tools were going to do what I needed? I was only just starting to get a good feel for how the product itself worked; how was I supposed to pick a tool to use?

Learn to do by doing and start with what you know.  Sometimes simple proverbs can give you the answers.

I know python quite well, so I started there. Sure enough, there are some great, easy-to-use libraries that let me run selenium and webdriver. Are these going to be what I use in the long run?  I don’t know. But I do know that I have been learning a ton as I’ve started to ramp up on the python libraries for selenium. I’ve been learning a lot about Selenium, but I have also been learning a lot about how webpages work, how the DOM gets populated, how iFrames work, about javascript and html elements, and the list goes on.
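
As one small example of that learning, here is the kind of poking around these libraries make possible (a sketch with a placeholder URL and frame name):

from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://example.com")
# Ask the browser directly about the state of the DOM
print(driver.execute_script("return document.readyState"))
# Elements inside an iFrame are invisible to Selenium until you switch into it
driver.switch_to.frame("content-frame")
print(driver.page_source[:200])
driver.switch_to.default_content()
driver.quit()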

If I had tried to make a plan for all the things I needed to learn, I would have missed so much.  By starting with the little that I did know and finding something that intersected with it, I was able to start creating a test. As I did that I ran into issues and started learning about the things I needed to know in order to solve those issues.

Do you remember playing Age of Empires?

[Image: Age of Empires starting screen]

You would start out with most of the screen dark and only one little spot of light, and as you explored the game, more and more would become light.  This is where I am: a little spot of light and a whole huge map to explore. There is only one way to find out what is on the map.

Time to explore!


How Should Testers use AI?

As a tester I am excited about the possibilities of AI and machine learning.[1] I hope that there will be many ways to leverage this technology to level up our testing powers. As with any tool, though, we need to recognize that powerful and magical are two different things. As hardware and machine learning capabilities get more powerful we need to leverage them, but just using them won’t magically produce good results.

If you were smashing up some concrete, a jackhammer would be much better than a sledgehammer, right?  What if you started using the jackhammer like it was a sledgehammer, swinging it over your head and hitting the concrete with it? You have a better tool, but are you going to be more effective? When we approach new tools we need to use them effectively.  We can’t just use them as if they were the same as the old tools we had.

I recently read an article about how testers could use AI to improve the stability of UI tests. One idea presented was that UI tests could be more robust if we used AI to figure out what a button was.  By using machine learning and image recognition we can figure out if a button is the submit button or the back button even if the way it looks has changed.

Yawn.

Ok, that was a bit rude, but the reality is that if all AI is going to do is make UI automation more robust, I have better things to do than learn how to use it in my testing. There are a lot of easier ways to make UI automation more robust (not least of which is not doing so much of it).  We don’t need AI to help us out here as much as we just need a bit more common sense.  Throwing more technology at a problem that is, at its heart, a problem of misunderstanding how to effectively use UI automation won’t help. It will just allow people to add more UI automation without thinking about some other effective ways to test their apps.  To return to the jackhammer metaphor, if someone is smashing up the wrong part of the sidewalk, giving them a better tool won’t help with what matters. They will just smash the wrong thing more effectively.

If you want to stand out from the crowd you’ll need to dig a little deeper. You’ll need to find some uses for AI that are a little more interesting. I’ve just started poking my nose into some of the AI libraries and trying them out, so these are just some brainstorm style ideas I have at this point.  I want to think about this ahead of time and see if it would be something that is worth further investigation.  I’m always on the lookout for new tools to add to my testing toolbox – could machine learning be one of them?

Ideas for using AI in my testing

Data Analytics

“The developers need to implement that for me,” you might object.  This is true in some areas, but think about it a little longer. What data do most testers have access to? What happens when you run your test automation? Does it generate any log files? Could it?  Can you cross-correlate data in those files with some simple system information? We generate a lot of data during test sessions. Can we get some information out of that data? Do you need the developers to implement advanced telemetry in your app to do this? I think there are a lot of ways machine learning could be used to generate insights into the behavior of your application that do not involve advanced telemetry.
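
As a hypothetical sketch, suppose each automation run appends one row per test to a CSV log; even a few lines of pandas will start to surface patterns (the file name and column names are invented for illustration):

import pandas as pd
# Hypothetical log written by the automation itself:
# one row per test run with columns test, duration_sec, memory_mb, passed
runs = pd.read_csv("test_runs.csv")
# Do failing runs look different from passing ones?
print(runs.groupby("passed")[["duration_sec", "memory_mb"]].mean())
# Which numeric signals move together with pass/fail?
print(runs[["duration_sec", "memory_mb"]].corrwith(runs["passed"].astype(int)))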

Writing Bug Reports

We all know that bugs like company. Where there is one bug there are often more.  What about using machine learning to parse through the bug reports and see if there are patterns to be discerned? Where do the bugs cluster? What kinds of steps/actions frequently show up in bug reports? What phrases are common to bug reports? We have bots that can write realistic news articles; why shouldn’t we use them to write plausible bug reports?  Will those reports show actual defects? Probably not, but they could generate some great test ideas and areas of focus. One of the biggest challenges in testing is reducing the scope of infinite possibilities in a way that is valuable.  Could we use AI to help us with this?
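
As a rough sketch of the pattern-finding half of this idea, a few lines of scikit-learn can cluster bug report summaries by the words they share (these example reports are made up; in practice you would pull them from your bug tracker):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
# Made-up bug report summaries standing in for a real bug tracker export
reports = [
    "Crash when saving a file with a long name",
    "Save dialog hangs on a network drive",
    "Login button unresponsive after timeout",
    "Session expires and login page shows an error",
]
# Vectorize the text and group similar reports together
vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
for label, report in sorted(zip(labels, reports)):
    print(label, report)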

Writing New Scripts

Our current ideas around regression testing involve writing scripts that do the same darn thing every time they are run.  There are reasons we do this, but there are also a lot of problems with this approach. What if we gave a machine learning algorithm the pass/fail data for our tests and let it figure out which ones are likely to find bugs?  What if we took it further and let the AI suggest some new tests?  I think there are a lot of possibilities for automating the finding of regressions in a much more efficient way.
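
A toy sketch of the first half of that idea: given some simple per-test history, even a basic model can rank which tests are most likely to fail next (the features and numbers here are invented for illustration):

from sklearn.linear_model import LogisticRegression
# Invented per-test features: [recent failure rate, nearby code churn]
X = [[0.0, 1], [0.1, 4], [0.5, 2], [0.7, 8], [0.2, 0], [0.9, 5]]
y = [0, 0, 1, 1, 0, 1]  # 1 = the test failed on its next run
model = LogisticRegression().fit(X, y)
# Estimate the failure probability for a test in the next build
print(model.predict_proba([[0.6, 7]])[0][1])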

Conclusion

In looking at these things, I think there is a lot of potential for machine learning to help with testing in the future.  However, it seems like most of these things are still too advanced for the individual tester to do.  We will need better machine learning tools before we see a payoff on investments like this. For now, I intend to learn a bit more about machine learning, but I don’t think it is going to transform much in my testing in the short term.  I guess we will see in a year or two where the tools are at.

I really do hope that we see development of creative machine learning tools for testing that break into entirely new and innovative areas. So far much of what I see people talking about using ML for in testing is to do the same things we did in the past, but better – because AI. I’m sure there are some gains to be had in those areas, but I really think we will see AI become a powerful testing tool when we start to use it to do new things that we can’t do at all with our current set of tools.

What do you think? Where is AI going?  How will testers use it in the future?

Footnotes

1. Note that I am being sloppy with my terms in this article and using Machine Learning and AI interchangeably.

Do Tools Matter?

Life is different.

I’ve moved from the world of desktop software into the world of web software.  I was working on a team with thousands of integration tests – now I’m on a team that has very few. I was one of the experts on the team – now I’m asking all the newbie questions.  Things have changed a lot. In a new environment there are many new things to learn and it raises a question: is being a good tester about the tools you use, or not?

Yes.

Being a good tester is not about the tools

I’ve noticed that I was able to very quickly get up to speed on the product and how it worked.  I was able to file several bugs in my first week.  I was able to feel comfortable talking about the different areas of the product. I was able to understand much of what the team was talking about very quickly.

How so?

I have been a tester for almost 10 years. My experience might be in a different world, but there are still many things that are common to software testing regardless of the type of application you are testing.  Software, for example. And programmers.  You see, both desktop and web developers are humans. Desktop and web developers also both use this thing called code to help them accomplish things. Programmers and programming languages share things in common that go beyond the boundaries of the kind of application you are building. The ways in which we create and interact with data structures and control flows are constrained by the way coding languages work.  The ways that we create and maintain code are constrained by the ways that humans think and work.

I was actually surprised at how much of my skill and experience as a tester was easily transferable to an entirely new domain.  There are many things about being a good tester that go far beyond the tool set being used and the type of application being tested.  There is a real sense in which being a good tester is not about the tools you use.

Being a good tester is about the tools

But that isn’t the full story. In the last couple of weeks I have not only been learning a new application, I have also been learning many new tools.  Some of them are internal tools that help make things easier for me.  Some of them are very well known tools that I didn’t need to use before: developer tools in Chrome, for example, or JMeter and Webdriver and Postman, and the list goes on. As I get a better understanding of the application I get the urge to dig deeper and learn more, and for that I turn to tools. Could I quickly get up to speed and contribute without learning new tools? Yes! Am I content with staying there?  No! I want to be a testing Batman. A skilled tester can be effective independent of the tools used, but a skilled tester will use tools to get even better.

I’m re-building my toolbelt now.  I’m like an electrician who has moved from doing residential work to commercial buildings. The underlying skill of knowing and understanding how electricity and wiring work stays the same, but there are some things that need to be done differently.  If the electrician refuses to learn and use new tools, she will be less effective than she could be, no matter how skilled an electrician she is.

I still use the same underlying skills, but there are some new tools I need to help me be more effective at my job. What tools do you use? Do you have any recommendations for a tester new to web apps? What tools should I add to my toolbelt?