Expect Crashes

Dying in a car crash is one of the leading causes of death for those of us living in developed countries. It’s not surprising, then, that we spend a lot of time as a society trying to mitigate that risk. We implement things like speed limits, safety standards for vehicles, and education programs for drivers to try to prevent crashes. Prevention is the best cure, and all that.

We don’t stop there though, do we? We know that despite our best efforts, crashes are still going to happen, and so we put in place things like seatbelts and airbags and safety rails. We also have tools in place to help us deal with the problems that arise after the crash: ambulances, paramedics, and laws about moving over for emergency vehicles. We don’t just try to prevent crashes; we also try to mitigate the effects of crashes.

What I’ve been describing here is an approach to injury prevention that can be summarized with the Haddon Matrix. We have a pre-event phase, a during-event phase, and a post-event phase, and we have strategies to help mitigate the impact in each phase.

I like to take ideas from other fields and think about how they relate to testing, so let’s do that for a minute here.  What phase do we spend most of our time in as testers?

Traditionally it has been the pre-event phase. We are trying to find the bugs before they ever make it to the customer. We are trying to find the crashes and errors ahead of time. We work primarily in the prevention realm. But shouldn’t we consider that despite our best efforts, some crashes will still happen? We will have issues that customers face, so what is our strategy at that point? What are our during- and post-event strategies for bugs that do get exposed to customers?

Think about filling out something like the table below. I simplified the Haddon matrix by taking out environmental factors, but just the process of going through this could be a helpful way to see where you can invest as a company.  The ability to prevent problems is important and helpful, but as applications grow in size we will never be able to do that completely.  We need to have strategies in place to deal with what happens when things go south.  What are your strategies?

Pre-Event
  Human Factors:
  • Testing
  • Dogfooding
  • Code Review
  System Factors:
  • Feature Flags
  • Build Processes
  • Realistic Test Environments

During-Event
  Human Factors:
  • Dynamic response to failures
  • Ability to debug in production
  • Immediate access to live production data
  System Factors:
  • Logging & Alerts
  • Automatic fail safes
  • Self-healing capabilities
  • Flighting and rollback ability

Post-Event
  Human Factors:
  • Root cause analysis
  • Customer follow up
  System Factors:
  • Quick build pipelines
  • Ability to get fixes to production in a timely manner

Selenium or TestCafe?

I’ve been looking into automation tools. I was messing around with Selenium a bit and made some scripts to help us do a few things more quickly. Before investing too much in a particular tool, though, I wanted to look around a bit at what else might be out there. I came across TestCafe, heard some good things about it, and thought I’d give it a try. I’m new to both tools, so I thought: as a newbie, why not compare the two? So here goes:

Looks

We need to start with the important thing first: colors. More specifically, are there pretty things, and do the colors make me happy? Selenium/webdriver? Not really. TestCafe? Well, it has enough good looks to make a beauty queen jealous.

Joking aside, one of the things I like about TestCafe is that it gives me some info about what it is doing during the run with a status bar at the bottom. This kind of gives a peek into the mind of the system and makes debugging easier. TestCafe also gives nice debug output in the console for failed tests.

Winner: TestCafe

Installation and Setup

What about setup?  How hard is it to get started?  For TestCafe, all I had to do was

npm install -g testcafe

and about 30 seconds later it was done. My first test was running about 15 minutes later. Selenium wasn’t too bad either, but I did have to install webdriver for a few browsers as well as pull some Selenium packages into Python. Since I was driving things through Python for my test, the Selenium part was pretty easy:

python -m pip install selenium

but there was still some added complexity in getting webdriver to work for all the browsers, and setting up the first test was a little more involved as well. All in all, it probably took about an hour to get my first test running with Selenium/webdriver.
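
To give a flavor of what a first Selenium test looks like, here is a rough sketch (the login page URL and element IDs are made up for illustration, and it assumes chromedriver is already on your PATH):

from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical first test: log in and check we landed somewhere sensible.
driver = webdriver.Chrome()
driver.get("https://example.com/login")
driver.find_element(By.ID, "username").send_keys("test_user")
driver.find_element(By.ID, "password").send_keys("not_a_real_password")
driver.find_element(By.ID, "login-button").click()
assert "Dashboard" in driver.title
driver.quit()

As written there are no waits in there, which turns out to matter; more on that in the Waits section below.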

Winner: TestCafe

Cross Browser

The whole purpose of this is to be able to more easily check things across different browsers, right? So how easy is that to do? With both tools I first ran the test in Chrome, because, well, that is the browser all sane people use, right? Once I had my test working in Chrome I tried running it in other browsers. In both cases the test didn’t work in any other browser. It took me a while with the Selenium test to work through the issues (mostly involving timing and waits).
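
Once the timing issues are sorted out, pointing the same Selenium test at other browsers mostly comes down to which driver you construct. Roughly something like this sketch (run_login_test is a hypothetical helper wrapping the login test from earlier, and it assumes the other browser drivers are installed):

from selenium import webdriver

# Run the same hypothetical login test against each browser we have a driver for.
for make_driver in (webdriver.Chrome, webdriver.Firefox, webdriver.Edge):
    driver = make_driver()
    try:
        run_login_test(driver)
    finally:
        driver.quit()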

With TestCafe, I couldn’t get the test to work on any other browser. As far as I can figure out, it has to do with JavaScript errors related to using Polymer components on our login page. TestCafe has an option to skip JavaScript errors, and this let me get a little bit further, but I was still unable to complete the test. My suspicion is that we are doing something a little off in the timing of loading our Polymer web components. There does seem to be a fix coming in TestCafe that will let me work around this, but at the end of the day, I was unable to get TestCafe to work on our app with any browser other than Chrome. I poked at it for an hour or two, and I’m sure there is a solution for it, but at this point I have not been able to test in other browsers.

Winner: Selenium/Webdriver

Waits

Much like renewing your driver’s license, the most annoying part of using Selenium is dealing with waiting. The trick is to get it to make sure that what I want is there without letting it have a nap every few seconds. I probably spent more time trying to figure out waits than anything else (and, to be honest, the script I made still had some explicit sleep() calls in place). With TestCafe, this just worked. It has implicit waits built into the async calls, and it worked out of the box. This is actually the primary reason I was able to get the first test working so quickly: I didn’t have to worry about waits.
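
For reference, the usual Selenium answer to this is an explicit wait rather than a blind sleep. A minimal sketch (the URL and element ID are made up):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Wait up to 10 seconds for the button to become clickable instead of sleeping blindly.
login_button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "login-button"))
)
login_button.click()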

Winner: TestCafe

Language

Selenium has a lot of support in various languages. For me that meant I could use Python and feel that joyful feeling that comes from coding in Python. It also means that you can write your tests in the same language as your app, or in your favorite language (which, of course, is Python).

TestCafe uses, um, JavaScript. I don’t like writing code in JavaScript. Probably mostly because I haven’t done it much and don’t fully understand how things work, but there you have it. On the plus side, it does give you a lot of power and flexibility in being able to hook into your app in some interesting ways.

Winner: Selenium

Maturity

Webdriver and Selenium have been around for a long time. They have grey hair. They might even have considered dyeing it. TestCafe, however, is fresh out of college and ready to take on the world. Full of wide-eyed wonder, it’s exciting to use and has all the energy of youthful optimism.

With age and maturity comes experience, and webdriver has that in droves. When you google around for answers to questions and problems you have, you find answers. Lots of answers. Answers from people who have been through what you’re going through and who have the scars to prove it.

TestCafe has seen the problems of webdriver and, with all the enthusiasm of youth, has decided to fix them out of the box. This is really nice (see the Waits section above), but when you do run into problems it’s a lot harder to find answers. There just aren’t as many examples of people hitting the problems you have, so you rely much more on the documentation (which is really good, by the way). Unfortunately, documentation and well-designed code still can’t anticipate every problem you will run into in the wild, and having a large community around a tool is really helpful for figuring things out.

Winner: Selenium

Overall

I was really impressed with TestCafe and I really want it to be the winner, but unfortunately, if I can’t figure out the cross-browser issues I’m having, I can’t give it the title. Maybe (hopefully) those are just some weird issues we have in our app and for most people this won’t be a problem. I think that if you don’t see the weird issues I’m seeing on non-Chrome browsers, the overall winner would be TestCafe.

Winner: Selenium (For me, for now), TestCafe (If it works cross-browser on your app)