Do You Even Love Me?


Have you ever had a child put on their grumpy face, cross their arms and say something along the lines of “you don’t even love me”? Maybe you said no to them having more candy, or insisted they do their homework before playing video games, or did any of the millions of things that might upset a child.

The reality of course is that you do love them. In fact, it is precisely because you love them that you are trying to keep them healthy and make sure they are learning the things they need to know to be successful in life. They want something now, but you are looking at the bigger picture and you know that there is something better to be gained by giving up the immediate want.

Sometimes being customer focused can be like this.

Don’t get me wrong, I think the customer is ultimately the one who dictates the value of your software. If they don’t use it and find it valuable, you aren’t really making good quality software, regardless of how well you’ve met the specifications. But sometimes customers ask for things, and to be truly customer focused you have to say no. Much like the parent who sees the bigger picture, you know things that your customers don’t. You know the cyclomatic complexity of your code. You know the state of your build system. You know the risks that are there due to lack of test coverage.

Your customers don’t really care about that.  They just want the candy now, but you know what that candy will do to the health of your code. You know that you need to take that code to the gym and get a personal trainer to yell at it for a while first. Or maybe it’s even worse than that.  Maybe you need to call an ambulance and perform an emergency procedure.

Ok I’ll stop with the metaphors now, but the reality is that sometimes investing in underlying code quality issues is caring for the customer. Sometimes slowing down a bit to clean up your processes is required to allow you to move more quickly in the future.

There is a balance here of course. The point still is to be customer focused, and so you need to consider how healthy your code needs to be to meet those needs. The goal isn’t to have code with a six pack, rippling abs and a pearly white smile (sorry, the metaphor is too fun – I had to come back to it). The goal is to have code that is healthy enough to consistently deliver customer value.

As with your health, it’s a lot easier to do this if you make it a part of your regular routine.  It is a lot easier to stay healthy than it is to get healthy. Keep your code fit. Focus on the customer and make sure your code is able to help you do that. And the next time your customer asks ‘do you even love me?’ you can assure them that you do.  You are doing everything you can to have your code be around to help them out for as long as they need it. And if they’re patient, they might even get a lollipop.


Success and Failure


I hate failure. And not just in myself. I don’t like watching others fail. It’s really hard for me to watch someone doing something ‘the wrong way’ without correcting them. As a parent I know I have to let my kids fail sometimes so that they can learn lessons I could never otherwise teach them. My son has been learning to read this year, and as I sit with him and listen to him struggle with sounding out some simple words, I want to just take over and read it for him – it’s so much faster anyway. But I know that if I don’t let him struggle and fail I’ll actually be slowing down the learning process.

Work isn’t exempt from this.

How often have you jumped in to show someone what they were doing wrong or just taken over and done it for them? Or how many times have you stopped someone from making a mistake?  Sometimes it is a necessary thing to do, but have you ever short-circuited the learning process? I think that the idea of failure as something that leads to success is worth embracing.

If we leave it there though, we are in danger of missing an important point. Important lessons come from failure, but we need to set people up for success.

Now what gives on that? Have I given up on Aristotelian logic? We need failure and success?

Yup.

You see, to learn from failure you need to be set up for success. Let’s return to my son learning to read. He’s been set up for success. He’s been taught what sounds letters usually make in combination with each other. He’s been taught some basic rules of reading, and he gets feedback on what he is great at, as well as feedback on how to correctly say words he has messed up. He has my finger under the word guiding him. He has a teacher and parents that help him with this. So do I let him fail? Yes, of course, but only in a way that I think is setting him up for success.

I don’t want to see failure for failure’s sake. The goal is learning, not failure, and so the environment needs to be conducive to learning. Learning from failure.

This blog is about testing.  Hopefully by now you have been able to draw a few conclusions on your own that can help you in the way you work with your teammates, but let me pull out a few of my observations as well.

How can you as a tester set up others for success? One time I was riding with a friend and we were coming up to a stop sign. Based on how fast we were going, I was pretty sure he didn’t see the stop sign. I quickly looked both ways and there were no cars coming on the cross road, so I waited. Then just as he was entering the intersection, I casually said “That’s a stop sign eh” (yes, I am a Canadian). The look of panic on his face as he realized what he had just done was priceless, but why tell this story here? I want us to think for a minute about the key factor in there – I looked both ways. I wanted to let him learn a lesson about paying attention that he wouldn’t soon forget, but what if there had been a car coming down the cross road? If I had let him smash up his car, would I have been setting him up for success? No, of course not! The whole point of letting him run the stop sign was so that he wouldn’t do it again sometime when it would be more dangerous.

Setting someone up for success means letting them fail in safe ways.  Letting a serious bug go live so that a developer can ‘learn to write better unit tests’ is not setting someone up for success. Working with your team to help them do more testing and putting together a transition plan that moves away from you being a safety net is setting them up for success.  There will be failures along the way of course, but you need to be doing everything you can to make sure everyone has the tools they need to use those failures as learning experiences.

Some other examples of setting yourself up for success in failure include things like shortening your release cycle so that you can better respond to failure.  A bug found in the wild? Being able to quickly respond gives you success in the failure.  Or another example is having instrumentation in place that helps you understand your customer’s pain points.  You could think of those pain points as failures in your design or code, but if you have set yourself up for success you can respond to them and learn from them. These are just a few examples, and I’m sure that if you think about it, you can come up with many other ways to succeed in failure.

So don’t be afraid of failure, focus instead on setting yourself and your team up to learn and grow from failures.

Shortcut!


During World War II, the OSS (the precursor to the CIA) wrote a manual called the Simple Sabotage Field Manual. This manual presented ways that those in German-occupied territories could sabotage the Nazi oppressors. There is a section in the document about “General Interference with Organizations and Production.” The sabotage methods that are shared in this manual are fascinating, not least because I think we see so many instances of them happening in companies today. This is a document outlining the ideas of people who sat down and deliberately thought about how to make things less efficient. We would do well to learn from it.

Let’s look at the first one.

Never permit short-cuts to be taken in order to expedite decisions.

Sound familiar?

“We can’t release this until we have all the test cases signed off.”

Approvals need to go through your manager, who passes them to her manager, who passes them to the director, who makes a decision and passes it back down the line.

The developer can’t get started until the designs come in.

The tester has a huge backlog of items to get through before we can ship.

Having processes in place is often a good thing, especially as a company gets bigger, but we need to allow for flexibility in the processes.  A process can’t anticipate everything up front, so sometimes producing value requires shortcuts.

So can we rephrase that? “Always permit shortcuts to be taken in order to expedite decisions?”

No, that defeats the entire purpose of having processes in place at all.  If we always permit shortcuts, the shortcuts themselves are the process. How about this?

“Permit shortcuts to be taken in order to expedite decisions”

I put that in a quote, so you know it must be good, right? We need to be ok with people shortcutting the process sometimes. If this freaks you out, perhaps your team needs to work on trust. It is true that there are many times when it would not be helpful to take a shortcut. For example, if you have legal auditing standards you need to meet, or if the shortcut would add a lot of risk without much benefit. But there are also times when it would be good to take a shortcut.

In order to be comfortable with people taking shortcuts you need at least two things.  You need to trust that those taking shortcuts are competent and are working towards the same goal.  

Competent doesn’t just mean good at their job. It also means they have the information they need to make those kinds of decisions. If I shortcut a process, I need to know why it is there. What is the purpose of it? Will this shortcut violate the intent of it or will it help us? To make decisions like this we need information. Part of having a competent team is having a well-informed team.

The other important factor is that we are all working towards the same goal. If you want to allow people to take shortcuts you need to know that their shortcut will lead in the right direction. Sometimes, I think we can get lazy and let process be a substitute for vision. As long as people conform to the process we know they are heading in the correct direction right?  Even if it is slower.

To allow people to go outside of the process means to know that they are aligned with what you are trying to do.  This is a harder thing to do.  It is relatively easy to force people to comply with a process.  It is much harder to keep a team of people all rowing in the same direction.

Now let’s connect this all back to testing. Are your testing processes flexible enough to allow for shortcuts? Are you a competent tester – the kind of person people can trust to do the right thing and get the job done well? Are you aligning yourself with the business goals of the company? Do you know what they are?

Sometimes we can complain about how there are processes in place, but if we want to be a part of a company that allows shortcuts in those processes, we need to be the kind of testers that can be trusted with this. Up your skills.  Learn new things. Build relationships. Learn the business.

Take a shortcut.

 

When Should you Automate?


When should you automate?

Well let’s think about it for a minute. What is automation good at?  Repeating the same thing over and over again.  So when should you automate? When you have something you want to repeat over and over right?

But let’s think about it a bit more.  What does automation do?  Repeats the same thing over and over… What if you want to change something?  Your automation might just lock you into something you don’t want any more.

Automation is a powerful way to add leverage. If it is done well it can allow you to get way more done in the same amount of time. However, it can also reduce flexibility. Let’s look at an example. You’ve created a high level test that checks for the existence of a certain value in a table. Now you want to change something in your app so that the value will be in a div instead of a table. At this point you either have to get rid of the test, re-write it completely, or not make the change at all. Your automation has made you less flexible.
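To make that concrete, here is a minimal sketch of that kind of check in Python with Selenium. The URL and locators are made up for illustration, not taken from any real app:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/report")  # hypothetical page

# This locator is welded to the table markup. Move the value into a <div>
# and the test fails, even though the value is still right there on the page.
cell = driver.find_element(By.CSS_SELECTOR, "table#results td.total")
assert cell.text == "42"

driver.quit()

A locator keyed to something more stable (say, a dedicated data attribute) would survive that particular change, but the general point stands: the more a check knows about today’s implementation, the more it resists tomorrow’s.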

So we still haven’t answered the question: when should you automate? Let’s take a stab at a couple of heuristics that can be used to help answer that question.

When you know what you want

If you have confidence that things will be a certain way for a significant amount of time, it probably makes sense to do some automation.  The designs are settled on and we are fairly certain this is the way we will go for the foreseeable future. In other words what we are looking at here is an estimate of the shelf life of the automation. How confident are you that this is something we want to lock ourselves into?

When the automation is simple

Sometimes though, automation still makes sense even when we don’t know what things will look like. For example, when the automation is simple to make. If it only takes me two minutes to automate something, it doesn’t really matter too much if the design changes next week and I have to throw that test out.

ROI

In a sneaky way, what I’ve been talking about here is your return on investment. How do you know when to automate?  Well it comes down to understanding how much leverage your automation will give you and for how long, as compared to the cost of creating that automation.  If it is easy to automate something, the ROI comes more quickly and so we can create it in more uncertain environments.  If it is hard to automate something, we need to have a lot more confidence that we will be able to use that automation for the long term.
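If you want to make that trade-off concrete, a quick back-of-the-envelope calculation helps. The numbers below are entirely invented; the shape of the math is the point:

# Illustrative break-even math for one automated check (invented numbers).
cost_to_automate = 120     # minutes to write and stabilize the test
manual_check = 5           # minutes to do the same check by hand
runs_per_week = 20         # how often it would run in CI
shelf_life_weeks = 12      # how long before the feature changes underneath it

minutes_saved = manual_check * runs_per_week * shelf_life_weeks  # 1200
print(minutes_saved > cost_to_automate)  # True: worth automating at these numbers

Change the shelf life to two weeks, or the cost to automate to a couple of days, and the answer flips – which is really all the heuristics above are getting at.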

I want to take a step back from this and think about how it applies to test automation in general.  I think there is a principle here that bears some reflection.  Software creation is an inherently unstable process. We are constantly changing things in response to customer and business needs. In many cases the shelf life of test automation is going to be quite short. It won’t be long before a test fails.  If we think about this in terms of ROI doesn’t that imply that we should bias towards smaller tests?  Smaller, easier to create automated tests give a return on investment more quickly. The more difficult a test is to create and maintain the more hesitant we should be about automating it.

So when should you automate? When it gives you a good return on your investment.

Hello? Hello?


I’m sure we are all busy.  It seems to be a hallmark of our culture.  I’ve certainly been busy over the last couple of months! In addition to my regular activities with work, family and church, I’ve been working on a course about scripting for testers.  This course will be done in partnership with LinkedIn Learning and will be published on their platform.

Needless to say, getting this course together has been a lot of work, but I am very excited to be heading out to California to record it! I’m also excited for the course itself. I’ve talked a lot on this blog about the idea that testing is a technical job and that as testers we need to think about how we use technology and automation to do more than just run regression tests. This course is meant to help testers get started on the path to being technical testers. Maybe you’ve been in a more traditional or manual testing role and need to learn some basic skills, but don’t know where to start. The idea of this course is to give you that basic grounding and help you get started.

There is so much to do and learn that sometimes we just need some ideas and examples that show how others use these tools. My hope with this course is to help testers realize what a powerful and useful tool scripting is and how we can become better testers as we use it. I also hope to show that we don’t need to be scared of it. There is a lot we can do with a limited amount of information. The best way to learn is to do. Take some time today to learn and do something new!

When I started this blog, one of the reasons I wanted to do it was knowing that the best way to learn is to teach. I have certainly found that to be true in putting together this course. It has been a huge learning experience for me. I’m very thankful and excited to have had this opportunity, and I’m sure I will be sharing some of my learning and experiences on this blog over the next couple of months. Stay tuned!

 

 

Expect Crashes


Dying in a car crash is one of the leading causes of death for those of us living in developed countries. It’s not surprising then that we spend a lot of time as a society trying to mitigate that risk. We implement things like speed limits and safety standards for vehicles and education programs for drivers to try and prevent crashes. Prevention is the best cure, and all that.

We don’t stop there though, do we? We know that despite our best efforts, crashes are still going to happen, and so we put in place things like seatbelts and airbags and safety rails. We also have tools in place to help us deal with the problems that arise after the crash. We have ambulances and paramedics and laws about moving over for emergency vehicles. We don’t just try to prevent crashes, we also try to mitigate the effect of crashes.

What I’ve been describing here is an approach to injury prevention that can be summarized with the Haddon Matrix. We have a pre-event phase, a during-event phase, and a post-event phase, and we have strategies to help mitigate the impact in each phase.

I like to take ideas from other fields and think about how they relate to testing, so let’s do that for a minute here.  What phase do we spend most of our time in as testers?

Traditionally it has been the pre-event phase. We are trying to find the bugs before they ever make it to the customer. We are trying to find the crashes and errors ahead of time. We work primarily in the prevention realm. But shouldn’t we consider that despite our best efforts, some crashes will still happen? We will have issues that customers face, so what is our strategy at that point? What are our during- and post-event strategies for bugs that get exposed to customers?

Think about filling out something like the table below. I simplified the Haddon matrix by taking out environmental factors, but just the process of going through this could be a helpful way to see where you can invest as a company.  The ability to prevent problems is important and helpful, but as applications grow in size we will never be able to do that completely.  We need to have strategies in place to deal with what happens when things go south.  What are your strategies?

Pre-Event
  Human Factors:
  • Testing
  • Dogfooding
  • Code Review
  System Factors:
  • Feature Flags
  • Build Processes
  • Realistic Test Environments

During-Event
  Human Factors:
  • Dynamic response to failures
  • Ability to debug in production
  • Immediate access to live production data
  System Factors:
  • Logging & Alerts
  • Automatic fail-safes
  • Self-healing capabilities
  • Flighting and rollback ability

Post-Event
  Human Factors:
  • Root cause analysis
  • Customer follow-up
  System Factors:
  • Quick build pipelines
  • Ability to get fixes to production in a timely manner

Selenium or TestCafe?


I’ve been looking into automation tools.  I was messing around with Selenium a bit and made some scripts to help us do some stuff more quickly.  Before investing too much in a particular tool though, I wanted to look around a bit at what else might be out there.  I came across TestCafe and heard some good things about it and so thought I’d give it a try.  I’m new to both tools and so I thought as a newbie why not compare the two? So here goes:

Looks

We need to start with the important thing first: colors. More specifically, are there pretty things and do the colors make me happy? Selenium/webdriver? Not really. TestCafe? Well, it has enough good looks to make a beauty queen jealous.

Joking aside, one of the things I like about TestCafe is that it gives me some info about what it is doing during the run with a status bar at the bottom. This kind of gives a peek into the mind of the system and makes debugging easier. TestCafe also gives nice debug output in the console for failed tests.

Winner: TestCafe

Installation and Setup

What about setup?  How hard is it to get started?  For TestCafe, all I had to do was

npm install -g testcafe

and about 30 seconds later it was done. My first test was running about 15 minutes later. Selenium wasn’t too bad either, but I did have to install webdriver for a few browsers as well as pull the selenium package into Python. Since I was driving things through Python for my test, the Selenium part was pretty easy:

python -m pip install selenium

but there was still some added complexity with getting webdriver to work for all browsers and setting up the first test was a little more complex as well.  All in all, it probably took about an hour to get my first test running with Selenium/webdriver.
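For reference, that first Selenium test looked roughly like the sketch below. The URL and locators here are placeholders rather than our real app, and it assumes chromedriver is already on your PATH:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes chromedriver is installed and on PATH
try:
    driver.get("https://example.com/login")  # placeholder URL
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("not_a_real_password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    assert "Dashboard" in driver.title  # placeholder check
finally:
    driver.quit()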

Winner: TestCafe

Cross Browser

The whole purpose of this is to be able to more easily check things across different browsers, right? So how easy is that to do? With both tools I first ran the test in Chrome, because well, that is the browser all sane people use, right? Once I had my test working in Chrome I tried running in other browsers. In both cases the test didn’t work in any other browsers. It took me a while with the Selenium test to work through the issues (mostly involving issues with timing and waits).
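To be fair, pointing Selenium at another browser is mostly a matter of swapping the driver, assuming each browser’s webdriver is installed. A rough sketch, with a placeholder URL:

from selenium import webdriver

# Assumes chromedriver and geckodriver are both installed and on PATH.
def make_driver(browser):
    if browser == "chrome":
        return webdriver.Chrome()
    if browser == "firefox":
        return webdriver.Firefox()
    raise ValueError("unsupported browser: " + browser)

for browser in ["chrome", "firefox"]:
    driver = make_driver(browser)
    try:
        driver.get("https://example.com")  # placeholder
        print(browser, driver.title)
    finally:
        driver.quit()

The hard part wasn’t launching the browsers; it was the timing differences between them.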

With TestCafe, I couldn’t get the test to work on any other browser. As far as I can figure out, it has to do with JavaScript errors related to using polymer components on our login page. TestCafe has an option to skip JavaScript errors and this let me get a little bit further, but I was still unable to complete the test. My suspicion is that we are doing something a little off in the timing of loading our polymer web components. There does seem to be a fix coming in TestCafe that will let me work around this, but at the end of the day, I was unable to get TestCafe to work on our app with any browser other than Chrome. I poked at it for an hour or two, and I’m sure there is a solution for it, but at this point I have not been able to test in other browsers.

Winner: Selenium/Webdriver

Waits

Much like renewing your driver’s license, the most annoying part of using Selenium is dealing with waiting. The trick is to get it to make sure that what I want is there without letting it have a nap every few seconds. I probably spent more time trying to figure out waits than anything else (and to be honest, the script I made still had some explicit sleep() calls in place). With TestCafe, this just worked. It has implicit waits built into the async calls, and it worked out of the box. This is actually the primary reason I was able to get the first test working so quickly. I didn’t have to worry about waits.
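For anyone fighting the same battle on the Selenium side, the pattern that eventually replaced most of my sleep() calls is an explicit wait. A rough sketch in Python, with a made-up page and locator:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/report")  # placeholder

# Instead of time.sleep(5) and hoping the page is ready, poll for the element
# for up to 10 seconds and continue as soon as it shows up.
total = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "#total"))
)
print(total.text)
driver.quit()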

Winner: TestCafe

Language

Selenium has a lot of support in various languages. For me that meant I could use Python and feel that joyful feeling that comes from coding in Python. It also means that you can write your tests in the same language as your app, or in your favorite language (which, of course, is Python).

TestCafe uses, um, JavaScript. I don’t like writing code in JavaScript. Probably mostly because I haven’t done it much and don’t fully understand how things work, but there you have it. On the plus side, it does give you a lot of power and flexibility in being able to hook into your app in some interesting ways.

Winner: Selenium

Maturity

Webdriver and Selenium have been around for a long time. They have grey hair. They might even have considered dyeing it. TestCafe, however, is fresh out of college and ready to take on the world. Full of wide-eyed wonder, it’s exciting to use and has all the energy of youthful optimism.

With age and maturity comes experience, and webdriver has that in droves. When you google around for answers to questions and problems you have, you find answers. Lots of answers. Answers from people who have been through what you’re going through and who have the scars to prove it.

TestCafe has seen the problems of webdriver and, with all the enthusiasm of youth, has decided to fix them out of the box. This is really nice (see the waits section above), but when you do run into problems it’s a lot harder to find answers. There just aren’t as many examples of people hitting the problems you have, and so you rely much more on the documentation (which is really good, by the way). Unfortunately, documentation and well designed code still can’t anticipate every problem you will run into in the wild, and having a large community around a tool is really helpful for figuring things out.

Winner: Selenium

Overall

I was really impressed with TestCafe and I really want it to be the winner, but unfortunately, if I can’t figure out the cross-browser issues I’m having, it can’t be. Maybe (hopefully) those are just some weird issues we have in our app and for most people this won’t be a problem. I think that if you don’t see the weird issues I’m seeing on non-Chrome browsers, the overall winner would be TestCafe.

Winner: Selenium (For me, for now), TestCafe (If it works cross-browser on your app)