API Testing is Exploratory Testing

Sometimes there are debates about what testing is and what words we should use when talking about aspects of it. I don’t worry myself too much about those kinds of debates, but there is something I have found to be true in my experience: at its core, all testing is exploratory. You can’t really find out interesting stuff about a piece of software without doing exploration.

This holds true no matter what kind of testing you are doing. Sometimes when we hear about aspects of testing that require more technical skill, we think they require less exploration, but I really don’t think so. For example, I have been doing a lot of API testing and am working on courses that teach API testing. This testing has involved a lot of exploration!

There are a lot of tools that can help you with API testing, and I have been using many of them. Let me be clear, though: using tools does not preclude exploration. I found numerous bugs in the APIs, but I didn’t do it by having a tool read in a Swagger specification and hitting run. I did it by exploring. Most API testing tools seem to be focused on helping you set up automated regression tests. There is a place for regression testing, but don’t forget that these tools can also work quite well in helping you explore the API.
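
To make that concrete, here is a minimal sketch of the kind of quick, throwaway probing I mean, written in Python with the requests library against a hypothetical /scores endpoint (the URL, endpoint, and parameters are invented for illustration). Nothing here is a regression suite; it is just a way of asking the API questions and looking at what comes back.

```python
import requests

BASE = "https://example.test/api"  # hypothetical base URL, for illustration only

# Ask the same kind of question a few different ways and eyeball the answers.
# The point is to notice surprises, not to assert a known-good result.
for params in [{}, {"page": 0}, {"page": -1}, {"pageSize": 10000}, {"sort": "???"}]:
    resp = requests.get(f"{BASE}/scores", params=params)
    print(params, resp.status_code, resp.headers.get("Content-Type"))
    # Print a slice of the body so unexpected shapes or error formats stand out.
    print(resp.text[:200])
```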

I was reflecting on the kinds of bugs I found during some recent API testing sessions, and I found that, generally speaking, they fell into a few categories. I think these categories show how much API testing involves exploration.

Design choices

Many of the issues I found had to do with design choices. In many cases, the information we needed was in the API, but it was given to us in a way that was hard to use. Sometimes it could only be accessed through endpoints that were not relevant to the current context; other times similar information was presented in inconsistent ways in different parts of the API. When it comes to creating an API (as with any software), there are many different ways to design it. Evaluating how effectively the design of an API solves the problem you are working on is a thoroughly exploratory process.
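
As a rough illustration (the endpoints, field names, and response shapes here are invented, not from any real API), this is the kind of small side-by-side comparison that surfaces design inconsistencies while exploring:

```python
import requests

BASE = "https://example.test/api"  # hypothetical base URL, for illustration only

# Two imagined endpoints that both report a student's score.
summary = requests.get(f"{BASE}/students/42/summary").json()
report = requests.get(f"{BASE}/reports/42").json()

# If similar information is represented inconsistently, it shows up here:
# e.g. one endpoint might return {"score": 85} while the other nests it as
# {"result": {"value": "85"}} with the number serialized as a string.
print("summary:", summary.get("score"))
print("report: ", report.get("result"))
```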

Missing functionality

I also found issues related to the API not providing or respecting information the business domain required. This could show up in various ways. Sometimes certain object states were not represented in the API. Other times it was not respecting domain permissions correctly. There were also times when the API interacted with other parts of the product in an incorrect way. Each of these kinds of issues required knowledge of the business needs along with the current and desired functional requirements. It would be hard (or even impossible) to find issues like this without exploration.
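
For example (again with an invented endpoint, roles, and tokens), a quick exploratory probe of domain permissions might look something like this:

```python
import requests

BASE = "https://example.test/api"  # hypothetical base URL, for illustration only

# Tokens for two hypothetical roles; in a real session these would come from auth.
tokens = {"teacher": "TEACHER_TOKEN", "student": "STUDENT_TOKEN"}

# A student probably should not be able to change another student's grade.
# Exploring is simply trying it and seeing whether the API agrees.
for role, token in tokens.items():
    resp = requests.put(
        f"{BASE}/students/42/grade",
        json={"grade": 100},
        headers={"Authorization": f"Bearer {token}"},
    )
    print(role, resp.status_code)  # expect something like 200 vs. 403
```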

Algorithmic problems

Some of the problems I found were more algorithmic in nature: things like scores not summing correctly in some instances, or rounding errors. Issues like this could probably be found with more scripted (i.e. less exploratory) approaches, but even here a lot of exploration is needed to build up an understanding of the properties of the system. For example, a property of the system might be that a set of scores should be summed to produce the total score, except when the user overrides the total score. You might know about this property from a business specification, but how do you know how it is represented in the API? You have to investigate. You have to use the API to see how the various aspects of this are represented before you can write any scripted check for the correct usage of this property. You also have to explore to figure out which contexts are relevant to this property. What kinds of things might cause this to be wrong? What variations are there in other properties that might influence this? What kinds of inputs could mess with this? These are all exploratory questions.
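
Once that exploration has been done, the property can be turned into a check. Here is a minimal sketch, assuming an invented response shape in which the total carries an overridden flag (none of these names come from a real API):

```python
import requests

BASE = "https://example.test/api"  # hypothetical base URL, for illustration only

resp = requests.get(f"{BASE}/assessments/42").json()

# Invented shape: {"scores": [3, 4, 5], "total": {"value": 12, "overridden": false}}
scores = resp["scores"]
total = resp["total"]

# The property: the total equals the sum of the scores, unless a user overrode it.
if not total.get("overridden", False):
    assert total["value"] == sum(scores), (
        f"total {total['value']} != sum of scores {sum(scores)}"
    )
```

Notice how much of the work lives outside this little script: knowing which endpoint exposes the total, how an override is represented, and which inputs might break the relationship all came from exploring first.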

Conclusion

So in conclusion I want to make this point: if you want to learn about API testing, don’t focus first on the detailed automation workflows that the various API testing tools provide. Focus on figuring out how to uncover crucial information about the API. Where are the problems? What does this API do? How will clients use it? What impact does the design have on solving the problem it was written for? There are many things to consider when testing an API, and I hope to get into more detail in future posts, but for now I’ll leave you with this thought:

Don’t think of API testing as primarily a technical challenge.  Think of it as an exploration challenge.

Watchdog or Service Dog?

Are you a watchdog?  I’m speaking to testers here. Are you a watchdog? Is it your job to keep a close eye on the code and product and make sure no bugs come through? What do you do when you see a bug? Do you start barking up a storm and waking everyone up? BUG, BUG, BUG. We need to fix it!  No bugs shall pass!

Or are you a service dog? You watch out for pitfalls, and you help others navigate them. You don’t just alert others to the presence of a bug; you help them figure out how to fix it and how to avoid it. Do you do something about the problems you find that goes beyond just telling people about them?

I’ve called you a dog for long enough, so let’s step out of that analogy for a minute. What I’m getting at is that we should step back and think for a minute about what a tester does. I’m asking a lot of questions in this article and not really answering them, because I want you to think about them. Do we just provide information and raise the alarm when things go wrong? Or can we do more? Are we willing to fix mistakes, or is it only our job to report them?

Are you a watchdog, or do you provide more services than just a loud bark and the ability to spot problems? If you only think of yourself as providing information about when things have gone wrong, it will affect the way you work. How important is it, for example, to file the bugs that you find? Are there other ways to deal with them? What do you spend your time on as a tester? These and many other questions have different answers depending on how you think about who you are and what your role involves. So how do you define yourself?

Are you a watchdog, or do you provide other services as well?