30 Days of API Testing – What is it?

Ministry of Testing has another 30 Days of Testing challenge, this time around API testing. Looks like fun, and I’ve been doing a lot of API testing lately, so I’ll follow along and see how many of the challenges I can do over the next month.

The first one on the list is to define what API testing is. I actually see two distinct sides to API testing. In one sense it is testing of the APIs themselves, but in another sense it is testing an application or service by using its APIs.

I tend to think of API testing in the second sense, although of course there is overlap between the two. To me, API testing is about using the API to help you discover useful information about the product. That might sometimes be finding actual bugs in the API, but often it can mean using the API to drive testing of other things, or finding issues in the way different parts of the service work together.

Not a formal definition, but that’s the way I think about it. How do you approach it?

API Testing Glossary

I’ve been doing a fairly deep dive on API testing over the last several months, both as part of the project I am currently working on and as part of some courses I am preparing. As with any specialization there is a lot of terminology that goes into it, so I thought I would put together a post that summarizes a number of definitions related to API testing.

Many of these definitions can have different meanings depending on the context you are working in and the way they are used by your specific team, so don’t take this as definitive. They are just the way I have explained them to myself so that I can better understand and conceptualize them. By defining these terms I am better able to wrap my mind around them and to use them to do better testing. Hopefully they help you out too in your testing journey.

REST

Acronym for REpresentational State Transfer. This is an architectural style based on the doctoral dissertation of a guy named Roy Fielding. Obviously there is a lot that goes into it, but in really simple terms, a RESTful API is one that consistently applies actions (verbs) like GET, POST, PUT and DELETE to resources (nouns), which are usually URLs that may take some parameters.
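To make that concrete, here is a minimal sketch in Python using the requests library against a hypothetical books resource (the URL and fields are invented for illustration):

```python
import requests

BASE = "https://api.example.com"  # hypothetical base URL

# GET: read the collection of book resources
books = requests.get(f"{BASE}/books").json()

# POST: create a new book resource
created = requests.post(f"{BASE}/books", json={"title": "RESTful Testing"})
book_id = created.json()["id"]

# PUT: replace that book with an updated representation
requests.put(f"{BASE}/books/{book_id}", json={"title": "RESTful Testing, 2nd ed."})

# DELETE: remove the resource
requests.delete(f"{BASE}/books/{book_id}")
```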

Further Reading:

Martin Fowler’s take on using REST well

Ruben Verborgh has a good explanation on Quora

Hypermedia

People will argue about this (surprise), but some say that an API is only truly RESTful if it uses a hypermedia approach. Hypermedia means that the server tells you what resources are available for you to use. Every response from the server should tell you what other resources and actions are available on objects related to the request you just made. Sounds confusing? You are already pretty used to it. You came to this web page, and there are a number of links here that let you navigate to other places on the web. When we simplify it down, that is really all hypermedia in an API is doing. It is telling you about other links (endpoints) and actions that you can use in the API.
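To illustrate, here is a hypothetical (HAL-flavoured) response, and a client that follows the links the server provides instead of hard-coding URLs:

```python
import requests

# A hypothetical hypermedia response for GET /books/42 (HAL-style, made up):
# {
#   "id": 42,
#   "title": "RESTful Testing",
#   "_links": {
#     "self":   {"href": "/books/42"},
#     "author": {"href": "/authors/7"}
#   }
# }

BASE = "https://api.example.com"
book = requests.get(f"{BASE}/books/42").json()

# Instead of hard-coding the author URL, follow the link the server gave us.
author_href = book["_links"]["author"]["href"]
author = requests.get(f"{BASE}{author_href}").json()
```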

HATEOAS

Speaking of hypermedia, let’s just make up a big, long, hard-to-say acronym that describes its usage. HATEOAS stands for Hypermedia As The Engine Of Application State, and it is just a way of saying that your REST API uses hypermedia (while making people argue about pronunciation).

GraphQL

It’s new! It’s exciting! It shall rule the world! Ok, in reality we like to get excited about new things in the tech space, but GraphQL is just another way of specifying an API. It is a query language that helps to optimize some things in network API calls, so for applications that have high performance requirements it can be very helpful. It is a bit more complex and rigid than REST though.
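For a taste of the difference, here is a hypothetical GraphQL query sent from Python. Note that everything goes through a single endpoint, and the query names exactly the fields you want back:

```python
import requests

# Hypothetical GraphQL endpoint and schema; the query asks for exactly
# the fields we want and nothing more.
query = """
{
  book(id: 42) {
    title
    author { name }
  }
}
"""

resp = requests.post("https://api.example.com/graphql", json={"query": query})
print(resp.json())  # only the requested fields come back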

Idempotent

Big word for saying that every time you perform the same call, you end up with the same result. I’ll give you a silly example to help you remember. Think about putting snow tires on a car. Once you have them on, you have a car with snow tires. Now if you repeat that ‘request’ you will end up with the same thing: a car with snow tires. In an API this would be a PUT call. No matter how many times you send the call (with the same parameters), it should always leave you with the same result.
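Here is a minimal sketch of how you might check this while testing, using Python’s requests library against a hypothetical endpoint (the URL and payload are made up):

```python
import requests

BASE = "https://api.example.com"  # hypothetical API
payload = {"tires": "snow"}

# Send the same PUT twice; an idempotent endpoint should leave the car
# in the same state no matter how many times we repeat the request.
first = requests.put(f"{BASE}/cars/1", json=payload)
second = requests.put(f"{BASE}/cars/1", json=payload)

# Comparing the returned representations is a rough proxy for comparing state.
assert first.json() == second.json(), "PUT should be idempotent"
```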

Safety

Speaking of idempotency, a GET call is idempotent (you get the same result every time you execute it), but it has an additional property called safety. Safety just means that nothing changes on the server when you issue the command. Let’s use a silly example again. Imagine a bookshelf. You bend your head sideways and read the title off the spine of a book. Nothing has changed, and no matter how many times you do that, nothing will change. This is an example of a safe call.
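A similar sketch for safety, again against a hypothetical endpoint: issue the same GET repeatedly and confirm the resource looks untouched:

```python
import requests

BASE = "https://api.example.com"  # hypothetical API

# Read the resource, read it again, then check nothing changed:
# a safe call should never modify state on the server.
before = requests.get(f"{BASE}/books/42").json()
requests.get(f"{BASE}/books/42")  # 'reading the spine' one more time
after = requests.get(f"{BASE}/books/42").json()

assert before == after, "GET should be safe: no state change"
```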

Verbs and Nouns

No, this isn’t English class, but we do sometimes talk about nouns and verbs in APIs. Verbs are the actions that an API can do (like GET, POST, PUT, DELETE), and the nouns are what the API acts on (resources, usually represented by URL endpoints).

Services (micro-services)

Buzzword time. So what is a micro-service? Well, it is a service that is very small, or, uh, micro sized (see what I did there?). And a service is just something that does stuff and lets you tell it how to do stuff (usually through APIs). See, it doesn’t have to be hard! Let’s use an example to get a better handle on it. You want to create a meme (because you are that kind of cool) but photo editing is just too passé for you. If only you had something you could send a command to that said ‘generate a cat meme for me with these words.’ Well, if you did, that would be an example of a service – memes as a service, in fact. You give it some commands, it does something for you, and it produces an output based on the commands you gave.
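If that hypothetical memes-as-a-service existed, calling it might look something like this (the URL and parameters are entirely made up):

```python
import requests

# Hypothetical memes-as-a-service: send a command, get an output back.
resp = requests.post(
    "https://memes.example.com/api/memes",
    json={"template": "cat", "caption": "I can haz API testing?"},
)
meme_url = resp.json()["url"]  # link to the generated meme image
```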

Micro-services is just an architectural pattern that tries to have a number of services that can each do one specific thing. These various services can then talk to each other through the defined APIs. The micro part just means that each service has a limited number of tasks that it can do. So you might break down a customer facing service into a number of micro-services that each do one particular part of the overall task at hand.

Schema

Adding this one in based on comments. A schema is used to define the structure of the data in an API. It defines things like what the API can do with various endpoints and what data it is allowed to use. If you have a well-defined schema for your API, it can be used to automate things like the creation of documentation and sometimes even parts of how the API is used. If you have defined your schema in the OpenAPI format, you can use tools like Swagger to help with this.
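For a sense of what a schema looks like, here is a minimal, hypothetical OpenAPI fragment expressed as a Python dict (real specs are usually written in YAML or JSON, and this one is just for illustration):

```python
# A minimal, hypothetical OpenAPI 3.0 definition as a Python dict.
# Tools like Swagger UI consume this same structure (usually as YAML or
# JSON) to generate documentation and other artifacts.
openapi_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Books API", "version": "1.0.0"},
    "paths": {
        "/books/{id}": {
            "get": {
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "integer"}}
                ],
                "responses": {"200": {"description": "A single book"}},
            }
        }
    },
}
```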

All good things must end – including this list

I’m sure there are other terms to define in API testing. If there are any you find confusing, let me know and perhaps I’ll add to this list over time. Maybe I’ll even get ambitious and make this into its own page on the site. We will see.

Let me know what you think of these definitions!

How to do Demos?

I’m not sure how I feel about doing demos for the ‘higher ups’ in the company. It seems that when we do a demo, we end up with a carefully crafted environment that perhaps has a few hard-coded values in place. It runs on a local developer’s machine with code that has not yet been put up for review. We can run through aspects of the feature and show what it can do, which is good in that it allows for feedback on things.

But there is a downside too.

Have we got accessibility in place? Is the code fully localized? Do we have a bunch of edge cases that we didn’t demo to clean up? What about those hard-coded values we put in place to make the demo work? What happens when that code gets off the developer’s machine and out into the wild? How will it perform under real-life loads and stresses? There can be a LOT of work left to do on something that was shown in a demo, but does that really come through?

It seems like what can happen sometimes is we put our best foot forward in a demo, but we don’t show the things that are broken. We might mention some of the additional work that we need to do, but at the end of the day a picture is worth a thousand words.  When someone sees a ‘working’ demo, and then hears about a few bullet points of additional work, what will they walk away from the meeting feeling?

‘Oh this is almost done’

Is it though?

I think a lot of demos leave people with a false impression of the state of the feature, and this can lead to problems. It can lead to schedule pressures – after all it looked almost ready last week so why isn’t it released yet? This can really get bad if people start talking to customers or even just internally ‘bragging’ about the feature.

Solution

I wish I had a magic solution, but I don’t. Well, I guess I do have one – don’t do them – but perhaps that is a bit extreme. I was just struck by the thought that this can be a problem, and I still need to think more about what can be done about it.  Any thoughts?  Have you ever seen this problem on teams you’ve worked with?

What Makes good Quality Code?

There are a number of interesting comments in this thread about how to define quality code. I was reading through them and I think every single one of them could be summarized like this:

good quality code is code that makes it easy for developers to add value

Every comment in there was some variation of this. There were many different approaches to how you can make code that does this. Some say you need unit tests. Some say it needs to be readable. Some say it needs good structure. Others say it needs to be designed to guide developers to do the right thing, and still others that it needs to be extensible. All of these seem to me like good approaches to achieving the goal, but at the end of the day they all boil down to different theories on how to make code that enables developers to easily add value.

I was thinking about this a bit and I think framing it in this way can actually help us come up with strategies for making good quality code. When we frame it in terms of making it easy to add value, it seems to me the next question is how does this particular code base add value? That of course pulls us into the business context the code operates in and supports. Thinking about that has a huge effect on what we would consider to be good quality code.

For example, I’ve written code that was not very well structured and modular, but I would say it was still good quality code. It saved me some time in writing reports, but it was code that was only for my use, and it was small enough that I could remember what it did and easily tweak it when I needed to. It was limited enough in its scope (and only had one user: me) that I could very easily test that it worked after changes. It was easy for me to add value with that code, and since I was the only one who needed to change it, it was good quality code.

Now in many large scale systems with multiple developers working on them, the approaches I took to that script would have produced very low quality code (i.e. code that made it hard for developers to add value). Understanding how your code is going to add value helps along the path toward figuring out how to make your code good quality. I think when we start to divorce quality from value we end up in trouble.

Good quality code makes it easy to add value. How does the code you work on add value?  What things do you need to do to make sure it is easy for developers to continue to add (more) value?

Shipping Doesn’t Mean Done

Shipping doesn’t mean done. I read that phrase recently and it has been bouncing around in my head for a while.

This is the mind shift that needs to take place for devops to be successful. There are so many definitions of done in software development, but we are used to the final arbiter of ‘doneness’ being shipping. We’ve shipped the feature, so it’s done now whether we like it or not, right?

Devops says no. Shipping does not mean done. It just means the feature is done enough to show to customers and get feedback on it. It means it is ready for the tweaking it is going to need in order to really add value.

If we don’t get this – and I mean gut-level get it, not just mental-assent get it – devops is going to be hard. For example: if your delivery schedule looks like this, you are probably still thinking of shipping as meaning done.

[Figure: a delivery schedule where the planned work ends at the ship date]

The problem is devops doesn’t just mean the developers are on call when something goes wrong. It means the schedule should look more like this:

[Figure: a delivery schedule with explicit time after shipping for tweaking and improving the feature]

See? Shipping doesn’t mean done. There is so much value that we can add (as testers and developers) after shipping. If we don’t explicitly put that in the schedule, we are missing out on a lot of the benefits of devops.

And not to put too fine a point on it: the key thing I’m talking about here is the schedule. If we don’t build that ‘after release’ time into the schedule, we are not going to be successful at devops.

When our team schedules look as if shipping means we are done, someone somewhere isn’t understanding or internalizing what we are trying to do in devops. We need to build time into the schedule for experimenting with and improving ‘in production’ features.  If we don’t do that we might as well just go back to waterfall, because then at least you are theoretically reducing the amount of buggy code you foist on your customers.

Don’t forget. Shipping doesn’t mean done!

API Testing is Exploratory Testing

Sometimes there are debates about what testing is and what words we should use when talking about aspects of it. I don’t worry myself too much about those kinds of debates, but there is something I have found to be true in my experience: at its core, all testing is exploratory. You can’t really find out interesting stuff about a piece of software without doing exploration.

This holds true, no matter what kind of testing you are doing. Sometimes when we hear about aspects of testing that require more technical skill we think they require less exploration, but I really don’t think so. For example, I have been doing a lot of API testing and am working on courses that involve teaching API testing. This testing has involved a lot of exploration!

There are a lot of tools that can help you with API testing, and I have been using many of them. Let me be clear though: using tools does not preclude exploration. I found numerous bugs in the APIs, but I didn’t do it by having a tool read in some Swagger specification and hitting run. I did it by exploring. It seems like most API testing tools are focused on helping you set up automated regression tests. There is a place for regression testing, but don’t forget that these tools can also work quite well in helping you explore the API.
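As a minimal sketch of what I mean (the endpoint and resource names are made up for illustration), even a few lines of Python with the requests library can be used to explore rather than just to check:

```python
import requests

BASE = "https://api.example.com"  # hypothetical API under test

# Exploratory probing: vary the inputs and look at what actually comes
# back, rather than only asserting against a fixed expected value.
for book_id in [1, 0, -1, 99999, "abc"]:
    resp = requests.get(f"{BASE}/books/{book_id}")
    print(book_id, resp.status_code, resp.text[:80])
```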

I was reflecting on the kinds of bugs I found during some recent API testing sessions, and generally speaking they fell into a few categories. I think these categories show how much API testing involves exploration.

Design choices

Many of the issues I found had to do with design choices. In many cases, the information we needed was in the API, but it was given to us in a way that was hard to use. This could be because it could only be accessed through API endpoints that were not relevant to the current context, or it could be because similar information was presented in inconsistent ways in different parts of the API. When it comes to creating an API (as with any software) there are many different ways to design it. Evaluating how effective the design of an API is at solving the problem you are working on is a thoroughly exploratory process.

Missing functionality

I also found issues in the API related to it not providing or respecting information the business domain required. This could show up in various ways. Sometimes certain object states were not being represented in the API. Other times it was not respecting domain permissions correctly. There were also times when the API interacted with other aspects of the product in an incorrect way. Each of these kinds of issues required knowledge of the business needs along with current and desired functional requirements. It would be hard (or even impossible) to find issues like this without exploration.

Algorithmic problems

Some of the problems found were more algorithmic in nature. Things like scores not summing up correctly in some instances, or issues like rounding errors. Issues like this could probably be found in more scripted (i.e. less exploratory) approaches, but even here we require a lot of exploration to build up an understanding of the properties of the system. For example, a property of the system might be that a set of scores should be summed to produce the total score, except if the user overrides the total score. You might know about this property through a business specification, but how do you know how this property is represented in the API? You have to investigate. You have to use the API to see how the various aspects of this are represented, before you are able to make any scripted check for the correct usage of this property. You also have to explore to figure out contexts that are relevant for this property. What kinds of things might cause this to be wrong? What kinds of variations are there in other properties that might influence this?  What kind of inputs could mess with this?  These are all exploratory questions.
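As a sketch of what a check for that example property might look like once you have explored enough to know how it is represented (the endpoint and field names here are assumptions):

```python
import requests

BASE = "https://api.example.com"  # hypothetical API

# Property under test (from the example above): the total should equal
# the sum of the individual scores, unless the user overrode the total.
# The endpoint and field names are assumptions for illustration.
report = requests.get(f"{BASE}/reports/42").json()

if not report.get("total_overridden", False):
    assert report["total"] == sum(report["scores"]), \
        "total should be the sum of the scores when not overridden"
```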

Conclusion

So in conclusion I want to make this point: if you want to learn about API testing, don’t focus first on the detailed automation workflows that the various API testing tools provide. Focus on figuring out how to find crucial information about the API. Where are the problems? What does this API do? How will clients use it? What impact does the design have on helping solve the problem it was written for? There are many things to consider when testing an API and I hope to get into more details in future posts, but for now I’ll leave you with this thought:

Don’t think of API testing as primarily a technical challenge.  Think of it as an exploration challenge.

Watchdog or Service Dog?

Are you a watchdog?  I’m speaking to testers here. Are you a watchdog? Is it your job to keep a close eye on the code and product and make sure no bugs come through? What do you do when you see a bug? Do you start barking up a storm and waking everyone up? BUG, BUG, BUG. We need to fix it!  No bugs shall pass!

Or, are you a service dog? You watch out for pitfalls, and you help others navigate them.  You don’t just alert others to the presence of a bug, you help them figure out how to fix it and how to avoid it.  Do you do something about the problems you find that goes beyond just telling people about it?

I’ve called you a dog for long enough, so let’s step out of that analogy for a minute. What I’m getting at here is to have us step back and think for a minute about what a tester does. I’m asking a lot of questions in this article, and not really answering them because I want you to think about it.  Do we just provide information and raise the alarm when things go wrong? Or, can we do more? Are we willing to fix mistakes or is it only our job to report them?

Are you a watchdog, or do you provide more services than just a loud bark and the ability to spot problems? If you only think of yourself as providing information about when things have gone wrong, it will affect the way you work. How important is it to file bugs that you find, for example? Are there other ways to deal with them? What do you spend your time on as a tester? These and many other questions have different answers depending on how you think about who you are and what your role involves. So how do you define yourself?

Are you a watchdog, or do you provide other services as well?