Top 10 API testing mistakes

Written by Matt
Published On Wed Aug 04 2021
Last Updated Wed Aug 04 2021

Eleanor Roosevelt, the former First Lady of the United States, once said: "Learn from the mistakes of others. You can't live long enough to make them all yourself." She was right; learning from other people's mistakes is much cheaper than learning from your own! This article is an attempt to bring your attention to some of the most common API testing mistakes we have seen, so you won't make them yourself while testing your APIs. Without further ado, let's get started!

Table of Contents

  1. Not capturing HTTP request & response
  2. History of past test runs
  3. Slow tests
  4. Not running tests as part of CI/CD pipelines
  5. Dependent tests
  6. Focusing on quantity over quality
  7. Lack of reusability
  8. Confusing error messages
  9. Redundant coverage
  10. No clean-ups

Not capturing HTTP request & response

Request headers, body, and query strings are the input to your API: the data your endpoint uses to do its job. Response headers and body, on the other hand, are the output of that work. Capturing request and response details helps you replicate a problem, and the same information gives you a starting point for debugging.

When a test runs, it should generate a log. Ideally, the log should contain:

  - The request headers, body, and query strings
  - The response headers and body

Next time a test fails, you can quickly look at the captured request and response information to commence the investigation process.
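As an illustration, here is a minimal sketch in TypeScript, assuming Node 18+ (built-in fetch) and a hypothetical https://api.example.com endpoint: a small helper captures the full request and response for every call, so the whole exchange can be logged the moment an assertion fails.

```typescript
// A minimal sketch, assuming Node 18+ (built-in fetch) and a hypothetical
// https://api.example.com endpoint: every call returns the captured request
// and response so they can be logged when an assertion fails.
import assert from "node:assert/strict";

interface CapturedExchange {
  request: { method: string; url: string; headers: Record<string, string>; body?: string };
  response: { status: number; headers: Record<string, string>; body: string };
}

async function callAndCapture(
  method: string,
  url: string,
  headers: Record<string, string> = {},
  body?: string,
): Promise<CapturedExchange> {
  const response = await fetch(url, { method, headers, body });
  const responseBody = await response.text();
  return {
    request: { method, url, headers, body },
    response: {
      status: response.status,
      headers: Object.fromEntries(response.headers.entries()),
      body: responseBody,
    },
  };
}

async function testGetUser(): Promise<void> {
  const exchange = await callAndCapture("GET", "https://api.example.com/users/42");
  try {
    assert.equal(exchange.response.status, 200);
  } catch (err) {
    // The captured exchange is the starting point for debugging the failure.
    console.error(JSON.stringify(exchange, null, 2));
    throw err;
  }
}

testGetUser();
```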

History of past test runs

By nature, tests fail. When a test fails, we often ask two questions:

  1. Why did it fail?
  2. Has this test failed in the past?

We can answer the first question quickly by looking at the test result, which we discussed in the previous section. But what about the second question? To answer it, we need to keep past test results. Storing test results permanently helps you go back in time and inspect the behavior of any test at any given time. When storing test results, please keep the following in mind:
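As one way to keep that history, here is a minimal sketch that appends one JSON record per test run to a history file; the file name ("test-history.jsonl") and record shape are assumptions, not a prescribed format.

```typescript
// A minimal sketch: append one JSON record per test run to a history file.
// The file name ("test-history.jsonl") and record shape are assumptions.
import { appendFileSync } from "node:fs";

interface TestRunRecord {
  testName: string;
  status: "passed" | "failed";
  durationMs: number;
  ranAt: string; // ISO timestamp, so a run can be located at any given time
}

function recordTestRun(record: TestRunRecord): void {
  // One JSON object per line keeps the history append-only and easy to query.
  appendFileSync("test-history.jsonl", JSON.stringify(record) + "\n");
}

// Usage:
recordTestRun({
  testName: "GET /users/42 returns 200",
  status: "failed",
  durationMs: 118,
  ranAt: new Date().toISOString(),
});
```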

Slow tests

Nothing is worse than waiting hours for test results! You and your team want feedback from the tests, and you want it fast. Your API is fast and responds in milliseconds; why does it take hours to get the test results? Here are some of the main reasons why your tests are slow:
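One common culprit is running independent API checks one after another, so the total time becomes the sum of every response time. The sketch below (with hypothetical endpoints) runs independent checks concurrently, so the wall-clock time is bounded by the slowest call instead.

```typescript
// A minimal sketch with hypothetical endpoints: independent checks run
// concurrently, so total time is bounded by the slowest call rather than
// the sum of all calls.
async function checkStatus(url: string): Promise<number> {
  const response = await fetch(url);
  return response.status;
}

async function runIndependentChecks(): Promise<void> {
  const endpoints = [
    "https://api.example.com/health",
    "https://api.example.com/users/42",
    "https://api.example.com/orders?limit=1",
  ];

  // Fire all requests at once and wait for every result.
  const statuses = await Promise.all(endpoints.map(checkStatus));
  console.log(statuses);
}

runIndependentChecks();
```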

Not running tests as part of CI/CD pipelines

Catching bugs and verifying the correctness of an API are the two main reasons we practice automated API testing. That said, we often see teams that do not include API integration tests in their Continuous Integration (CI) or Continuous Deployment (CD) pipelines, or that include them only as an optional step. Including your tests in the CI/CD pipeline helps you catch bugs early, before they reach production. While building your tests, consider the following items:
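As an example of making the tests a required pipeline step, here is a minimal sketch of a test entry point that exits with a non-zero code on failure, which any CI/CD provider can treat as a failed build. The endpoint is hypothetical, and the pipeline wiring (for example, a step that runs this script) depends on your provider.

```typescript
// A minimal sketch of a test entry point a pipeline can run as a required step:
// a non-zero exit code fails the build. The endpoint is hypothetical, and the
// pipeline wiring (e.g. a step that runs this script) depends on your CI provider.
async function smokeTest(): Promise<void> {
  const response = await fetch("https://api.example.com/health");
  if (response.status !== 200) {
    throw new Error(`Health check failed with status ${response.status}`);
  }
}

smokeTest()
  .then(() => console.log("API smoke test passed"))
  .catch((err) => {
    console.error("API smoke test failed:", err);
    process.exit(1); // signal failure to the pipeline
  });
```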

Dependent tests

A test that depends on something else in order to pass is not a good test. Experience has shown us that these tests tend to fail frequently and introduce maintenance overhead. A test can depend on many things; let's go through some of the most common ones:
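For instance, a test that relies on data created by another test breaks as soon as that other test changes or is skipped. The sketch below (hypothetical endpoints and payload) shows a self-contained alternative: the test creates the record it needs and asserts against it.

```typescript
// A minimal sketch with hypothetical endpoints: the test creates the record it
// needs instead of relying on data left behind by another test.
import assert from "node:assert/strict";

async function testGetCreatedUser(): Promise<void> {
  // Arrange: create the user this test depends on.
  const createResponse = await fetch("https://api.example.com/users", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Test User" }),
  });
  assert.equal(createResponse.status, 201);
  const created = (await createResponse.json()) as { id: string };

  // Act & assert: read the record back using the id created above.
  const getResponse = await fetch(`https://api.example.com/users/${created.id}`);
  assert.equal(getResponse.status, 200);
}

testGetCreatedUser();
```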

Focusing on quantity over quality

The total number of your tests does not mean anything; the quality of your tests does. Rather than concerning yourself with the total number of tests, focus on what you have covered and what you need to cover next.

If you want to measure something, focus on your team’s confidence in new releases. How much confidence does your current test strategy give to your team for the next big release?

Lack of reusability

Reusable tests help you test more without spending more time on your tests. They allow you to test APIs in different environments and with different inputs. Moreover, they prevent test duplication and reduce maintenance overhead. To help you define reusable test cases, we have put together a set of best practices for you to consider:
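As a minimal sketch of this idea, the function below takes the base URL and the input as parameters, so the same check can run against different environments and with different inputs without duplicating the test; the environment URLs and endpoint are hypothetical.

```typescript
// A minimal sketch: the base URL and input are parameters, so the same check can
// run against staging and production, or with different inputs, without being
// duplicated. The environment URLs and endpoint are hypothetical.
import assert from "node:assert/strict";

async function verifyUserLookup(baseUrl: string, userId: string): Promise<void> {
  const response = await fetch(`${baseUrl}/users/${userId}`);
  assert.equal(response.status, 200, `Lookup of user ${userId} failed on ${baseUrl}`);
}

async function main(): Promise<void> {
  // The same reusable test, run against two environments and two inputs.
  await verifyUserLookup("https://staging.api.example.com", "42");
  await verifyUserLookup("https://api.example.com", "1001");
}

main();
```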

Confusing error messages

A good test is a test that fails with a decent error message: one that helps you understand what went wrong and why.

The best way to assess the quality of your error messages is to let your test fail. Tweak your test so that it fails, run it, and inspect the error message. Is it clear? If not, change your test to produce a better error message. Moreover, ask your colleagues for feedback. As the author of a test, it's easy for you to understand the reason behind a failure. Would it be just as easy for a colleague to understand the root cause by looking at the error message alone?
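For example, here is a minimal sketch (hypothetical endpoint and field) of an assertion whose message states what was expected, what was received, and where, so a failure can be understood without re-running the test.

```typescript
// A minimal sketch with a hypothetical endpoint and field: the assertion message
// states what was expected, what was received, and where, so a failure can be
// understood without re-running the test.
import assert from "node:assert/strict";

async function testOrderStatus(): Promise<void> {
  const url = "https://api.example.com/orders/1001";
  const response = await fetch(url);
  const body = (await response.json()) as { status?: string };

  // The third argument becomes the error message when the assertion fails.
  assert.equal(
    body.status,
    "shipped",
    `Expected order at ${url} to have status "shipped" but got "${body.status}"`,
  );
}

testOrderStatus();
```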

Redundant coverage

A good test fails when it catches a bug. A not-so-good test, on the other hand, keeps failing for unrelated reasons; don't write tests like that. Redundant coverage is when your test validates something that is not the concern of that test. Redundant coverage includes:
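As a small illustration (hypothetical endpoint and fields), the test below is about the order total, so it asserts only on that field instead of comparing the entire response body, which would make it fail whenever an unrelated field changes.

```typescript
// A minimal sketch with a hypothetical endpoint: this test is about the order
// total, so it asserts on that field only instead of comparing the whole
// response body, which would fail whenever an unrelated field changes.
import assert from "node:assert/strict";

async function testOrderTotal(): Promise<void> {
  const response = await fetch("https://api.example.com/orders/1001");
  const order = (await response.json()) as { total?: number };

  // Assert only what this test is responsible for.
  assert.equal(order.total, 49.99, "Order total did not match the expected amount");
}

testOrderTotal();
```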

No clean-ups

A good test cleans up after itself, meaning that it deletes the records it adds to the system during execution. As we mentioned earlier, your test may need some records in the database, which the test itself should prepare. Moreover, when the test is done, it should also delete those records to prevent the new data from causing issues for other test cases.
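Here is a minimal sketch (hypothetical endpoints and payload) of a test that deletes the record it created in a finally block, so the clean-up runs even when the assertion fails.

```typescript
// A minimal sketch with hypothetical endpoints: the record created by the test is
// deleted in a `finally` block, so the clean-up runs even if the assertion fails.
import assert from "node:assert/strict";

async function testCreateUser(): Promise<void> {
  const createResponse = await fetch("https://api.example.com/users", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Cleanup Example" }),
  });
  const created = (await createResponse.json()) as { id: string };

  try {
    assert.equal(createResponse.status, 201);
  } finally {
    // Clean up: delete the record so it cannot interfere with other tests.
    await fetch(`https://api.example.com/users/${created.id}`, { method: "DELETE" });
  }
}

testCreateUser();
```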
