API Testing

Top 10 API testing mistakes

Explore essential tactics for avoiding common API testing problems, including capturing detailed test data, integrating tests into CI/CD pipelines, and efficiently using testing tools to ensure strong, secure APIs.

Written by Matt Valley
Published On Wed Aug 04 2021
Last Updated Fri Feb 16 2024

Eleanor Roosevelt, the former First Lady of the United States, once said: “Learn from the mistakes of others. You can’t live long enough to make them all yourself.” She was right; learning from other people’s mistakes is much cheaper than learning from your own! This article brings your attention to some of the most common API testing mistakes we have seen, so you won’t repeat them in your own API testing. Without further ado, let’s get started!

Not capturing HTTP request & response

Request headers, body, and query strings are the input for your API: the data your API endpoint uses to do its job. Response headers and body, on the other hand, are the output of that work. Capturing request and response details helps you replicate a problem, and the same information gives you a starting point for debugging.

When a test runs, it should generate a log (a sketch follows the list below). Ideally, the log should contain:

  • The endpoint URL & attached query string values
  • HTTP headers in the request, including the implicitly set HTTP headers
  • When available, the request body
  • HTTP headers in the response, all of the provided ones by endpoint
  • Response body
  • Response Status Code
  • Response time
  • Execution time
  • Validations and their result
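
As an illustration, here is a minimal sketch of such a logging wrapper in Python, assuming the popular requests library; the run_request helper and the logger name are hypothetical, not part of any particular tool:

```python
import logging
import time
import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api-tests")

def run_request(method, url, **kwargs):
    # A thin wrapper around requests that captures everything a failed
    # test needs for debugging: request, response, status, and timings.
    start = time.monotonic()
    response = requests.request(method, url, **kwargs)
    execution_time = time.monotonic() - start

    log.info("URL: %s %s", method, response.url)  # includes the query string
    log.info("Request headers: %s", response.request.headers)  # incl. implicit ones
    log.info("Request body: %s", response.request.body)
    log.info("Response headers: %s", dict(response.headers))
    log.info("Response body: %s", response.text)
    log.info("Status code: %s", response.status_code)
    log.info("Response time: %.0f ms", response.elapsed.total_seconds() * 1000)
    log.info("Execution time: %.0f ms", execution_time * 1000)
    return response
```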

The next time a test fails, you can quickly look at the captured request and response information to start your investigation.

History of past test runs

By nature, tests fail. When a test fails, we often ask two questions:

  1. Why did it fail?
  2. Has this test failed in the past?

We can answer the first question quickly by looking at the test result, which we discussed in the previous section. But what about the second question? To answer it, we need to keep past test results. Storing test results permanently helps you go back in time and inspect the behavior of any test at any given time. When storing test results, please keep the following in mind:

  • The history of test results should be easily accessible. Storing raw log files and searching them with command-line tools is time-consuming and requires knowledge of those tools; a structured, queryable store works better (see the sketch after this list).
  • Historical test results should contain the URL, query strings, request headers and body, response headers, body, status code, response time, execution time, and the included validations.
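
As a minimal sketch of such a store, the snippet below appends each run to a JSON-lines file; the file name, field names, and endpoint are assumptions for illustration:

```python
import json
import datetime

def record_result(result: dict, path: str = "test-history.jsonl") -> None:
    # Append each run as one JSON line; the file becomes a searchable,
    # permanent history of past test results.
    result["recorded_at"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a") as fh:
        fh.write(json.dumps(result) + "\n")

record_result({
    "test": "get_user",
    "url": "https://api.example.com/users/42",  # hypothetical endpoint
    "status_code": 200,
    "response_time_ms": 87,
    "passed": True,
})
```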

Slow tests

Nothing is worse than waiting hours for test results! You and your team want feedback from your tests, and you want it fast. Your API is fast and responds in milliseconds, so why does it take hours to get the test results? Here are some of the main reasons why your tests are slow:

  • Your framework or tool of choice takes its time! When evaluating tools for API testing, make sure to measure their execution time.

  • Tests are executed one after the other, not in parallel! For example, if running one test takes 15 seconds, how long does it take to run 50 tests serially? More than 12 minutes! What if we run them in parallel? Probably about 15 seconds (see the sketch after this list). When picking an API testing framework or tool, consider the ones that run tests in parallel. Speaking of API testing tools, we also have an in-depth review of the top 10 API testing tools.

  • Running tests on slow infrastructure results in slow test execution, which is one of the main reasons to consider using tools like Testfully for API testing rather than provisioning your own infrastructure for API test automation.
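
To illustrate the parallelism point, here is a minimal sketch using Python’s standard thread pool; the endpoints stand in for independent test cases and are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor
import requests

# Hypothetical endpoints standing in for 50 independent test cases.
urls = [f"https://api.example.com/items/{i}" for i in range(50)]

def run_test(url):
    response = requests.get(url)
    return url, response.status_code == 200

# Run the calls concurrently: total wall time is now roughly the time of
# the slowest single request rather than the sum of all of them.
with ThreadPoolExecutor(max_workers=10) as pool:
    for url, passed in pool.map(run_test, urls):
        print(url, "PASS" if passed else "FAIL")
```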

Not running tests as part of CI/CD pipelines

Catching bugs and verifying the correctness of your API are the two main reasons we automate API testing. That said, we often see teams that do not include API integration tests in their Continuous Integration (CI) or Continuous Deployment (CD) pipelines, or that include them only as an optional step. Including your tests in your CI/CD pipeline helps you catch bugs early, before they reach production. While building your tests, consider the following:

  • Make sure it’s possible to run your tests as part of your CI/CD pipeline. For example, Testfully CLI allows you to run your tests as part of a CI/CD pipeline.

  • When tests are included in CI/CD pipelines, you should trigger deployments only when all of them pass (see the sketch after this list).
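
As a minimal sketch, a CI step could call a gate script like the one below, assuming a pytest-based suite under a hypothetical tests/api directory; the deploy step runs only if this script exits with zero:

```python
import sys
import pytest

# Run the API test suite; pytest.main returns a non-zero exit code
# when any test fails.
exit_code = pytest.main(["tests/api", "-q"])

# Propagate the result so the pipeline can gate deployment on it:
# a non-zero exit should stop the deploy step from running.
sys.exit(exit_code)
```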

Dependent tests

A test that depends on something else in order to pass is not a good test. Experience has shown us that these tests tend to fail frequently and introduce maintenance overhead. A test can depend on many things; let’s go through some of the most common ones:

  • Some tests depend on other tests to pass, often on the data the other test generates. If your test requires data, the test itself should be in charge of preparing it (see the sketch after this list). Testfully’s multi-step test scenarios allow you to prepare data for your tests.

  • Sometimes, the order of test execution becomes important. This can happen for many reasons, but missing clean-up steps in some tests is one of the most common. Ideally, you should be able to run your tests in any order, and they should still pass.

  • The time of day, month, and year sometimes matters. For example, I remember a test that started failing because of daylight saving time in Australia!
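
Here is a minimal sketch of a self-contained test that prepares its own data instead of relying on another test; the base URL and endpoints are hypothetical:

```python
import uuid
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_get_user_returns_created_user():
    # Prepare the data this test needs instead of relying on another
    # test (or a fixed execution order) to have created it.
    email = f"test-{uuid.uuid4()}@example.com"
    created = requests.post(f"{BASE_URL}/users", json={"email": email})
    created.raise_for_status()
    user_id = created.json()["id"]

    # Exercise the endpoint under test.
    response = requests.get(f"{BASE_URL}/users/{user_id}")
    assert response.status_code == 200
    assert response.json()["email"] == email
```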

Focusing on quantity over quality

The total number of your tests does not mean anything; the quality of your tests does. Rather than concerning yourself with the total number of tests, focus on what you have covered and what you need to cover next.

If you want to measure something, focus on your team’s confidence in new releases. How much confidence does your current test strategy give to your team for the next big release?

Lack of reusability

Reusable tests help you test more without spending more time on your tests. They allow you to test APIs in different environments and with different inputs. Moreover, they prevent test duplication and reduce maintenance overhead. To help you define reusable test cases, we have put together a set of best practices to consider:

  • Load endpoint URLs from config. This technique enables running tests against multiple environments (see the sketch after this list).
  • If your test requires input, try to load it via config values or variables in your test.
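
As a minimal sketch of config-driven URLs: the test below reads its base URL from an environment variable, so the same test runs unchanged against local, staging, or production. API_BASE_URL and the fallback URL are hypothetical names:

```python
import os
import requests

# The base URL comes from config rather than being hard-coded, so one
# test definition can target any environment.
BASE_URL = os.environ.get("API_BASE_URL", "https://staging.example.com")

def test_health_endpoint():
    response = requests.get(f"{BASE_URL}/health")
    assert response.status_code == 200
```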

Confusing error messages

A good test fails with a decent error message: one that helps you understand what went wrong and why it went wrong.

The best way to assess the quality of your error messages is to let your test fail. Tweak your test, run it, and inspect the error messages. Are they clear? If not, change your tests to include better error messages. Moreover, ask your colleagues for feedback. As the author of a test, it’s easy for you to understand the reason behind a failure; would it be just as easy for a colleague to find the root cause by looking at the error message alone?
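
A minimal sketch of the difference, assuming pytest-style assertions and a hypothetical endpoint:

```python
import requests

def test_user_lookup():
    url = "https://api.example.com/users/42"  # hypothetical endpoint
    response = requests.get(url)

    # A bare `assert response.status_code == 200` fails with just
    # "assert 404 == 200"; including the URL and part of the body makes
    # the failure self-explanatory.
    assert response.status_code == 200, (
        f"GET {url} returned {response.status_code}, expected 200. "
        f"Body: {response.text[:200]}"
    )
```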

Redundant coverage

A good test fails when it catches a bug. A not-so-good test, on the other hand, keeps failing for unrelated reasons; don’t write those tests. Redundant coverage is when your test validates something that is not a concern of the test. Redundant coverage includes:

  • Checking response time when it does not matter to your test how fast the API responds to your requests
  • Validating the entire response body when you only need to validate a portion of it (see the sketch after this list)
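
A minimal sketch of validating only the relevant portion of a response, with a hypothetical endpoint and field:

```python
import requests

def test_user_display_name():
    response = requests.get("https://api.example.com/users/42")  # hypothetical
    body = response.json()

    # Validate only the field this test is about. Asserting on the entire
    # body (or on response time) makes the test fail for unrelated reasons.
    assert body["name"] == "Ada Lovelace"
```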

No clean-ups

A good test cleans up after itself, meaning it deletes the records it adds to the system after execution. As we mentioned earlier, your test may need some records in the database, which the test itself should prepare. When the test is done, it should also delete those records so leftover data cannot cause issues for other test cases.
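
One common way to express this is a fixture with set-up and tear-down steps; here is a minimal sketch assuming pytest and a hypothetical API:

```python
import uuid
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

@pytest.fixture
def temp_user():
    # Set-up: create the record the test needs.
    resp = requests.post(f"{BASE_URL}/users",
                         json={"email": f"test-{uuid.uuid4()}@example.com"})
    resp.raise_for_status()
    user = resp.json()
    yield user
    # Tear-down: delete the record so it cannot affect other tests.
    requests.delete(f"{BASE_URL}/users/{user['id']}")

def test_user_can_be_fetched(temp_user):
    response = requests.get(f"{BASE_URL}/users/{temp_user['id']}")
    assert response.status_code == 200
```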

Ignoring Test Environment Differences

The difference between testing and production environments can result in issues that go undetected until after deployment. Ensuring that the testing environment closely resembles the production environment is an important part of API testing. This alignment helps identify issues that would impact the API’s performance and functionality in a real-world environment. Simulating production conditions enables developers to examine API behavior and performance accurately, allowing for modifications before public release.

Ignoring API Documentation Testing

API documentation is essential for internal developers and external consumers, as it explains how to use the API effectively. Testing the quality and completeness of this documentation is critical: it ensures that users fully understand and correctly use the API’s features. Documentation should be clear, up to date, and comprehensive, covering all aspects of the API, such as endpoints, parameters, and sample requests and responses. This improves the user experience, shortens the learning curve, and prevents integration issues.

Underestimating the Importance of Manual Testing

While automated testing is essential in API testing, it can only partially replace the insights and nuances gained from manual testing. Manual testing, particularly exploratory testing, can identify issues that automated tests might miss, such as usability problems or unexpected behavior in edge cases. This approach lets testers explore the API’s capabilities beyond preset test cases, allowing for greater flexibility and creativity.

Not Utilizing API Testing Tools to Their Full Potential

Many API testing tools are available, each with its own features and capabilities. Tools like Testfully, Postman, and SoapUI allow you to create, execute, and manage API tests. These tools cover a variety of testing types, including functional, performance, and security testing, and they provide functionality for automating test execution and connecting with CI/CD pipelines. To get the most out of these tools, leverage advanced features like mock services, automated test generation, and detailed reporting to improve testing speed and coverage.

Conclusion

Learning from common API testing mistakes improves both efficiency and effectiveness. The lessons covered here include capturing detailed test data, integrating tests into CI/CD pipelines, prioritizing quality over quantity, and making the most of your testing tools. These strategies help you deliver dependable, secure APIs, and they underline the value of learning from others to improve your API testing practice, and with it overall quality and customer satisfaction.

