Eleanor Roosevelt, the former First Lady of the United States, once said: “Learn from the mistakes of others. You can’t live long enough to make them all yourself.” She was right; learning from other people’s mistakes is much cheaper than learning from your own! This article brings your attention to some of the most common API testing mistakes we have seen, so you can avoid them in your own API testing. Without further ado, let’s get started!
Table of Contents
- Not capturing HTTP request & response
- History of past test runs
- Slow tests
- Not running tests as part of CI/CD pipelines
- Dependent tests
- Focusing on quantity over quality
- Lack of reusability
- Confusing error messages
- Redundant coverage
- No clean-ups
Not capturing HTTP request & response
Request headers, body, and query strings are the input for your API: the data your API endpoint uses to do its job. Response headers and body, on the other hand, are the output of that work. Capturing request & response details helps you replicate a problem and gives you a starting point for debugging.
When a test runs, it should generate a log. Ideally, the log should contain:
- The endpoint URL & attached query string values
- HTTP headers in the request, including the implicitly set HTTP headers
- When available, the request body
- HTTP headers in the response, including all headers provided by the endpoint
- Response body
- Response status code
- Response time
- Execution time
- Validations and their result
Next time a test fails, you can quickly look at the captured request and response information to start your investigation.
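As a sketch, here’s what capturing these details might look like in Python. The `send` callable and the field names are illustrative, not tied to any particular tool; in real use, `send` would wrap your HTTP client of choice:

```python
import time

def run_with_capture(send, url, method="GET", headers=None, body=None):
    """Run an HTTP call through `send` and capture request & response details.

    `send` is any callable that performs the actual request and returns
    (status_code, response_headers, response_body). The returned dict is
    the structured log entry to attach to the test report.
    """
    headers = headers or {}
    start = time.monotonic()
    status, resp_headers, resp_body = send(method, url, headers, body)
    elapsed_ms = (time.monotonic() - start) * 1000
    return {
        "url": url,                       # endpoint URL incl. query string
        "request_headers": headers,       # what we sent
        "request_body": body,
        "response_headers": resp_headers, # what came back
        "response_body": resp_body,
        "status_code": status,
        "response_time_ms": round(elapsed_ms, 2),
    }
```

Attaching an entry like this to every test run means a failing test can be replayed and debugged without guessing what the request looked like.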
History of past test runs
By nature, tests fail. When a test fails, we often ask two questions:
- Why did it fail?
- Has this test failed in the past?
We can answer the first question quickly by looking at the test result, which we discussed in the previous section. But what about the second question? To answer it, we need to keep past test results. Storing test results permanently helps you go back in time and inspect the behavior of any test at any given point. When storing test results, please keep the following in mind:
- History of test results should be easily accessible. Storing raw log files and searching them with command-line tools is time-consuming and requires familiarity with those tools.
- Historical test results should contain the URL, query strings, request headers and body, response headers, body, and status code, response time, execution time, and the validations performed.
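For example, a lightweight, queryable history store could be sketched with SQLite; the table and column names here are illustrative:

```python
import json
import sqlite3

def open_history(path):
    """Open (or create) a queryable store of past test results.

    A relational store beats grepping raw log files: any test's
    history is one indexed query away.
    """
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS runs (
        test_name TEXT, run_at TEXT, passed INTEGER, details TEXT)""")
    return conn

def record_run(conn, test_name, run_at, passed, details):
    # `details` holds the full request/response capture as JSON:
    # URL, query strings, headers, bodies, status code, timings, validations.
    conn.execute("INSERT INTO runs VALUES (?, ?, ?, ?)",
                 (test_name, run_at, int(passed), json.dumps(details)))
    conn.commit()

def history(conn, test_name):
    """Return every recorded run of one test, oldest first."""
    rows = conn.execute(
        "SELECT run_at, passed, details FROM runs "
        "WHERE test_name = ? ORDER BY run_at",
        (test_name,)).fetchall()
    return [(t, bool(p), json.loads(d)) for t, p, d in rows]
```

With a store like this, “has this test failed in the past?” becomes a single query instead of an archaeology session through log files.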
Slow tests
Nothing is worse than waiting hours for test results! You and your team want feedback from your tests, and you want it fast. Your API is fast and responds in milliseconds; why does it take hours to get the test results? Here are some of the main reasons why your tests are slow:
Your framework or tool of choice takes its time! When evaluating tools for API testing, make sure to measure their execution time.
Tests are executed one after the other, not in parallel with each other! For example, let’s say running a test takes 15 seconds; how long does it take to run 50 tests serially? More than 12 minutes! What if we run them in parallel? Probably, 15 seconds! When picking an API testing framework or tool, please consider the ones that run tests in parallel. Speaking of API testing tools, we have an in-depth review of Top 10 API testing tools as well.
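To see how much parallel execution matters, here’s a small Python sketch that runs simulated slow tests serially and then through a thread pool. The 0.1-second `slow_test` is a stand-in for a real API test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_test(n):
    """Stand-in for one API test; the sleep simulates waiting on the network."""
    time.sleep(0.1)
    return f"test-{n}: passed"

def run_serial(count):
    # One after the other: total time grows linearly with the test count.
    return [slow_test(n) for n in range(count)]

def run_parallel(count, workers=10):
    # API tests spend most of their time waiting on I/O, so a thread
    # pool gives near-linear speedups without any test changes.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(slow_test, range(count)))
```

With 10 of these simulated tests, the serial run takes roughly a second while the parallel run finishes in roughly the time of a single test; the gap only widens as the suite grows.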
Running tests on slow infrastructure results in slow test execution, which is one of the main reasons to consider using tools like Testfully for API testing rather than provisioning your own infrastructure for API test automation.
Not running tests as part of CI/CD pipelines
Catching bugs and verifying the correctness of an API are the two main reasons we automate API testing. That said, we often see teams that do not include API integration tests in their Continuous Integration (CI) or Continuous Deployment (CD) pipelines, or include them only as an optional step. Including tests in your CI/CD pipeline helps you catch bugs early, before they reach production. While building your tests, consider the following items:
Make sure it’s possible to run your tests as part of your CI/CD pipeline. For example, Testfully CLI allows you to run your tests as part of a CI/CD pipeline.
When tests are included in CI/CD pipelines, trigger deployments only when all of them pass.
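As an illustration, a hypothetical GitHub Actions pipeline could gate deployment on the test job via `needs`; the script names below are placeholders for your own commands, not real files:

```yaml
# Hypothetical pipeline: the deploy job runs only when api-tests passes,
# because "needs" makes it conditional on that job succeeding.
jobs:
  api-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder: substitute your own test runner here.
      - run: ./scripts/run-api-tests.sh

  deploy:
    needs: api-tests   # skipped automatically if api-tests fails
    runs-on: ubuntu-latest
    steps:
      - run: ./scripts/deploy.sh
```

The important part is the dependency edge: deployment must be unreachable while any test is red, never a parallel or optional step.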
Dependent tests
A test that depends on something else in order to pass is not a good test. Experience has shown us that such tests tend to fail frequently and introduce maintenance overhead. A test can depend on many things; let’s go through some of the most common ones:
Some tests depend on other tests to pass, often on the data that the other test generates. If your test requires some data, the test itself should be in charge of preparing it. Testfully’s multi-step test scenarios allow you to prepare data for your tests.
Sometimes, the order of test execution becomes important. This can happen for many reasons, but missing clean-up steps in some tests is one of the most common. Ideally, you should be able to run your tests in any order and have them continue to pass.
Time of day, month, and year sometimes matter. For example, I remember a test that once started failing because of daylight saving time in Australia!
Focusing on quantity over quality
The total number of your tests does not mean anything; the quality of your tests does. Rather than worrying about the total number of tests, focus on what you have covered and what you need to cover next.
If you want to measure something, focus on your team’s confidence in new releases. How much confidence does your current test strategy give to your team for the next big release?
Lack of reusability
Reusable tests help you test more without spending more time on your tests. They allow you to test APIs in different environments and with different inputs. Moreover, they prevent test duplication and reduce maintenance overhead. To help you define reusable test cases, we have put together a set of best practices for you to consider:
- Load the URL of your endpoints via config. Using this technique enables running tests against multiple environments.
- If your test requires input, try to load them via config values or variables in your test.
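For instance, loading the base URL from config might look like this in Python; `API_BASE_URL` is an illustrative variable name, not a convention of any particular tool:

```python
import os

def endpoint(path, base_url=None):
    """Build a full endpoint URL from config rather than hard-coding it.

    The base URL comes from an explicit argument or the API_BASE_URL
    environment variable, so the same test can target any environment.
    """
    base = base_url or os.environ.get("API_BASE_URL", "http://localhost:8080")
    # Normalise slashes so "base/" + "/path" doesn't produce "base//path".
    return f"{base.rstrip('/')}/{path.lstrip('/')}"
```

The same test then runs against staging, production, or a local instance by changing one environment variable instead of editing the test itself.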
Confusing error messages
A good test is a test that fails with a decent error message, an error message that helps you understand what went wrong and why it went wrong.
The best way to assess the quality of your error messages is to let your test fail. Tweak your test so it fails, run it, and inspect the error message. Is it clear? If not, change your test to include a better error message. Moreover, ask for feedback from your colleagues. As the author of a test, it’s easy for you to understand the reason behind a failure. Would it also be easy for your colleague to understand the root cause of an issue by looking at the error message?
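As a sketch, compare a bare `assert actual == expected` with an assertion that carries a descriptive message; the helper below is illustrative:

```python
def check_status(expected, actual, url):
    """Assert on a status code with a message that says what was expected,
    what actually happened, and where - enough to start debugging
    without re-running the test."""
    assert actual == expected, (
        f"{url} returned HTTP {actual}, expected {expected}; "
        f"check the captured response body for the error payload"
    )
```

A bare assertion fails with something like `assert 500 == 200`, which tells a colleague nothing; the message above names the endpoint, both status codes, and where to look next.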
Redundant coverage
A good test fails when it catches a bug. A not-so-good test, on the other hand, keeps failing for unrelated reasons; don’t write those tests. Redundant coverage is when your test validates something that is not its concern. Redundant coverage includes:
- Checking response time when it does not matter to your test how fast the API responds to your requests
- Validating the entire response body when you only need to validate a portion of the response body
No clean-ups
A good test cleans up after itself, meaning that it deletes the records it adds to the system after the test execution. As we mentioned earlier, your test may need some records in the database, which the test itself should prepare. Moreover, when the test is done, it should delete those records so the new data doesn’t cause issues for other test cases.
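One way to guarantee clean-up, sketched in Python: wrap the data preparation in a context manager whose `finally` block always deletes the record, even when the test body fails. Here `api` is any client exposing hypothetical `create_user`/`delete_user` calls:

```python
import contextlib

@contextlib.contextmanager
def temporary_user(api, name):
    """Create a record for a test and guarantee it is deleted afterwards.

    The finally clause runs even if the test body raises, so a failing
    test cannot leave data behind to break other tests.
    """
    uid = api.create_user(name)
    try:
        yield uid
    finally:
        api.delete_user(uid)
```

A test then reads `with temporary_user(api, "eleanor") as uid: ...`, and the clean-up happens automatically whether the assertions inside pass or fail.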