False positive & false negative in software testing

Automated software testing is one of the critical components of software development and is essential for ensuring quality in software products. As a result, companies switch from traditional manual testing to cost-efficient automated software testing to test more often with less effort and improve the quality of their software products.

That said, automated software testing is challenging, and getting it right requires knowledge and experience. It becomes even more difficult when you realize that test cases can lie to you about the presence of bugs in your software. This article covers false positives and false negatives, the two ways a test can lie to you.

False Positive & False Negative analogy

Experts in automated software testing have borrowed the terms False Positive and False Negative from the field of medical testing, where the purpose of a test is to determine whether or not a patient has a particular medical condition.

The words positive and negative relate to the result of a hypothesis. Positive implies that the hypothesis was true, and negative means that the hypothesis was false.

With that said, the result of a medical examination can be one of the following items:

  • True Positive: the test correctly indicates that the patient has the medical condition.

  • False Positive: the test mistakenly indicates that the patient has the medical condition when there is none.

  • True Negative: the test correctly indicates that the patient does not have the medical condition.

  • False Negative: the test mistakenly indicates that the patient does not have the medical condition when, in fact, the patient has the disease.

Depending on the desired test result, both positive and negative can be considered bad. In a test for COVID, for example, you want a negative result. Although a positive result is bad, a False Negative is the worst: while you believe you don't have COVID, you actually do, so you may not realize you need treatment and may unknowingly spread the virus to others.

False Positive / Negative in Software Testing

Automated tests are responsible for verifying the software under test and catching bugs. In this context, positive means that at least one test found a bug or a malfunctioning feature; negative means that no test found a bug or malfunctioning feature in the code.

Ideally, all automated tests should give negative signals. However, in reality, some tests show False Positive or False Negative signals.

False Positive in Software Testing

In the context of automated software testing, a False Positive means that a test case fails while the software under test does not contain the bug that the test tries to catch. As a result of a false positive, test engineers spend time hunting down a bug that does not exist.

A false positive indicates a bug when there is none.

While false positive results have no direct impact on the software product, they waste engineers' time and erode trust. Some engineers may lose faith in the test suite and start removing tests that produce false positives.
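
A common source of false positives is a test that asserts more than the code's actual contract. The sketch below (a hypothetical example; `get_tags`, `brittle_test`, and `robust_test` are made-up names) shows a test that can fail even though the implementation is correct, because it assumes an ordering the code never promised:

```python
def get_tags(article):
    # Correct implementation: tags are a set, order is not part of the contract.
    return set(article["tags"])

def brittle_test():
    article = {"tags": ["api", "testing"]}
    # False positive risk: comparing to an ordered list can fail
    # even though get_tags behaves correctly, because set ordering
    # is not guaranteed.
    return list(get_tags(article)) == ["api", "testing"]

def robust_test():
    article = {"tags": ["api", "testing"]}
    # Assert only what matters: the set of tags, not their order.
    return get_tags(article) == {"api", "testing"}
```

When `brittle_test` fails, engineers hunt for a bug that does not exist; `robust_test` asserts the actual contract and avoids the false positive.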

False Negative in Software Testing

In the context of automated software testing, a False Negative means that a test case passes while the software contains the bug that the test was meant to catch. As a result of false negatives, bugs land in production and cause issues for customers.

A false negative indicates no bug when there is one.

Both false positives and false negatives are considered harmful. While a false positive wastes your time, a false negative lies to you and lets a bug remain in the software indefinitely. That said, false negatives get the worst press since they are more damaging and introduce a false sense of security.
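
A typical cause of false negatives is a test whose assertions are too weak to exercise the buggy path. In this hypothetical sketch (`is_valid_email` and both test functions are made-up names), the implementation is clearly broken, yet the weak test still passes:

```python
def is_valid_email(address):
    # Buggy implementation: accepts any string containing "@",
    # including clearly invalid input such as "@" alone.
    return "@" in address

def weak_test():
    # False negative: this passes even though the implementation is
    # buggy, because it never checks an input that must be rejected.
    return is_valid_email("user@example.com")

def stronger_test():
    # Catches the bug by also asserting that invalid input is rejected.
    return is_valid_email("user@example.com") and not is_valid_email("@")
```

`weak_test` reports no bug when there is one; `stronger_test` fails against the buggy implementation, which is exactly what we want.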

False Negatives & False Positives in Static code analysis

Static code analysis is a software development practice that analyzes source code to identify potential errors, both semantic and syntactic, before the code is even run. Both false negatives and false positives apply to this field as well. In static code analysis, a positive result is bad news: it suggests a defect in the source code. A false negative is worse, however, since it leaves you unaware of a defect in the code.

False Negatives & False Positives in Dynamic code analysis

Dynamic code analysis is the process of evaluating computer software for quality and correctness by executing the program, whereas static code analysis analyzes code without running it. In the context of a code coverage tool, a positive result is good: it suggests that you have achieved the minimum desired code coverage. Conversely, a false positive in this context means you think you have covered some area of the code when you have not.


In summary, the context in which the terms positive and negative are used defines whether each is good or bad, which makes the concept confusing. Luckily, there is an easy formula to remember; it helps you figure out whether a false positive or a false negative is worse.

If a positive is bad, a false negative is worse. If a positive is good, a false positive is worse!

Catch false negatives in tests

False Positive tests are easy to catch: when a test fails, we can look at the root cause of the failure and decide whether it was a false positive. But what about false negatives? How do we catch them? Remember, a test with a false negative lies to you by not failing when it should.

In practice, a test that passes may be hiding a False Negative signal. Should we lose faith in automated software testing, then? No.

To catch false negatives in tests, one should practice a technique known as mutation testing.

Software engineers practice mutation testing by changing the code to introduce a bug, then running the test responsible for catching that bug. If the test still passes, it is giving us a false negative.

Avoid False Positives & False Negatives

As we discussed earlier, both false positive and false negative signals interrupt us, so wouldn’t it be better to avoid false positives and false negatives rather than hunting them down? In this section, we will go through some of the best practices to prevent false positives and false negatives.

  • Keep automated tests simple and minimize the logic in your test code, and always remember that test code is itself untested. The less logic you include in your test cases, the less chance the test misbehaves.

  • When writing test cases by hand, leverage open-source testing frameworks and libraries instead of bespoke testing frameworks and libraries. This is because open-source libraries are battle-tested and used by many companies.

  • Tests are code too, and should be subject to code review by colleagues, since no one can guarantee that code is bug-free.

  • A change in the source code should trigger a review of the companion test cases, to prevent false negatives introduced by refactoring.

  • Software engineers should practice mutation testing before committing the code for a feature or a bug fix.

  • Your tests should only assert what matters to them. Often, redundant coverage results in false positives.

  • A dedicated testing environment helps reduce false positives. This environment should be accessible only to the test cases.
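
To illustrate the "assert only what matters" practice in an API-testing setting, here is a hypothetical sketch (`create_user` simulates an API response; both test functions are made-up names). Comparing the entire payload ties the test to fields it does not care about:

```python
def create_user(name):
    # Simulated API response; "id" and "created_at" vary between
    # environments and runs.
    return {"name": name, "id": 42, "created_at": "2023-08-01T00:00:00Z"}

def over_asserting_test():
    resp = create_user("Ada")
    # Brittle: comparing the whole payload fails whenever an
    # irrelevant field (id, timestamp) differs — a false positive.
    return resp == {"name": "Ada", "id": 1, "created_at": "..."}

def focused_test():
    resp = create_user("Ada")
    # Robust: assert only the field this test is responsible for.
    return resp["name"] == "Ada"
```

The focused test keeps its assertion scoped to the behaviour it verifies, so unrelated changes to the response cannot produce a false positive.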


This article covered the concept of false positive and false negative results in software testing. As we discussed, a false negative result is worse than a false positive, since the bug stays in the code indefinitely. We introduced a technique called mutation testing, which test engineers can use to identify false negatives in their tests. Finally, we listed some best practices to avoid false positive and false negative results in your tests.

We hope you found this article useful. If you have any questions or suggestions, please feel free to reach out to us via our contact form or Twitter.
