**Best guess:** If you test negative, the likelihood is very high that you’re not contagious.

If you test positive, the likelihood is high that you are contagious … but there’s about a 1 in 4 chance that you got a false positive, so assume that you are contagious and re-test the next day to be sure.

*Keep reading for the math and the underlying assumptions…*

============

**DISCLAIMER:** I’m not a medical professional or scientist — just a curious, self-interested guy. So, don’t take anything that I say or write as medical advice. Get that from your doctor!

============

In a prior post, we outlined the logic that CDC Director Walensky laid out regarding antigen rapid tests in a 2020 paper (i.e. before she started walking the political minefield).

Her fundamental conclusion at the time:

“The antigen rapid tests are ideally suited for surveillance testing (i.e. determining if a person is contagious) since they yield positive results precisely when the infected individual is maximally infectious.”

OK, we’ll take that as our qualitative starting point.

=============

__What exactly is “accuracy”?__

Keying off Walensky’s conclusion (above), we’ll focus on the use of antigen rapid tests for __surveillance testing__ (i.e. determining if a person is contagious).

In that context, antigen rapid test accuracy has two components: __sensitivity__ and __specificity__:

> __Sensitivity__ — *sometimes called Positive Percent Agreement (PPA)* — is the probability that a __contagious__ person’s test result is __positive__. When it isn’t, it’s called a false negative.

> __Specificity__ — *sometimes called Negative Percent Agreement (NPA)* — is the probability that a person who is __not contagious__ gets a __negative__ test result. When it doesn’t, it’s called a false positive.

IMPORTANT: Keep in mind that we’re focusing on surveillance testing … whether or not a person is contagious.

For early-on diagnostic testing (i.e. whether a person may need treatment or quarantine), the above criteria would be “infected”, not “contagious” … and the answers are different.

=============

__Now, let’s add some real life parameters and do the math__…

Johns Hopkins maintains a website that reports sensitivity and specificity for all test kits granted Emergency Use Authorization.

For example, Abbott’s BinaxNOW — one of the most popular — is listed as scoring **84.6%** on __sensitivity__ (if contagious, the test result is positive) and **98.5%** on __specificity__ (if not contagious, the test result is negative).

That’s testing accuracy, but it’s only part of the story.

What we really care about is the tests’ __predictive value__.

As JHU puts it…

Positive predictive value (PPV) and negative predictive value (NPV) provide insight into how accurate the positive and negative test results are expected to be in a given population.

Predictive value is based on test accuracy and existing disease prevalence.

OK, so to calibrate predictive value, let’s assume that Covid __prevalence__ is __5%__ *(i.e. 1 in 20 people that a person runs into is infected)* … and plug the Abbott sensitivity and specificity numbers into the below Bayesian table.

For a detailed walk-through of a comparable Bayesian table, see our prior post: If I test positive for COVID, am I infected?

The key numbers — the predictive values — are in the bottom rows of the yellow and orange boxes:

> **Less than 1%** of the negative test results are **false negatives** (*the orange box)*

> But, **25.2%** of the positive test results are **false positives** (*the yellow box).*
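As a minimal sketch of that Bayesian arithmetic (using the Abbott figures and the assumed 5% prevalence quoted above):

```python
# Sketch of the Bayesian table: Abbott BinaxNOW figures, assumed 5% prevalence.
sensitivity = 0.846  # P(positive | contagious)
specificity = 0.985  # P(negative | not contagious)
prevalence = 0.05    # assumed share of the population that is contagious

true_pos = sensitivity * prevalence                # contagious & tests positive
false_pos = (1 - specificity) * (1 - prevalence)   # not contagious & tests positive
false_neg = (1 - sensitivity) * prevalence         # contagious & tests negative
true_neg = specificity * (1 - prevalence)          # not contagious & tests negative

ppv = true_pos / (true_pos + false_pos)  # positive predictive value
npv = true_neg / (true_neg + false_neg)  # negative predictive value

print(f"PPV: {ppv:.1%}  (false positives: {1 - ppv:.1%})")  # → PPV: 74.8%  (false positives: 25.2%)
print(f"NPV: {npv:.1%}  (false negatives: {1 - npv:.1%})")  # → NPV: 99.2%  (false negatives: 0.8%)
```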

=============

__My take__:

If a patient gets a negative test result (based on these parameters), it’s virtually certain that they’re not contagious … but they may have small traces of the virus in their system.

If a patient tests positive, there’s high likelihood (**74.8%**) that they’re contagious … the likelihood is higher if they are symptomatic.

But, if a person is __asymptomatic__ and tests positive, there’s a **1 in 4 chance** that they got a **false positive** and might not be contagious.

Before going out & about, it would make sense to take a second test to validate (or refute) the positive result.

**A second positive test (taken a day or 2 later) reduces the chance of a false positive to essentially zero.**
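To see why, here’s a rough sketch that applies Bayes’ rule twice, assuming (simplistically) that the two tests err independently of each other:

```python
# Why a second positive nearly rules out a false positive,
# assuming the two tests err independently (a simplifying assumption).
sensitivity = 0.846  # Abbott BinaxNOW figures from the post
specificity = 0.985
prior = 0.05         # assumed 5% prevalence

def update_on_positive(p_contagious):
    """Bayes update: probability of being contagious after one positive result."""
    p_positive = sensitivity * p_contagious + (1 - specificity) * (1 - p_contagious)
    return sensitivity * p_contagious / p_positive

after_one = update_on_positive(prior)      # ~74.8% contagious
after_two = update_on_positive(after_one)  # ~99.4% contagious

print(f"false-positive chance after one test:  {1 - after_one:.1%}")  # ~25.2%
print(f"false-positive chance after two tests: {1 - after_two:.1%}")  # ~0.6%
```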

=============

**IMPORTANT**: These Bayesian estimates are dependent on the sensitivity and specificity of the test …

__and on the assumed prevalence of the virus__.

For example, if the prevalence rate jumps from 5% (1 in 20 people are contagious) to 25% (1 in 4 people are contagious) … then the positive predictive value soars to 95% and the negative predictive value decreases to 95%.

Conversely, the likelihood of a false positive drops to 5%, and the likelihood of a false negative increases from near-zero to 5%.
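That sensitivity to prevalence can be sketched with the same Abbott figures (the 25% scenario is the hypothetical one described above):

```python
# How predictive value shifts with prevalence, holding test accuracy fixed.
sensitivity, specificity = 0.846, 0.985  # Abbott BinaxNOW figures from the post

def predictive_values(prevalence):
    """Return (PPV, NPV) for a given prevalence via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.05, 0.25):
    ppv, npv = predictive_values(prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
# prevalence 5%:  PPV 75%, NPV 99%
# prevalence 25%: PPV 95%, NPV 95%
```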

==============