COVID Math: So, how accurate are rapid tests?

Best guess: If you test negative, the likelihood is very high that you’re not contagious.

If you test positive, the likelihood is high that you are contagious … but there’s about a 1 in 4 chance that you got a false positive, so assume that you are contagious and re-test the next day to be sure.

Keep reading for the math and the underlying assumptions…
============
DISCLAIMER: I’m not a medical professional or scientist — just a curious, self-interested guy.  So, don’t take anything that I say or write as medical advice. Get that from your doctor!
============

In a prior post, we outlined the logic that CDC Director Walensky laid out regarding antigen rapid tests in a 2020 paper (i.e. before she started walking the political minefield).

Her fundamental conclusion at the time:

“The antigen rapid tests are ideally suited for surveillance testing (i.e. determining if a person is contagious) since they yield positive results precisely when the infected individual is maximally infectious.”

OK, we’ll take that as our qualitative starting point.

=============

What exactly is “accuracy”?

Keying off Walensky’s conclusion (above), we’ll focus on the use of antigen rapid tests for surveillance testing (i.e. determining if a person is contagious).

In that context, antigen rapid test accuracy has two components: sensitivity and specificity:

> Sensitivity (sometimes called Positive Percent Agreement, or PPA) is the probability that a contagious person’s test result is positive. When it isn’t, it’s called a false negative.

> Specificity (sometimes called Negative Percent Agreement, or NPA) is the probability that a person who is not contagious gets a negative test result. When it doesn’t, it’s called a false positive.

IMPORTANT: Keep in mind that we’re focusing on surveillance testing … whether or not a person is contagious.

For early-on diagnostic testing (i.e. whether a person may need treatment or quarantine), the above criteria would be “infected”, not “contagious” … and the answers are different.
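To make those two definitions concrete, here’s a minimal sketch in Python. The study counts are hypothetical (picked to land near the BinaxNOW figures discussed below), not from any real trial:

```python
# Hypothetical validation-study counts (illustrative only).
true_pos, false_neg = 846, 154   # contagious people: tested positive / negative
true_neg, false_pos = 985, 15    # non-contagious people: tested negative / positive

sensitivity = true_pos / (true_pos + false_neg)  # P(positive | contagious)
specificity = true_neg / (true_neg + false_pos)  # P(negative | not contagious)

print(f"Sensitivity (PPA): {sensitivity:.1%}")   # 84.6%
print(f"Specificity (NPA): {specificity:.1%}")   # 98.5%
```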

=============

Now, let’s add some real life parameters and do the math

Johns Hopkins maintains a website that reports sensitivity and specificity for all test kits granted Emergency Use Authorization.

For example, Abbott’s BinaxNOW — one of the most popular — is listed as scoring 84.6% on sensitivity (if contagious, the test result is positive) and 98.5% on specificity (if not contagious, the test result is negative).

That’s testing accuracy, but it’s only part of the story.

What we really care about is the tests’ predictive value.

As JHU puts it…

Positive predictive value (PPV) and negative predictive value (NPV) provide insight into how accurate the positive and negative test results are expected to be in a given population.

Predictive value is based on test accuracy and existing disease prevalence.

OK, so to calibrate predictive value, let’s assume that Covid prevalence is 5% (i.e. 1 in 20 people you run into is contagious) … and plug the Abbott sensitivity and specificity numbers into the Bayesian table below.

For a detailed walk-through of a comparable Bayesian table, see our prior post: If I test positive for COVID, am I infected?     

[Bayesian table: actual contagiousness vs. test result, with the positive-result predictive values in the yellow box and the negative-result values in the orange box]

The key numbers — the predictive values — are in the bottom rows of the yellow and orange boxes:

> Less than 1% of the negative test results are false negatives (the orange box)

> But, 25.2% of the positive test results are false positives (the yellow box).
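If you’d rather check the arithmetic than squint at the table, here’s a minimal Python sketch of the same Bayes calculation, using the BinaxNOW numbers and the assumed 5% prevalence:

```python
prevalence  = 0.05    # assumption: 1 in 20 people is contagious
sensitivity = 0.846   # BinaxNOW, per the JHU site
specificity = 0.985

# The four cells of the Bayesian table.
true_pos  = prevalence * sensitivity
false_neg = prevalence * (1 - sensitivity)
true_neg  = (1 - prevalence) * specificity
false_pos = (1 - prevalence) * (1 - specificity)

ppv = true_pos / (true_pos + false_pos)  # P(contagious | positive result)
npv = true_neg / (true_neg + false_neg)  # P(not contagious | negative result)

print(f"PPV {ppv:.1%} -> {1 - ppv:.1%} of positives are false")  # 74.8% / 25.2%
print(f"NPV {npv:.1%} -> {1 - npv:.1%} of negatives are false")  # 99.2% / 0.8%
```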

=============

My take:

If a patient gets a negative test result (based on these parameters), it’s virtually certain that they’re not contagious … but they may have small traces of the virus in their system.

If a patient tests positive, there’s a high likelihood (74.8%) that they’re contagious … the likelihood is higher if they’re symptomatic.

But, if a person is asymptomatic and tests positive, there’s a 1 in 4 chance that they got a false positive and might not be contagious.

Before going out & about, it would make sense to take a second test to validate (or refute) the positive result.

A second positive test (taken a day or 2 later) reduces the chance of a false positive to essentially zero.
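To see why, run the Bayes update a second time: the 74.8% PPV from the first positive becomes the new prior. This sketch assumes the two tests err independently, which is a simplification (back-to-back tests on the same person aren’t perfectly independent):

```python
sensitivity, specificity = 0.846, 0.985

prior = 0.748  # P(contagious) after the first positive at 5% prevalence
posterior = (prior * sensitivity) / (
    prior * sensitivity + (1 - prior) * (1 - specificity)
)
print(f"P(contagious | two positives): {posterior:.1%}")      # ~99.4%
print(f"Remaining false-positive odds: {1 - posterior:.1%}")  # ~0.6%
```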

=============

IMPORTANT: These Bayesian estimates depend on the sensitivity and specificity of the test … and on the assumed prevalence of the virus.

For example, if the prevalence rate jumps from 5% (1 in 20 people are contagious) to 25% (1 in 4 people are contagious) … then the positive predictive value soars to 95% and the negative predictive value decreases to 95%.

Conversely, the likelihood of a false positive drops to 5%, and the likelihood of a false negative increases from near-zero to 5%.
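Here’s a quick sketch that sweeps the assumed prevalence to show how the predictive values move (same BinaxNOW sensitivity and specificity as above):

```python
sensitivity, specificity = 0.846, 0.985

for prevalence in (0.05, 0.10, 0.25):
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    fn = prevalence * (1 - sensitivity)
    tn = (1 - prevalence) * specificity
    print(f"prevalence {prevalence:.0%}: "
          f"PPV {tp / (tp + fp):.1%}, NPV {tn / (tn + fn):.1%}")

# prevalence 5%:  PPV 74.8%, NPV 99.2%
# prevalence 10%: PPV 86.2%, NPV 98.3%
# prevalence 25%: PPV 94.9%, NPV 95.0%
```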

==============

 
