What does a positive RAT result mean?
It depends on prevalence. A new study shows false positives are very unlikely.
Welcome to Plugging the Gap (my email newsletter about Covid-19 and its economics). In case you don’t know me, I’m an economist and professor at the University of Toronto. I have written lots of books including, most recently, on Covid-19. You can follow me on Twitter (@joshgans) or subscribe to this email newsletter here. (I am also part of the CDL Rapid Screening Consortium. The views expressed here are my own and should not be taken as representing organisations I work for.)
Last week, the first major study arising out of the CDL Rapid Screening Consortium was published in the Journal of the American Medical Association (JAMA). There’s clearly massive interest in it as it has been viewed over 300,000 times (twice as many times as the second most viewed article on JAMA in the last month).
The study took our data from January 2021 up until 13th October 2021 (so no Omicron), which covered 903,408 rapid antigen tests (RATs) across organisations all over Canada. In that time, we recorded 1322 positive results (that is, 0.15% of all tests), of which 1103 had PCR follow-ups recorded. There were 462 false positives amongst them. Relative to the 903,408 tests administered, this is a very low number, implying these tests have very high specificity. But there has been much confusion over that point, so I figured I should write a little explainer here. The JAMA piece had a strict word limit and so didn't leave room for anything other than a naked reporting of the results.
Many people have taken 462 false positives out of 1103 positives and calculated that, if you received a positive result on a RAT, there was a 42% chance you were not actually positive. That sounds bad, but you have to ask: was that because the tests were bad, or because there just wasn't much Covid-19 in the population over that time period?
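As a quick back-of-the-envelope check, here is where the 42% figure comes from, using the counts reported in the study (the variable names are my own):

```python
# Counts reported in the JAMA study.
positives_with_pcr = 1103   # RAT positives with a PCR follow-up recorded
false_positives = 462       # of those, the PCR came back negative

false_positive_share = false_positives / positives_with_pcr
print(f"{false_positive_share:.0%}")  # prints 42%
```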
There is some evidence that some of the tests were bad. As the paper reported, 278 of the false positives came from a single batch of Abbott PanBio tests distributed across two workplaces. Even so, a bad batch was a rare occurrence, which is the important point here.
The background prevalence of Covid-19 was low. In our data, positive results actually tracked that nicely.
The low prevalence is important. If only 1 percent of the population had Covid-19, we would expect, naturally, about 1 percent of the tests to be true positives. Now suppose the RAT had 99 percent specificity (meaning that a person who did not have Covid-19 had a 1 percent chance of receiving a positive result anyway). Then roughly 1 percent of all tests would be false positives too, so out of all positive results, about half would be false positives.
By contrast, if 10 percent of the population had Covid-19, 10 percent of the tests would be true positives while about 1 percent would still be false positives. Now, however, only around 10 percent of all positive tests would be false positives.
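The prevalence argument above can be sketched in a few lines. This is a simplification that, like the text, assumes every true case tests positive (perfect sensitivity) and takes 99 percent specificity as given:

```python
def false_positive_fraction(prevalence, specificity=0.99):
    """Share of positive results that are false, under perfect sensitivity."""
    true_pos = prevalence                              # infected, test positive
    false_pos = (1 - prevalence) * (1 - specificity)   # uninfected, test positive anyway
    return false_pos / (true_pos + false_pos)

print(f"{false_positive_fraction(0.01):.0%}")  # at 1% prevalence: about half
print(f"{false_positive_fraction(0.10):.0%}")  # at 10% prevalence: under 10%
```

The same test looks dramatically better or worse depending only on how much Covid-19 is circulating.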
Our study was conducted when prevalence was at a low (around 1 percent) level. Thus, finding that only 42% of positive results were false indicates that the specificity of the tests was very high (above 99 percent). That means that when you take a test during the Omicron wave, especially if you have symptoms but even if you don't, there is a less than 10 percent chance that your positive result is false. In other words, the overwhelming likelihood is that you have Covid. These are very good tests.
Thus, neither when prevalence was low nor when it was high was there a real possibility that giving people RATs would force massive numbers of people to isolate needlessly. In our system, isolation lasted only a day until the PCR result cleared up the issue. But even if it had not, the isolation costs would have been low. To be sure, there would be more isolation than if you didn't test at all, but remember: we picked up the true-positive needles in the haystack and broke chains of transmission in the workplace. Had those transmissions occurred, there would have been many more isolations, as workplaces would have been forced to close.