Reader requests: NEA school case tally, and too-sensitive tests?

Aug 30, 2020 | Coronavirus

I received a couple of interesting questions from readers, about pandemic-related things they’d read or heard.

Reader question #1: Can we trust the teachers’ union on coronavirus case numbers?

NPR ran a story about a new tracker of coronavirus cases in US public K-12 schools. The tracker is managed by the National Education Association (NEA). This is potentially problematic because the NEA has a clear agenda on this issue. On its pandemic-focused website for members, the union explicitly states its position: “That’s why, since last week, NEA has called for schools to be closed.” By posting every positive test it can find for a school employee or student nationwide, is the NEA performing a public service? Is it doing public health research? Or is it advancing an agenda?

So I took a look at the tracker here. This tally started as a volunteer effort by a teacher who searched the internet for reports of virus-positive cases associated with K-12 schools in the US. Then people started submitting reports to her, the project grew, and the NEA stepped in to manage it. The “tracker” is essentially a collection of anecdotes, listing any positive tests mentioned in local news outlets, Facebook posts, school district press releases, and other public sources, generally submitted by private individuals. (It’s not clear whether they also post cases that are not in the public domain, e.g., when someone simply tells them there were four cases at their school.) Cases are organized by state and school district.

As the site itself says, “this should not be taken as a comprehensive accounting of all COVID-19 cases in educational institutions.”

So is this a problem?
As a single clearinghouse for a lot of publicly available case reports, the tracker is a convenient place for interested parties to find data that is probably mostly true. It’s a tool against secrecy and against districts covering up cases. The problem isn’t the data on the site. The problem is the data they don’t have and, more important, how the data are used and interpreted. Each case report feels shocking, and serves to reinforce the NEA’s narrative that schools are unsafe. But the quantitative context is missing. What about all the COVID-negatives? Without a denominator, we have cases, not rates. Case reports are stories. Rates are public health data.
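To make the denominator point concrete, here’s a minimal sketch in Python with entirely hypothetical numbers (the case counts and populations are made up for illustration): the same raw case count means very different things depending on the size of the population it came from.

```python
# Hypothetical illustration: a raw case count vs. a rate per 100,000.
# All numbers below are made up for the sake of the example.

def rate_per_100k(cases: int, population: int) -> float:
    """Convert a raw case count into a rate per 100,000 people."""
    return cases / population * 100_000

district_cases = 12            # positive tests reported in a school district (hypothetical)
district_population = 45_000   # students + staff in that district (hypothetical)

county_cases = 900             # positive tests in the surrounding county (hypothetical)
county_population = 1_200_000  # county population (hypothetical)

print(f"District rate: {rate_per_100k(district_cases, district_population):.1f} per 100k")
print(f"County rate:   {rate_per_100k(county_cases, county_population):.1f} per 100k")
# Without the denominators, "12 cases in the schools" sounds alarming;
# with them, you can at least compare the schools to the community around them.
```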

This is a problem with a lot of reporting on most topics, whether by traditional media, fringe internet media, or your Facebook friends. Stories, case reports, and anecdotes get heavy coverage because they interest us, but they rarely come with the analysis or comparison needed to convey what they mean in a larger context. As consumers of information, we are presented with images and anecdotes that influence our feelings, especially our fears. Consider the tremendous interest that accompanied reporting about “murder hornets” a few months ago. Was the resulting fear proportional to the threat? Is this particular invasive species such a danger to ecosystems or the economy that it deserved that emotional response compared to, say, invasive nutria or the glassy-winged sharpshooter? Maybe, maybe not. But the story and images of Asian giant hornets are so compelling that any rational discussion of the species’ relative importance is lost.

As humans we are story-driven, not number-driven. We can’t process quantitative risk very well. We process stories. Typically news stories are sensational precisely because they are uncommon or rare events. The frequency of an event (such as an urban riot, or a crime committed by an illegal immigrant, or a tornado, or a vaccine adverse event, or whatever) has much less influence on our fears and opinions than anecdotes do. The images we see, the obituaries we read, the movies we watch, the reports we hear determine what we’re afraid of, even if what we learn to fear is exceedingly rare. Bill Gates likes to ask, “What’s the deadliest animal in the world?” People are inclined to answer “sharks” (thanks, Jaws) or “snakes” (thanks, Book of Genesis and most horror movies) or even “humans.” The correct answer? Mosquitoes. Sharks kill about ten people per year; dogs kill about 25,000 thanks to rabies; and mosquitoes kill about 725,000 per year thanks to malaria and other mosquito-borne diseases. Our fears are completely disproportionate to the threats.

Every time you engage with your preferred news source(s) you expose yourself to stories that will shape your fears. By definition, your fears are not wholly rational. If they were, then we would generally all agree on a rank order of the issues we should be most concerned about. If you tell me what issues frighten you the most, I can tell you which tribe you belong to. The media you choose to consume both reflects and drives your fears, and therefore your political opinions.

If you are immersed in stories about urban crime, you’ll fear going into the city even if crime rates are at an all-time low.

If you are immersed in stories about positive coronavirus tests in schools, you’ll be afraid to let your kids go back to class even if COVID rates are declining or low.

So here’s my point. There’s nothing inherently wrong with what the NEA is doing with the tracker. But the tracker is a collection of anecdotes, not a contextual assessment of the rate of infection in schools versus the surrounding community, of the trends, of the likelihood of hospitalization, or even of whether the positive cases are new or old infections (see Question #2). The NEA’s information can be true, and useful, and still serve an agenda that favors fear over quantitative risk assessment.

Reader question #2: Are the PCR tests used to detect SARS-CoV-2 too sensitive?

Back to more technical stuff. Another reader asked me to comment in simple language on a New York Times article entitled “Your Coronavirus Test Is Positive. Maybe It Shouldn’t Be.” Here’s my summary.

The most widely used test for coronavirus relies on a technology called RT-PCR. PCR looks for viral RNA, not live virus directly. It is extremely sensitive and can detect a broken fragment of a viral genome whether or not any live virus is present. This is useful for answering the question, “Is there any viral RNA in this sample?” However, it tells us very little about the important practical question, “Is this person infectious? Can they spread the virus to another person?”

As you’ve surely heard, a person can test positive for weeks after symptoms abate. Almost certainly such people are not contagious to others. In that sense, their positive test is true, but misleading. It’s “too sensitive.” These people may have leftover fragments of viral RNA in their noses, but they do not need to be isolated.

To control the pandemic, we really need to identify people who are infectious, not everyone who has a bit of viral RNA on board.

The article suggests a change in our testing regime to accomplish this. Rather than relying on a highly sensitive, slow test like PCR, we should mass-deploy a less sensitive, fast test. Such a test would “ignore” people who have only a tiny amount of viral RNA and flag those who have a lot, and who are therefore most likely to be contagious.

But what about people who are newly infected, and have only a little bit of viral RNA because the virus has just started multiplying in their bodies?

We believe that people are most contagious in the day or two before they show symptoms, and for about five days after. If we miss these early infections because the test was not sensitive enough, that’s a problem. The solution? Frequent, repeated testing. That way, when a person’s viral load passes the test’s detection threshold, and they are more contagious, their test will come back positive. In theory this lets you catch infections early without tagging a bunch of harmless people as “positive.”
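Here’s a minimal sketch of that logic, in Python, using an entirely hypothetical viral-load curve and made-up detection limits (not real clinical values). It shows why a frequent, less sensitive test can flag someone on the way up, while a sparse, highly sensitive test may not return its first positive until the most contagious days have largely passed.

```python
# Hypothetical illustration of testing frequency vs. test sensitivity.
# The viral-load curve and detection limits are made-up numbers chosen only
# to show the shape of the argument, not real clinical values.

# Viral load (arbitrary units) by day since infection: rises, peaks, declines,
# then lingers at low levels as leftover RNA.
viral_load = {
    0: 1, 1: 10, 2: 100, 3: 10_000, 4: 1_000_000,
    5: 5_000_000, 6: 1_000_000, 7: 100_000, 8: 10_000,
    9: 1_000, 10: 100, 11: 10, 12: 10,
}

def first_positive_day(detection_limit, test_days):
    """Return the first scheduled test day on which the load clears the limit."""
    for day in sorted(test_days):
        if viral_load.get(day, 0) >= detection_limit:
            return day
    return None

# A cheap, less sensitive test (high detection limit) taken every day...
rapid = first_positive_day(detection_limit=100_000, test_days=range(0, 13))
# ...versus a very sensitive PCR-like test (low detection limit) taken weekly.
pcr = first_positive_day(detection_limit=10, test_days=[7, 14])

print(f"Daily, less sensitive test: first positive on day {rapid}")   # day 4
print(f"Weekly, highly sensitive test: first positive on day {pcr}")  # day 7
```

The specific numbers don’t matter; the shape of the comparison is the point.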

So there is no single best approach to testing. How you should test depends on what question you want to answer. High-sensitivity PCR, which detects even the tiniest amount of viral RNA, could be used to track incidence and the epidemiological spread of coronavirus in a community (how many cases do we have? is the number changing over time?). A lower-sensitivity test could be used to make individual decisions about whether a person should isolate.

Amy Rogers, MD, PhD, is a Harvard-educated scientist, novelist, journalist, and educator. Learn more about Amy’s science thriller novels, or download a free ebook on the scientific backstory of SARS-CoV-2 and emerging infections, at AmyRogers.com.
