In June a well-known climate scientist opined on Twitter that “I’d be more likely to believe the COVID lab-leak hypothesis if the people pushing it weren’t largely the same people pushing [bogus] conspiracy theories about the 2020 presidential election and climate change.” He has a point. Early in the pandemic, the lab-leak theory was promoted by then president Donald Trump, who was dismissive of masks and social distancing. He speculated that COVID-19 infections might be effectively treated by irradiating sensitive lung tissues with ultraviolet light, using untested and possibly unsafe drugs, or injecting dangerous household cleansers. Trump also knowingly put his own Secret Service detail at risk by riding with them in a closed car while he was fighting an active infection, misrepresented hurricane forecasts and advanced misleading ideas about vaccine safety. And—most egregiously for climate scientists—he repeated the ridiculous claim that climate change was a hoax.
We all judge messages by the messenger. If our trust (or lack of it) is grounded in experience, this pattern is rational: we would be foolish to trust someone who in the past has repeatedly misled us, been mistaken or given us bad advice. We wouldn’t go back to a doctor who had misdiagnosed a serious disease or a car mechanic who had cheated us. We wouldn’t stick with a financial adviser whose stock tips had consistently proved wrong.
To be clear, most scientists think animal spillover is the most likely explanation because that’s where most new diseases come from. True, the source animal has not yet been identified, but it took decades to determine that HIV was derived from primates. True, there is a lab in Wuhan that studies bat viruses, but it’s typical for scientists to study viruses endemic to their regions. And blaming humans for disease is as old as disease itself.
But what do we do when evidence suggests that a claim might be right, even if the person making it has been repeatedly wrong? Here it’s helpful to distinguish between two forms of the lab-leak theory: the malevolent and the accidental. The malevolent version holds that China deliberately released the virus. I know of no credible scientists who embrace that idea, and it strikes me as unlikely because politicians with even the most meager understanding of pandemics would realize that any deliberately released virus would affect China as much as or more than the countries to which they hoped to spread it.
The accidental version holds that the virus got out by mistake. Here things get trickier but more plausible. Even institutions that take great safety precautions still sometimes fail. Just think about the nuclear power industry, where serious accidents have occurred in Japan, the Soviet Union, the U.K., the U.S., Canada, France, Belgium, Sweden and Argentina, and minor or moderate accidents have occurred in most countries where nuclear power is used. Or consider railroads, where major accidents still occur every year; the deadliest accident in U.S. railroad history occurred in Tennessee in 1918, almost 100 years after the industry got started.
The late Yale University sociologist Charles Perrow developed the theory of “normal accidents” to explain this phenomenon. People are human. We all make mistakes. Fortunately, our mistakes are often minor and can be easily corrected. But in complex technological systems, small mistakes may rapidly ramify and compound into large problems. When people don’t know how to fix their mistakes—and are perhaps embarrassed or ashamed—they may try to cover them up, impeding the ability of those around them to fix the problem, too.
It’s not hard to imagine that a COVID researcher made a small mistake and tried to hide it, and things then spiraled out of control. That doesn’t mean this is what happened, but it does mean we should keep an open mind until we know more. The lab-leak theory is plausible, and it is rational for scientific institutions to investigate it closely, even if some of the people promoting the claim are irrational.
Life is short, research is expensive and not every theory is worth pursuing. But when the stakes are high, it generally behooves scientists to look closely at any idea that has not yet been properly evaluated. If there is credible evidence that the SARS-CoV-2 virus may have escaped from a lab—in China or anywhere else—that evidence should be evaluated, even if we first heard the message from an untrustworthy messenger.