Whether we are constantly checking the number of new infections, tracking the progress of vaccine trials or “anxiety scrolling” through Twitter, the news surrounding the COVID-19 pandemic can be overwhelming. Sorting the good information from the bad and putting each day’s developments into context are not easy.
Carl Bergstrom, a professor of biology at the University of Washington, is an expert on how information flows in science and society. He and his University of Washington colleague Jevin West teach a course on data reasoning in the digital world (its materials are available online). They have also written a book based on the course, Calling Bullshit: The Art of Skepticism in a Data-Driven World, which is set to be published this Tuesday. Bergstrom has monitored the pandemic closely, sharing frequent updates on Twitter and countering disinformation. Scientific American spoke with him about his tool kit for navigating the daily deluge of news about the novel coronavirus, from finding reliable sources to interpreting reporting about preprint research.
[An edited transcript of the interview follows.]
What tips do you have for dealing with the overwhelming amount of coronavirus information and engaging with it in the healthiest way possible?
High-quality information doesn’t have a whole lot to do with timeliness on the scale of minutes to hours. It has everything to do with how well that information has been vetted and triangulated and presented. What I encourage people to do in a crisis like this one is to slow down and [read] a newspaper story that was posted 12 hours ago—or 18 hours ago or 36 hours ago—that was written by a professional reporter who’s been covering infectious disease for years [and] who has talked to a bunch of experts to synthesize what’s going on and interpret things and put [them] into context.
I encourage people to turn to their trusted traditional media sources rather than turning to Twitter or Facebook or WhatsApp, because when you do that, you do get information that’s a little bit more recent, but the quality of that information is far, far lower. You’re very susceptible to whatever rumors go spreading out across the Internet, and that can be a big problem.
How would you recommend finding those better news sources?
For me, it’s all about individual reporters. For example, I think that Helen Branswell does the best reporting of anybody around COVID-19. She’s been doing infectious diseases for the past 20-some years. She understands the entire picture and does a brilliant job of presenting it. I think it’s a matter of finding those voices that you trust and then relying on those voices.
Scientists’ understanding of coronavirus is constantly changing. Frequently, things that seemed true a few months ago are now known to be false. In this situation, how can we tell if something is good information or misinformation?
The first thing to recognize [is that] because the science changes, the advice that you get from health professionals changes over time as well. You’ll see people saying, “Well, you can’t trust [National Institute of Allergy and Infectious Diseases director Anthony] Fauci, because he was saying one thing in February, and he’s saying something else in July.” This is completely backwards. The people you can’t trust are the ones who have not changed their views and advice, despite having enormously more evidence. The ones who are changing their views and advice, based on evidence, are the ones who are doing science and the ones who are giving good recommendations.
In terms of how you actually sort out misinformation, it’s important to look at the sources of the information. Maybe someone tweets that there’s this paper out, and that links to a newspaper story. Well, go back to the newspaper story. And then the newspaper story might link to the original paper. Go back to the original paper. Triangulating is another really important thing. If there’s a claim that’s out there, make sure that that claim is being made by multiple venues—and [that it is] not only tweeted by multiple accounts but is actually coming from different people. If something seems too good or too bad to be true, it probably is.
In Calling Bullshit, you talk about ways to spot when true data are being used misleadingly. Can you give an example of a tool described in the book that helped you identify such misleading information in COVID-19 news?
Selection bias happens when you sample from some population, and then you draw conclusions about a different population, and the sample that you looked at isn’t really representative of the population that you’re drawing conclusions about. Early on in the pandemic, there were a couple of doctors from Bakersfield, [Calif.], that were trying to estimate the prevalence of the disease in California. They looked at the fraction of the patients coming to their urgent care clinics who had the coronavirus, and they found that this fraction was fairly high. And then they [essentially] said, “Okay, well, that gives us an estimate of the fraction in California that have the coronavirus.” They just assumed that [the prevalence for] all of California [could be extrapolated from] the people coming to their clinics. But, of course, this is a completely unreasonable assumption in the middle of a pandemic. If you’re the clinic in town that has the tests, a large fraction of the people coming into your clinic believe that they have coronavirus. Otherwise they would not be coming in.
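[To illustrate the kind of selection bias Bergstrom describes, here is a minimal simulation with entirely made-up numbers; the prevalence, population size and visit probabilities are illustrative assumptions, not figures from the Bakersfield clinics or from California. It shows how measuring prevalence among people who come to a testing clinic can wildly overstate prevalence in the general population.]

```python
import random

random.seed(0)

# Assumed (hypothetical) parameters:
TRUE_PREVALENCE = 0.02        # 2% of the general population is infected
POPULATION = 1_000_000
P_VISIT_IF_INFECTED = 0.30    # infected people are far more likely to seek a test
P_VISIT_IF_HEALTHY = 0.01

visitors = 0
infected_visitors = 0
for _ in range(POPULATION):
    infected = random.random() < TRUE_PREVALENCE
    p_visit = P_VISIT_IF_INFECTED if infected else P_VISIT_IF_HEALTHY
    if random.random() < p_visit:
        visitors += 1
        infected_visitors += infected

clinic_estimate = infected_visitors / visitors
print(f"True prevalence in the population: {TRUE_PREVALENCE:.1%}")
print(f"Prevalence among clinic visitors:  {clinic_estimate:.1%}")
# With these assumptions the clinic sample suggests roughly 38% prevalence,
# an order of magnitude above the true 2%, because the people who walk in
# are not a representative sample of the wider population.
```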
That assessment makes a lot of sense. But it’s easy to forget about such details if you’re just reading the headline and moving on.
Yeah, definitely. I think there’s a lot of motivated reasoning as well. One thing we really stress [in the book] is to try to avoid confirmation bias. Be just as skeptical of ideas that confirm your beliefs and desires as those that challenge your beliefs and desires. That’s a very hard thing to do. I fall into that trap, and I’m constantly challenging myself to do a better job of avoiding confirmation bias. But it is something we’re all susceptible to.
With the novel coronavirus, there has also been a lot of reporting on preprint papers. These are studies that have not yet been peer-reviewed but have been made publicly available online. What is the best way to interpret news reports about preprints?
There’s not a complete difference of kind between a paper that’s been peer-reviewed and a paper that’s in a preprint archive, though the peer-reviewed ones have a higher probability of being both interesting and correct. You have to look at [preprints] as an earlier view of the scientific conversation than you’re usually getting. A lot of the discussion that would typically go on in the academic community, [which] would not necessarily be accessible to the public, all got shifted onto Twitter and PubPeer and other online sites. For people who want to track the science and see how science is working, it’s really quite an exciting opportunity. The downside is that it is easy to be misled by results that haven’t been properly vetted.
But I think the bigger danger is the fact that this entire pandemic has been so politicized that when a result is posted in a journal or on a preprint server, that result falls on the end of some spectrum. As soon as the paper comes out, whatever side that paper supports picks that paper up and uses it to beat the other side with. Both sides often are selectively cherry-picking from the results that favor them. It comes back, again, to finding these trusted sources. You want to find sources that are not trying to promote one particular political narrative around the disease.