The hard lessons of modeling the COVID-19 pandemic
For a few months last year, Nigel Goldenfeld and Sergei Maslov, a pair of physicists at the University of Illinois, Urbana-Champaign, were unlikely celebrities in their state’s COVID-19 pandemic response — that is, until everything went wrong.
Their typical areas of research include building models for condensed matter physics, viral evolution and population dynamics, not epidemiology or public health. But like many scientists from diverse fields, they joined the COVID-19 modeling effort in March, when the response in the United States was a whirlwind of “all hands on deck” activity in the absence of any real national direction or established testing protocols. From the country’s leaders down to local officials, everyone was clamoring for models that could help them cope with the erupting pandemic — hoping for something that could tell them precisely how bad it would get, and how quickly, and what they should do to head off disaster.
In those early months of 2020, Goldenfeld and Maslov were showered with positive press coverage for their COVID-19 modeling work. Their models helped prompt their university to shut its campus down quickly in the spring and transition to online-only education. Shortly thereafter, theirs was one of several modeling groups recruited to report their results to the office of the governor of Illinois.
So when Goldenfeld, Maslov and the rest of their research team built a new model to guide their university’s reopening process, confidence in it ran high. They had accounted for various ways students might interact — studying, eating, relaxing, partying and so on — in assorted locations. They had estimated how well on-campus testing and isolation services might work. They had considered what percentage of the student body might never show symptoms while spreading the virus. For all those factors and more, they had cushioned against wide ranges of potential uncertainties and hypothetical scenarios. They had even built in an additional layer of detail that most school reopening models did not have, by representing the physics of aerosol spread: how many virus particles a student might emit when talking through a mask in a classroom, or when drinking and yelling above the music at a crowded bar.
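The article doesn't spell out the model's equations, but a standard way to represent activity-dependent aerosol risk is a Wells-Riley-style estimate, in which the chance of infection grows with the dose of airborne virus a person inhales in a shared room. The minimal sketch below illustrates the idea under stated assumptions; the emission rates, room sizes, and ventilation values are hypothetical stand-ins, not the Illinois group's actual parameters.

```python
import math

# Hypothetical per-hour "quanta" emission rates by activity. The article doesn't
# publish the model's aerosol parameters, so these values are illustrative only.
EMISSION_QUANTA_PER_HOUR = {
    "masked_classroom_talking": 2.0,   # quiet speech through a mask
    "crowded_bar_shouting": 60.0,      # loud, unmasked vocalization
}

def infection_probability(quanta_per_hour, hours, room_volume_m3,
                          air_changes_per_hour, breathing_m3_per_hour=0.5):
    """Wells-Riley-style estimate for a susceptible person sharing a
    well-mixed room with one infectious person."""
    # Steady-state airborne concentration: emission rate / ventilation rate.
    concentration = quanta_per_hour / (room_volume_m3 * air_changes_per_hour)
    inhaled_dose = concentration * breathing_m3_per_hour * hours
    return 1 - math.exp(-inhaled_dose)

# One hour in a ventilated lecture hall vs. one hour in a crowded bar:
print(infection_probability(EMISSION_QUANTA_PER_HOUR["masked_classroom_talking"],
                            hours=1, room_volume_m3=300, air_changes_per_hour=4))
print(infection_probability(EMISSION_QUANTA_PER_HOUR["crowded_bar_shouting"],
                            hours=1, room_volume_m3=150, air_changes_per_hour=2))
```

With these made-up numbers, an hour of shouting in a small, poorly ventilated bar carries a risk roughly a hundred times that of an hour of masked classroom talk, which is the kind of contrast the extra aerosol layer was meant to capture.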
Following the model’s guidance, the University of Illinois formulated a plan. It would test all its students for the coronavirus twice a week, require the use of masks, and implement other logistical considerations and controls, including an effective contact-tracing system and an exposure-notification phone app. The math suggested that this combination of policies would be sufficient to allow in-person instruction to resume without touching off exponential spread of the virus.
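The logic of that conclusion can be caricatured with back-of-envelope arithmetic: frequent testing shortens the window during which an infected student unknowingly spreads the virus, and masking scales down transmission during that window. The sketch below uses hypothetical parameters, not the university's actual inputs, to show how twice-weekly testing can pull the effective reproduction number below 1.

```python
# A minimal sketch of the arithmetic behind such a plan. All parameters
# are hypothetical stand-ins, not the university's actual model inputs.

R0 = 3.0                  # assumed reproduction number with no interventions
infectious_days = 7.0     # assumed length of the infectious period
test_interval_days = 3.5  # testing twice a week
turnaround_days = 1.0     # assumed delay from swab to isolation
masking_factor = 0.5      # assumed transmission reduction from universal masks

# On average, a new case circulates for half a test interval plus the result
# turnaround before isolation, so only that slice of its infectious period
# generates secondary cases.
days_circulating = min(test_interval_days / 2 + turnaround_days, infectious_days)
R_eff = R0 * masking_factor * (days_circulating / infectious_days)
print(f"R_eff ~ {R_eff:.2f}")  # below 1 means no exponential growth
```

Under these assumptions the effective reproduction number comes out near 0.6, comfortably below the threshold of 1, which is the sense in which "the math suggested" the plan would hold.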
But on September 3, just one week into its fall semester, the university faced a bleak reality. Nearly 800 of its students had tested positive for the coronavirus — more than the model had projected by Thanksgiving. Administrators had to issue an immediate campus-wide suspension of nonessential activities.
What had gone wrong? The scientists had seemingly included so much room for error, so many contingencies for how students might behave. “What we didn’t anticipate was that they would break the law,” Goldenfeld said — that some students, even after testing positive and being told to quarantine, would attend parties anyway. This turned out to be critical: Given how COVID-19 spreads, even if only a few students went against the rules, the infection rate could explode.
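A toy branching-process calculation, with hypothetical numbers rather than the Illinois group's, shows why a small rule-breaking minority is enough: if a compliant student infects well under one other person on average but a partying student infects many, even 95% compliance can leave the population-level reproduction number above 1.

```python
import random

# Toy branching-process sketch of why a small non-compliant minority matters.
# The reproduction numbers and the 5% non-compliance rate are hypothetical.
R_ISOLATING = 0.6       # secondary cases per student who quarantines on schedule
R_PARTYING = 12.0       # per infectious student who attends parties anyway
NONCOMPLIANT_FRACTION = 0.05

mean_R = ((1 - NONCOMPLIANT_FRACTION) * R_ISOLATING
          + NONCOMPLIANT_FRACTION * R_PARTYING)
print(f"population-level R ~ {mean_R:.2f}")  # 1.17: growth despite 95% compliance

# A few generations of spread from 10 seed cases:
cases = 10
for generation in range(1, 6):
    new_cases = 0
    for _ in range(cases):
        r = R_PARTYING if random.random() < NONCOMPLIANT_FRACTION else R_ISOLATING
        new_cases += int(r) + (random.random() < r % 1)  # stochastic rounding
    print(f"generation {generation}: {new_cases} cases")
    cases = new_cases
```

The point of the sketch is the averaging step: a handful of high-transmission individuals can dominate the mean, so an epidemic that is subcritical for 95% of the population still grows.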
Critics were quick to attack Goldenfeld and Maslov, accusing them of arrogance and failing to stay in their lane — “a consensus [that] was very hard to beat, even when the people who are recognized experts chimed in in our defense,” Maslov said.
Meanwhile, the University of Illinois was far from unique. Many colleges across the country were forced to reckon with similar divergences between what their own models said might happen and what actually did happen, divergences later attributed to a wide assortment of causes.
Such events have underscored a harsh reality: As incredibly useful and important as epidemiological models are, they're imperfect tools, sensitive to the data they employ and the assumptions they're built on. And because of that sensitivity, their intended aims and uses often get misunderstood.