
‘Wildly off-base’: how did Australia get its coronavirus modelling so wrong?

When Covid-19 shifted from a disease in returned travellers to a virus that was spreading throughout the Australian community, dire predictions of deaths proliferated on social media and in the news.

In March, the deputy chief medical officer, Prof Paul Kelly, said modelling revealed the federal government was preparing for 50,000 deaths in a best-case scenario and 150,000 deaths in the worst-case scenario. In the same month, Prof Raina MacIntyre, the head of the biosecurity program at the University of New South Wales’s Kirby Institute, warned of a “flu season but on steroids”, and said “hundreds of thousands” of Australians may die under a worst-case scenario. Health workers in NSW were told to prepare for 8,000 deaths over the duration of the epidemic, and that the “first wave” of the virus could last for up to 22 weeks.

As of mid-June, 102 deaths had been recorded in Australia, with about 400 active cases in the community. So what does this mean for the reliability of models? And were the experts and media wrong to make and report on such drastic predictions?

According to the epidemiologist Gideon Meyerowitz-Katz, “modelling cannot ever capture the truth, because that is not its purpose”.

“We don’t categorise models into ‘right’ or ‘wrong’, because it’s a waste of time – they’re all going to be wrong to some degree,” he says. “We can never fully realise the complexity of human experiences with even the most complex maths, because our inputs are confined to the things we know. Imagine trying to account for every single potential transmission of Covid-19, from the casual contact of two people on public transport to the lengthy exposure in a movie theatre. Even the best, most sophisticated models only take the first steps in the tangled web of interconnectivity that we call society.”

He says the benefit of models, however, is that they provide information on a wide range of scenarios, which is why governments and health officials rely on dozens of models, never just one. What the public should take away from the Covid-19 predictions is that they should distrust anyone who is certain of the future based on a model. “If a prediction is firm, solid and confident, it’s probably so wildly off-base that you can ignore it entirely,” Meyerowitz-Katz says.

“Some models may have been misleading, and some were clearly hilariously wrong, but others have provided enormous value in the fight against this virus.”

However, commentators have used Covid-19 modelling that predicted scenarios that never eventuated to suggest that models for other areas of science, including climate change, should not be trusted. Meyerowitz-Katz says such commentary revealed a fundamental misunderstanding about modelling and its uses.

“Models aren’t meant to predict the truth, they allow us to look at what might be,” he says. “The point with climate models is not that they are showing a specific increase, it’s that no matter what the inputs the outcome is disastrous long term. So there’s probably no one model that captures the exact changes that we’ll eventually see, but overall we can say with a lot of confidence that the evidence points to a worrying outcome.”

Similarly, the early Covid-19 models that looked at worst-case scenarios were wrong, but they allowed us to see what might happen if we did nothing at all to control the outbreak.
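The logic of those "do nothing" scenarios can be illustrated with a toy epidemic model. The sketch below is a minimal SIR (susceptible–infected–recovered) simulation; the parameter values are illustrative assumptions for this example, not the figures behind any of the models cited in this article.

```python
def sir_peak(beta, gamma=0.1, n=25_000_000, i0=100, days=365):
    """Run a basic SIR epidemic with daily time steps.

    beta  - average daily transmissions per infectious person
    gamma - daily recovery rate (1/gamma = infectious period in days)
    Returns the peak number of simultaneous infections and the
    total number ever infected. All values are illustrative.
    """
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / n  # mass-action mixing assumption
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, r

# An unmitigated scenario versus one where distancing pushes the
# effective reproduction number below 1 (beta < gamma).
peak_unmitigated, total_unmitigated = sir_peak(beta=0.3)
peak_mitigated, total_mitigated = sir_peak(beta=0.08)
```

Even this crude sketch shows the point the article's experts make: the same model produces radically different futures depending on its inputs, so the unmitigated projection describes what was averted, not what was predicted to occur regardless.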

The epidemiologist Dr Kathryn Snow, from the University of Melbourne’s Centre for International Child Health, agrees that models were useful – if they came from reputable sources. She says the beginning of the pandemic saw a proliferation of non-experts making back-of-the-envelope predictions that were shared on social media and even picked up by news outlets. This added to the confusion. Modelling is a specialised field, and few health experts are qualified to comment on it.

“Many of the most frightening predictions made early in the epidemic did not come from infectious diseases modellers themselves,” Snow says. “They didn’t account for the public health response, or the fact that a majority of our cases at that time were imported, rather than being due to local transmission. I think a lesson for the media is to distinguish clearly between evidence provided by experts in a given specialised field – in this case infectious disease epidemiology and mathematical modelling of infectious diseases – and commentary provided by people from outside that field, even if they are highly qualified experts in other topics.”

In March and April, there was still a lot that we didn’t know about Covid-19. Critically, we did not know how successful the public health strategy would be. Control efforts have thankfully proven extremely successful in Australia, and the catastrophe that is unfolding in the US and UK has so far been avoided.

“Nobody made a mistake in thinking that might happen here – it was a very real possibility, as we’ve seen elsewhere,” Snow says. “But we acted, everybody around the country played their part, and we avoided it. We’re one of the few countries in the world that has.”

Prof Hassan Vally, an infectious disease epidemiologist, says despite their limitations, models are incredibly useful to help governments plan, especially when there is limited information, such as when dealing with a new virus.

“It’s important to understand that the models projected what would have happened if we did not act decisively to limit the spread of Covid-19 in Australia, and so these give us an indication of what we averted,” he says. “In many ways the model gave us a peek into a possible future for us. This is one of those situations where if you have any doubt as to what we prevented, you only need to look as far as the US, Brazil, Italy or a number of other countries that have been severely affected to see what could have happened here.

“So this is a fantastic public health success story for us in Australia as well as New Zealand. Many lives have been saved in these countries thanks to strong leadership, government listening to experts, and people making sacrifices for the good of the broader community.”

However, Prof Annette Dobson, an expert in biostatistics at the University of Queensland, says “an awful lot of rubbish was published, and not just in the news”. Medical journals have also been guilty of publishing predictions made by clinicians and researchers who had little statistical experience, she says.

“If you had statisticians looking at the data before it was published they could have picked up the flaws straight away,” she says. “The context of the data should have been checked with statisticians, and in the rush to publish, unfortunately, some things were published that were just wrong.”

On the plus side, she says, society is beginning to understand and appreciate the work of epidemiologists and biostatisticians. “People are beginning to understand what we do and why data and numbers are important, and what they’re useful for,” Dobson says.

Prof Peter Collignon, an infectious diseases physician, says “the bright side” of attention being focused on the worst-case predictions over more modest ones was that people were compelled to comply with public health measures such as social distancing and hand-washing.

“We did need behaviour change,” he says. “The worry is if you overdo the predictions at the beginning people may eventually say ‘this is garbage, I won’t do anything’. This would be a worry. This virus is still here, even though numbers are low, and there will be many disruptive things we need to continue for a while to keep it at bay.”
