Category Archives: Covid

Boris Johnson’s massive maths mistake over Covid deaths is an embarrassment

This post is adapted from my Indy Voices article of the same title originally published on 02/03/23

Yesterday many of us woke up to read headlines about Matt Hancock having ignored scientific advice about protecting care homes in the early stages of the pandemic (though a spokesperson for the former health secretary has since said that the reports are “flat wrong”, and that the interpretation of the messages’ contents is categorically untrue).

The story arose from a cache of over 100,000 WhatsApp messages that had been leaked to the Daily Telegraph.

Buried in WhatsApp conversations between then-prime minister Boris Johnson, his scientific advisers and Dominic Cummings is an exchange which is arguably even more worrying than this headline-grabbing story.

On 26 August 2020, Johnson asked the group:

“What is the mortality rate of Covid? I have just read somewhere that it has fallen to 0.04 per cent from 0.1 per cent.”

He went on to calculate that, with this “mortality rate”, if everyone in the UK were to be infected this would lead to only 33,000 deaths. He suggested that since the UK had already suffered 41,000 deaths at that point, this might be why the death rate was coming down – because “Covid is starting to run out of potential victims”.

In fact, death rates were still coming down as a result of the earlier fall in the number of cases brought about by lockdown. Already though, by the time this conversation took place cases were rising again in the early stages of what would become a catastrophic second wave.

Based on his faulty maths, Johnson questioned: “How can we possibly justify the continuing paralysis to control a disease that has a death rate of one in 2,000?” He was suggesting that anti-Covid mitigations could be relaxed at perhaps the worst possible time. His whole argument was based on two fundamental misunderstandings.

His first mistake was a mathematical one. Johnson had seen the figure 0.04 in the Financial Times and interpreted it as a percentage. In fact it was a fraction – the number of people who were dying of Covid-19 divided by the number of people testing positive. This is known as the case fatality ratio (CFR).

At 0.04 (or 4 in 100), the CFR calculated by the Financial Times was 100 times larger than Johnson had suggested – it was actually four per cent, not 0.04 per cent as he believed.

The chief scientific adviser Patrick Vallance patiently explained this to Johnson: “It seems that the FT figure is 0.04 (ie four per cent, not 0.04 per cent)”. Johnson replied “Eh? So what is 0.04 if it is not a percentage?” at which point Dominic Cummings had to jump in and break it down into even simpler terms. Even then the messages show no acknowledgement from Johnson that he had understood.
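To spell out the mix-up in concrete terms, here is a minimal sketch of the arithmetic, assuming only what Vallance explained: that the FT’s 0.04 is a raw fraction (deaths divided by positive tests), not a percentage.

```python
# Illustrative only: the fraction-vs-percentage mix-up, spelled out.
ft_figure = 0.04                      # the FT's number, a raw fraction: deaths / positive tests

cfr_as_percentage = ft_figure * 100   # read correctly, 0.04 corresponds to 4 per cent
johnsons_reading = 0.04               # Johnson read "0.04" as if it were already a percentage

print(f"Correct reading:   {cfr_as_percentage} per cent")    # 4.0 per cent
print(f"Johnson's reading: {johnsons_reading} per cent")     # 0.04 per cent
print(f"Factor of error:   {cfr_as_percentage / johnsons_reading:.0f}x")  # 100x
```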

The other mistake that Johnson made in his calculation was to confuse the case fatality ratio with the infection fatality ratio (IFR). The IFR is the number of people who die from Covid-19 as a proportion of those who get infected.

Though they may sound similar, there is a big difference between the CFR and the IFR. In the CFR we divide the number of Covid deaths by the number of people who test positive. However, in the IFR we divide by the number of infected people.

Early on in the pandemic, when testing was not readily available, the number of people who tested positive was much lower than the number of people who were actually infected with the disease. Because of this, the CFR overestimated the IFR.
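A purely hypothetical example (the numbers below are invented, chosen only to echo the 4 per cent and 1 per cent figures discussed in this piece) shows how limited testing pushes the CFR above the IFR:

```python
# Hypothetical numbers for illustration only.
deaths = 1_000             # Covid deaths over some period
positive_tests = 25_000    # people who tested positive
true_infections = 100_000  # actual infections, most of them untested early in the pandemic

cfr = deaths / positive_tests    # case fatality ratio: divides by the smaller number
ifr = deaths / true_infections   # infection fatality ratio: divides by the larger number

print(f"CFR: {cfr:.1%}")  # 4.0%
print(f"IFR: {ifr:.1%}")  # 1.0%
```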

By mixing up percentages and proportions, Johnson underestimated what the figure should have been by a factor of 100. If he had read the CFR correctly he would have come to a very different conclusion – that over 3 million people in the UK would die.

In reality, to do this calculation you need the IFR, not the CFR. With a 1 per cent IFR (closer to the true figure), the correct version of Johnson’s simplistic calculation would suggest that 660,000 people might have died in the UK if everyone became infected – 20 times more than Johnson’s mistaken numbers suggested.
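Putting the corrections together, here is a rough sketch of the back-of-the-envelope calculation, assuming a UK population of about 66 million and an IFR of roughly 1 per cent:

```python
# Rough, illustrative version of the back-of-the-envelope calculation.
uk_population = 66_000_000   # assumed: roughly the UK population in 2020
ifr = 0.01                   # assumed: ~1 per cent, closer to the true infection fatality ratio

deaths_if_everyone_infected = uk_population * ifr
print(f"{deaths_if_everyone_infected:,.0f} deaths")                         # 660,000 deaths

johnsons_figure = 33_000     # the figure from the WhatsApp exchange
print(f"{deaths_if_everyone_infected / johnsons_figure:.0f} times higher")  # 20 times higher
```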

It is almost unimaginable that the leader of the United Kingdom could allow his thinking to be informed by calculations which contained such rudimentary errors. Mistaking the CFR and the IFR would perhaps have been understandable in the early stages of the pandemic, but this conversation took place long after the first wave had subsided.

To have made this mistake over six months into the UK’s pandemic response is indicative of a leader who has failed to fully engage with even the most basic science required to make important decisions surrounding the pandemic. It perhaps explains Johnson’s reluctance to institute the stronger mitigations in the autumn of 2020 that his own scientific advisers were calling for.

Even less forgivable is his mathematical mistake, which is indicative of his failure to engage in scientific thinking more generally. At a time when other countries’ leaders were going on national television and defining important epidemiological concepts for the masses, we endured a prime minister who was making basic mathematical errors to which most 11-year-olds would not succumb.

When it came to scientific literacy – such a crucial currency in the response to the pandemic – this incident suggests we suffered under the worst possible leader at the worst possible time.

Here’s what we can do to prepare for the next pandemic

This post is adapted from my Indy Voices article of the same title originally published on 23/06/23

Phase one of the Covid inquiry has begun in earnest. Its remit is pandemic preparedness.

Nominally the guiding ethos of the inquiry is not to lay the blame at anyone’s door, but to learn lessons from the mistakes we made in preparing for the Covid pandemic so that we are better prepared for the next one.

The inquiry shares this premise with my book How to Expect the Unexpected. My focus in the book is to highlight the mistakes people have made when making predictions in the past, to identify the root causes of these failures and to suggest strategies that mean they are not revisited in the future or, better still, are rendered completely unrepeatable.

When it comes to thinking about the future, every plan we make represents a wager against the world’s uncertainties. Preparation is no different. The degree to which we prepare reflects a trade-off: how much we are willing to sacrifice now to hedge our bets against the vagaries of the future.

In thinking about pandemic preparedness, we can draw lessons from other areas of disaster emergency planning. In the UK, for example, we don’t routinely prepare for earthquakes because the chances of experiencing large-magnitude earthquakes in this country are extremely low. In contrast, Japan routinely spends upwards of three per cent of its annual budget on disaster risk management. Since strengthening disaster preparedness began in Japan in the late 1950s, average annual deaths have been reduced from the thousands to the low hundreds.

In How to Expect the Unexpected, I argue that although it is impossible to predict exactly when the next earthquake will hit, seismologists are confidently able to predict the frequencies of earthquakes in shock-prone areas and we are able to use that knowledge to help us to prepare and prioritise resources.

Public health scientists had been telling the same sorts of stories as the seismologists prior to the pandemic. Perhaps because almost no one alive in 2020 had experienced a global health emergency on the same scale as the Covid pandemic, many of their warnings went unheeded.

In his inquiry witness statement, Richard Hughes, the chair of the Office for Budget Responsibility, summed the situation up as follows: “While it may be difficult to predict when catastrophic risks will materialise, it is possible to anticipate their broad effects if they do. The risk of a global pandemic was at the top of government risk registers for a decade before coronavirus arrived, but attracted relatively little (and in hindsight far too little) attention …”

It’s clear from the evidence the inquiry has heard thus far that, in the period leading up to the appearance of Covid, attention was diverted from pandemic preparedness and towards Brexit. Emma Reed, the civil servant who took over responsibility for preparedness at the Department for Health and Social Care in 2018, said in her evidence that preparing for a no-deal Brexit took precedence over ensuring adult social and community care were bolstered. For a whole year between November 2018 and November 2019, the cross-government Pandemic Flu Readiness Board did not meet once, sidelined by Brexit preparations.

The inquiry has also heard that only eight of the 22 recommendations of the 2016 pandemic preparedness exercise – Cygnus – were fully implemented by the time the pandemic hit. Professor Dame Sally Davies – the chief medical officer (CMO) during Cygnus – specifically drew attention to a shortage of medical ventilators at the time. Despite this, six weeks after the UK’s first Covid cases Matt Hancock was pleading with British manufacturers, “If you produce a ventilator, we will buy it. No number is too high.” Despite David Cameron and George Osborne’s protestations that the austerity programme they implemented during their tenures did not leave the country ill-prepared ahead of the pandemic, others have presented evidence to the contrary.

Dame Sally Davies argued that the UK had “disinvested” in public health infrastructure, which directly affected public health resilience and left the UK “at the bottom of the table on the number of doctors, number of nurses, number of beds, number of IT units, number of ventilators”. Dame Jenny Harries, head of the UK Health Security Agency, testified that public health budgets were reduced as a result of austerity, leaving public health protection services “denuded”.

Although the warnings were there, it’s clear that they were not heeded. Officials chose to distribute resources to other projects. Choosing not to prepare for a given eventuality is an implicit prediction about the future. Failing to stockpile personal protective equipment and failing to build health service capacity are the actions of a country implicitly betting against a pandemic.

Fathoming the future is not just about the “positive” predictions we explicitly formulate, but also the “negative” predictions we don’t. The latter, by their absence, are often harder to spot, but as we have seen with the UK’s pandemic preparations, their failure can be equally damaging.

Why mathematicians sometimes get Covid projections wrong

This post is adapted from my Guardian article of the same title originally published on 26/01/22

Modelling may not be a crystal ball, but it remains the best tool we have to predict the future

Official modelling efforts have been subjected to barrages of criticism throughout the pandemic, from across the political spectrum. No doubt some of that criticism has appeared justified – the result of highly publicised projections that never came to pass. In July 2021, for instance, the newly installed health secretary, Sajid Javid, warned that cases could soon rise above 100,000 a day. His figure was based on modelling from the Scientific Pandemic Influenza Group on Modelling, known as SPI-M.

One influential SPI-M member, Prof Neil Ferguson, went further and suggested that, following the “freedom day” relaxation of restrictions on 19 July, the 100,000 figure was “almost inevitable” and that 200,000 cases a day was possible. Cases topped out at an average of about 50,000 a day just before “freedom day”, before falling and plateauing between 25,000 and 45,000 for the next four months.

It is incredibly easy to criticise a projection that didn’t come true. It’s harder, however, to identify which assumptions made the projection wrong. And, of course, it’s harder still to do that before the projection has been shown to be incorrect. But that is what we ask our modellers to do, and we are quick to complain when their projections do not match reality. Much of the criticism they have received, however, has been misplaced, born out of fundamental misunderstandings of the purpose of mathematical modelling, what it is capable of – and how its results should be interpreted.

Mathematical models are predicated on assumptions – how effective the vaccine is, how severe a variant is, what the impact of imposing or lifting whole rafts of mitigations will be. In trying to put a number on even these few unknowns, let alone the tens or even hundreds of others needed to represent reality, modellers are often searching in the dark with weak torches. That is why broad ranges of scenarios are modelled, and why strict caveats about the uncertainty in the potential outcomes typically accompany modelling reports.
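To illustrate how heavily a projection leans on its assumptions, here is a toy sketch – an elementary SIR-type model with entirely made-up parameters, not any model actually used by SPI-M – in which changing a single assumption, the reproduction number, produces very different epidemic peaks:

```python
# A minimal, purely illustrative SIR-type model (made-up parameters, not an official model).

def peak_infections(r0, days=200, population=1_000_000, initially_infected=100,
                    infectious_period=5.0):
    """Run a simple discrete-time SIR model and return the peak number infected at once."""
    s = population - initially_infected   # susceptible
    i = initially_infected                # currently infected
    recovery_rate = 1.0 / infectious_period
    transmission_rate = r0 * recovery_rate
    peak = i
    for _ in range(days):
        new_infections = transmission_rate * s * i / population
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        peak = max(peak, i)
    return peak

# Three scenarios that differ only in the assumed reproduction number.
for r0 in (1.2, 1.5, 2.0):
    print(f"R0 = {r0}: peak of roughly {peak_infections(r0):,.0f} people infected at once")
```

(The exact numbers here mean nothing; the point is only how far apart the three scenarios end up.)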

Mathematicians will be the first to tell you that the outputs of their models are “projections” predicated on their assumptions, not “predictions” to be viewed with certainty. To be fair to him, when Ferguson suggested the figure of 200,000 cases a day, he placed it in the context of the substantial uncertainty surrounding the projection. “And that’s where the crystal ball starts to fail,” he said, “… it’s much less certain.”

Unfortunately, such caveats often get lost when modelling is simplified and turned into attention-grabbing headlines. One accusation levelled at UK modelling is that projections are often presented in the media with insufficient accompanying context. While it isn’t always reasonable to expect modellers who are working flat out to find time to do media rounds, the resulting communication vacuum can leave results open to misinterpretation or exploitation by bad-faith actors.

Critics of modelling also fail to acknowledge that highly publicised projections can become self-defeating prophecies. Top of the list of the Spectator’s “The ten worst Covid data failures” in the autumn of 2020 was “Overstating of the number of people who are going to die”. The article referred to the fact that Imperial College modellers’ infamous projection – that the UK would see 250,000 deaths in the absence of tighter measures – never came to pass. The Imperial model is widely credited with causing people to change their behaviour and with eventually ushering in the first UK lockdown a week later, thus averting its own alarming projections. Given that the UK has already passed 175,000 Covid deaths, it isn’t hard to imagine that upwards of 250,000 could have died as the result of an unmitigated epidemic wave.

There have been scenarios in which modellers have taken missteps. Modellers often attempt to answer questions about subjects on which they are not experts. They need to collaborate closely with individuals and organisations who have relevant expertise. When considering care homes in the first wave of the pandemic, for instance, a number of salient risk factors – including the role of agency staff covering multiple care homes – were known to industry practitioners but were not anticipated by the mathematicians. These considerations meant that recommendations based on the modelling may have been unsound. There were more than 27,000 excess deaths in care homes during the first wave of the pandemic in England and Wales.

Data sharing between modelling groups has also been identified as an area that needs improvement. Early on in the pandemic, unequal access to data and poor communication were implicated in modelling results that suggested the UK’s epidemic trajectory was further behind Italy than it was, possibly contributing to a delay in our first lockdown. In these respects the pandemic has been a very public learning process for mathematicians.

Every time someone interprets data – from professional mathematicians and politicians to the general public – they are using a model, whether they acknowledge it or not. The difference is that good modellers are upfront about the assumptions that influence their outcomes. If you don’t agree with the underlying assumptions then you should feel free to take issue with the projections – but dismissing conclusions because they don’t fit a worldview is naive, at best.

Despite these reservations, modelling remains the best tool we have to predict the future. It provides a framework to formalise our assumptions about the scenarios we are trying to represent and to suggest what might happen under different policy options. It is always a better option than relying on the gut feelings, “common sense” or plain old wishful thinking that would replace it.