
How do we know health screening programmes work?

This post is adapted from my Conversation article of the same title originally published on 30/07/23

The UK is set to roll out a national lung cancer screening programme for people aged 55 to 74 with a history of smoking. The idea is to catch lung cancer at an early stage when it is more treatable.

Quoting NHS England data, the health secretary, Steve Barclay, said that if lung cancer is caught at an early stage, “patients are nearly 20 times more likely to get at least another five years to spend with their families”.

Five-year survival rates are often quoted as key measures of cancer treatment success. Barclay’s figure is no doubt correct, but is it the right statistic to use to justify the screening programme?

Time-limited survival rates (typically quoted at five, ten and 20 years) can improve because cancers caught earlier are easier to treat, but also because patients identified at an earlier stage of the disease would live longer, with or without treatment, than those identified later. The latter effect is known as “lead-time bias”, and it can mean that statistics like five-year survival rates paint a misleading picture of how effective a screening programme really is.

Figure: Lead-time bias can appear to make a treatment more effective than it actually is – screening brings the diagnosis forward, so the perceived post-diagnosis survival time increases even though the course of the disease is unaffected. Kit Yates

My new book, How to Expect the Unexpected, tackles issues exactly like this one, in which subtleties of statistics can give a misleading impression, causing us to make incorrect inferences and hence bad decisions. We need to be aware of such nuance so we can identify it when it confronts us, and so we can begin to reason our way beyond it.

To illustrate the effect of lead-time bias more concretely, consider a scenario in which we are interested in “diagnosing” people with grey hair. Without a screening programme, greyness may not be spotted until enough grey hairs have sprouted to be visible without close inspection. With careful regular “screening”, greyness may be diagnosed within a few days of the first grey hairs appearing.

People who obsessively check for grey hairs (“screen” for them) will, on average, find them earlier in their life. This means, on average, they will live longer “post-diagnosis” than people who find their greyness later in life. They will also tend to have higher five-year survival rates.

But treatments for grey hair do nothing to extend life expectancy, so it clearly isn’t early treatment that is extending the post-diagnosis life of the screened patients. Rather, it’s simply the fact their condition was diagnosed earlier.

To give another, more serious example, Huntington’s disease is a genetic condition that doesn’t manifest itself symptomatically until around the age of 45. People with Huntington’s might go on to live until they are 65, giving them a post-diagnosis life expectancy of about 20 years.

However, Huntington’s is diagnosable through a simple genetic test. If everyone were screened for genetic diseases at the age of 20, say, then those with Huntington’s might expect to live another 45 years post-diagnosis. Despite this much longer post-diagnosis survival, the early diagnosis has done nothing to alter their overall life expectancy.
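To make the arithmetic concrete, here is a minimal Python sketch of my own, using only the illustrative Huntington’s figures above (not real patient data), showing how earlier diagnosis stretches the post-diagnosis survival time without moving the age at death by a single day.

```python
# A minimal sketch of lead-time bias, using the illustrative Huntington's
# figures from the text above (not real patient data).

AGE_AT_DEATH = 65                  # unchanged by when the diagnosis is made
AGE_AT_SYMPTOMATIC_DIAGNOSIS = 45  # diagnosed when symptoms first appear
AGE_AT_SCREENED_DIAGNOSIS = 20     # diagnosed by a genetic test at age 20

def post_diagnosis_survival(age_at_diagnosis, age_at_death=AGE_AT_DEATH):
    """Years lived after diagnosis - the quantity survival statistics measure."""
    return age_at_death - age_at_diagnosis

print(post_diagnosis_survival(AGE_AT_SYMPTOMATIC_DIAGNOSIS))  # 20 years
print(post_diagnosis_survival(AGE_AT_SCREENED_DIAGNOSIS))     # 45 years

# The post-diagnosis survival time has more than doubled, yet the age at
# death (and therefore the patient's actual life expectancy) has not
# changed at all. The extra 25 years are pure lead time.
```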

Overdiagnosis

Screening can also lead to the phenomenon of overdiagnosis.

Although more cancers are detected through screening, many of these cancers are so small or slow-growing that they would never be a threat to a patient’s health – causing no problems if left undetected. Still, the C-word induces such mortal fear in most people that many will, often on medical advice, undergo painful treatment or invasive surgery unnecessarily.

The detection of these non-threatening cancers also serves to improve post-diagnosis survival rates when, in fact, not finding them would have made no difference to the patients’ lives.

So, what statistics should we be using to measure the effectiveness of a screening programme? How can we demonstrate that screening programmes combined with treatment are genuinely effective at prolonging lives?

The answer is to look at mortality rates (the proportion of people who die from the disease) in a randomised controlled trial. For example, the National Lung Screening Trial (NLST) found that, in heavy smokers, screening with low-dose CT scans (and subsequent treatment) reduced deaths from lung cancer by 15% to 20% compared with screening using standard chest X-rays.

So, while screening for some diseases is effective, the reductions in deaths are typically small, because the chance of a person dying from any particular disease is itself small. Even the roughly 15% reduction in the relative risk of dying from lung cancer seen among the heavy smokers in the NLST translates into only a 0.3 percentage point reduction in absolute risk (1.8% of the screened group died of lung cancer, down from 2.1% of the control group).
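For anyone who wants to check the arithmetic, here is a short back-of-the-envelope calculation in Python using the rounded percentages quoted above (a sketch of my own, not the trial’s published analysis); it also spells out the rough number of people who would need to be screened to avert one lung cancer death, which follows directly from the same two figures.

```python
# Back-of-the-envelope check of the NLST figures quoted above
# (rounded percentages from the text, not the raw trial data).

risk_control = 0.021    # ~2.1% of the control group died of lung cancer
risk_screened = 0.018   # ~1.8% of the screened group died of lung cancer

absolute_reduction = risk_control - risk_screened       # 0.003, i.e. 0.3 percentage points
relative_reduction = absolute_reduction / risk_control  # ~0.14, i.e. roughly the quoted 15%
screens_per_death_averted = 1 / absolute_reduction      # ~333 people screened per death averted

print(f"Absolute risk reduction: {absolute_reduction:.1%}")
print(f"Relative risk reduction: {relative_reduction:.0%}")
print(f"People screened per lung cancer death averted: {screens_per_death_averted:.0f}")
```

A 15% relative reduction sounds far more impressive than a 0.3 percentage point absolute one, which is exactly why it pays to ask which of the two a headline figure is describing.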

For non-smokers, who are at lower risk of getting lung cancer, the drop in absolute risk may be even smaller, representing fewer lives saved. This explains why the UK lung cancer screening programme is targeting older people with a history of smoking – people who are at the highest risk of the disease – in order to achieve the greatest overall benefits. So, if you are or have ever been a smoker and are aged 55 to 74, please take advantage of the new screening programme – it could save your life.

But while there do seem to be some real advantages to lung cancer screening, describing the impact of screening using five-year survival rates, as the health secretary and his ministers have done, tends to exaggerate the benefits.

If we really want to understand the truth about what the future will hold for screened patients, then we need to be aware of potential sources of bias and remove them where we can.

The hidden dangers of two-party politics

This post is adapted from my Indy Voices article of the same title originally published on 26/01/22

With local elections upon us, many voters feel like they are faced with the same old tired choice between the two major political parties – Labour and the Conservatives.

Despite the existence of third-party candidates, election leaflets come through the door telling us “Lib Dems can’t win here” – effectively arguing the case for a straight fight between the two major parties in British politics. But, as we look forward to the general election next year, there are sound mathematical reasons why two parties battling it out – a duel, so to speak – is not good for democracy.

In a two-horse race, declines in popularity for one candidate are equivalent to gains in popularity for the other. If it is harder to boost one’s own image than it is to denigrate the other party, then the incentive is for the parties to batter each other with negative advertising, leaving the electorate to choose between a rock and a hard place. The introduction of a genuinely electable third party can change the campaigning dynamics from a straight duel to a “truel” – a battle between three parties.

Truels are a popular trope in the cinema, having been used to resolve plot issues in at least three Quentin Tarantino movies alone. Probably the best-known example, though, features in one of the most famous movie scenes of all time: towards the climax of The Good, the Bad and the Ugly, the three eponymous characters stand in a triangle on the perimeter of a circular plaza, each with hands hovering around their waists, ready to draw. I won’t spoil the ending.

As I explore in my new book, How to Expect the Unexpected, truels can have strange and unexpected outcomes when the players’ strengths differ markedly. The strongest candidates may tend to focus their fire on each other, each seeing the other as the greatest threat, sometimes leaving the weakest candidate with the best chance of winning.
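To give a rough flavour of the effect, here is a toy Monte Carlo simulation. The accuracies are invented, and the rule that everyone fires simultaneously at the most accurate surviving opponent is an assumption of this sketch rather than the analysis from the book.

```python
import random
from collections import Counter

# A toy "truel": three players with made-up accuracies all fire at the same
# time each round, and each aims at the most accurate surviving opponent.

ACCURACIES = {"strong": 0.9, "medium": 0.7, "weak": 0.4}

def play_truel(accuracies, rng):
    """Return the survivor's name, or None if nobody is left standing."""
    alive = set(accuracies)
    while len(alive) > 1:
        # Each surviving player aims at the most accurate other survivor...
        targets = {s: max((p for p in alive if p != s), key=accuracies.get)
                   for s in alive}
        # ...and all the shots are resolved simultaneously.
        hits = {targets[s] for s in alive if rng.random() < accuracies[s]}
        alive -= hits
    return next(iter(alive), None)

rng = random.Random(1)
trials = 100_000
outcomes = Counter(play_truel(ACCURACIES, rng) for _ in range(trials))

for name in ["weak", "medium", "strong", None]:
    print(f"{str(name):>6}: survives {outcomes[name] / trials:.0%} of truels")

# With these numbers the weakest shooter comes out on top most often (roughly
# three-quarters of the time), because the two stronger players aim at each
# other and tend to take each other out in the first exchange.
```

The point is not the exact numbers, only that when the strongest players treat each other as the biggest threat, being the underdog can be a surprisingly good position.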

A favourable strategy for weaker participants in a multiplayer competitive game – namely staying in the background while the best fighters duel it out – has been arrived at naturally over and over again in the animal kingdom. While two of the most impressive specimens fight it out, killing or injuring each other, subordinate males can nip in and mate with the female.

So well established is this practice across the animal kingdom that it has its own name. Kleptogamy is derived from the Greek words klepto, meaning “to steal”, and gamos, meaning “marriage” or, more literally, “fertilisation”. The evolutionary game theorist John Maynard Smith, who came up with the theoretical idea of kleptogamy, preferred to call it the “sneaky f***er” strategy.

Returning to politics, in the run-up to the June 2009 Virginia Democratic gubernatorial primary in the US, state senator Creigh Deeds was floundering. In one January poll, he registered just 11 per cent support.

Over the next four months he polled higher than 22 per cent only once, as the other two candidates, Terry McAuliffe and Brian Moran, swapped the polling lead between themselves. Deeds’ fundraising campaign was also stuttering. In the first quarter of 2009 – a crucial period ahead of the election – he had raised just $600,000, compared with Moran’s $800,000 and McAuliffe’s $4.2 million. But in mid-May the game suddenly changed.

The candidates began to plough much of their remaining resources into negative advertising. Moran went hard at his main rival, McAuliffe, criticising his record as a businessman. McAuliffe responded to his biggest threat, Moran, with his own ad, defending his record and accusing Moran of “trying to divide Democrats”. Moran hit out again, criticising McAuliffe’s campaign against the now-incumbent president, Barack Obama, in the Democratic primaries preceding the 2008 election. Moran hoped that this would diminish McAuliffe’s standing in the eyes of the state’s crucial African American voters.

All the while, as the top two candidates chipped away at each other’s reputations, the unassuming underdog, Creigh Deeds, was planting seeds of positivity with his self-promoting advertising campaign. When the Washington Post came out and endorsed Deeds in late May, many undecided voters recognised him as a reasonable alternative to the two former frontrunners.

Deeds’ popularity in the polls shot up, and by early June he was polling at over 40 per cent. Each of the formerly stronger rivals seemed to have managed to convince Virginian voters that the other was not electable. In the election on 8 June, Deeds won just under 50 per cent of the vote to McAuliffe’s 26 per cent and Moran’s 24 per cent – a landslide for the weakest candidate.

The assumption of a two-party system does the electorate a disservice. Our democracy would be healthier if genuine multiparty politics were a reality, keeping all the parties honest and disincentivising negative advertising.

Different voting systems favoured by other countries, such as proportional representation or the alternative vote, might be a way to achieve this. They have the advantage that no vote is wasted – people feel free to vote for their preferred candidate rather than the candidate who is most likely to beat the only viable alternative.

Until the UK arrives at a system that incentivises genuinely multiparty politics, we will be stuck choosing between the red devil and the deep blue sea.

What your name really means and how it can affect your life

This post is adapted from my Indy Voices article of the same title originally published on 17/07/23

What do these three people have in common? Usain Bolt, the world’s fastest man; Margaret Court, the former world-number-one tennis player; and Thomas Crapper, the plumber and toilet designer who, contrary to popular belief, did not actually give an abbreviated form of his name to a slang word for defecation.

You can probably guess it straight away. Their names are all aptronyms – names which are particularly suited to their owners.

Possibly less well-known examples, but arguably even better-fitting ones, are the Jamaican cocaine trafficker Christopher Coke, the British judge Igor Judge, and the American columnist Marilyn vos Savant (who between 1985 and 1989 was listed in the Guinness Book of Records as having the world’s highest IQ).

Sara Blizzard, Dallas Raines and Amy Freeze are all television weather presenters, Russell Brain is a British neurologist and Michael Ball is a former professional footballer. I could go on. Some of these examples seem almost too apposite to have happened by chance.

Some scientists suggest that the reason these people ended up being renowned for their particular speciality is the influence, from an early age, of the name they bore. The hypothesis that such causative links exist is known as nominative determinism – a self-fulfilling prophecy I investigate in more detail in my new book, How to Expect the Unexpected.

One proposed explanation for why people might be drawn to professions which fit their name is a psychological phenomenon known as implicit egotism – the conjecture that people exhibit an often unconscious preference for things associated with themselves. That might mean marrying someone who shares their birthday, donating to good causes whose names begin with their initial, or gravitating towards a job that relates to their name.

In support of this idea, James Counsell mused on his eventual career path as a barrister: “How much is down to the subconscious is difficult to say, but the fact that your name is similar may be a reason for showing more interest in a profession than you might otherwise.”

There are a limited number of studies which purport to provide evidence that nominative determinism is a real phenomenon. Perhaps the most amusing of these studies was conducted in 2015 by a family of doctors and soon-to-be doctors: Christopher, Richard, Catherine and David Limb. Together the four Limbs clearly had a vested interest in understanding whether their appendage-related name had drawn them towards their anatomically focussed professions. Indeed, given the vocation of David Limb as an orthopaedic surgeon (specialising in shoulder and elbow surgery), the Limbs decided to ask a more in-depth question – whether a doctor’s name could influence their medical specialisation.

By analysing the General Medical Council’s register, they found that the frequency of names relevant to medicine and its specialities was far greater than would be expected by chance alone. One in every 21 neurologists had a name directly relevant to medicine, like Ward or Kurer, although far fewer had names relevant to that particular speciality – no Brains or Parkinsons, for example.

The specialities next most likely to have medically relevant names were genitourinary medicine and urology. The doctors in these subfields also had the highest proportion of names directly relevant to their speciality, including Ball, Koch, Dick, Cox, a single Balluch, and even a Waterfall. As the Limbs pointed out in their paper, this may have had something to do with the wide array of terms that exists for the parts of the anatomy relevant to these subfields.

Ironically, despite the purported evidence for the phenomenon, the fact that the two younger Limbs followed their parents into their profession hints at a strong role for familial influence in determining careers (in medicine, at least).

Before we decide whether we believe that our names can influence our future trajectories, though, it’s important to remember that for every aptronym we hear about there are plenty of Archers, Taylors, Bishops and Smiths, for example, whose names have no clear correlation with their line of work. It is also important to remember that correlation does not imply causation. Not every aptronym is an example of nominative determinism.

Whether nominative determinism is a self-fulfilling prophecy or just a fancy name for a series of amusing coincidences, finding examples of aptronyms like the lawyer Sue You, the Washington news bureau chief William Headline, the pro-tennis player Tennys Sandgren, or the novelist Francine Prose will always make me smile.

Here’s what we can do to prepare for the next pandemic

This post is adapted from my Indy Voices article of the same title originally published on 23/06/23

Phase one of the Covid inquiry has begun in earnest. Its remit is pandemic preparedness.

Nominally the guiding ethos of the inquiry is not to lay the blame at anyone’s door, but to learn lessons from the mistakes we made in preparing for the Covid pandemic so that we are better prepared for the next one.

The inquiry shares this premise with my book How to Expect the Unexpected. My focus in the book is to highlight the mistakes people have made when making predictions in the past, to identify the root causes of these failures and to suggest strategies that mean they are not revisited in the future or, better still, are rendered completely unrepeatable.

When it comes to thinking about the future, every plan we make represents a wager against the world’s uncertainties. Preparation is no different. The degree to which we prepare represents a trade-off between what we are willing to sacrifice now and the protection we hope to gain against the vagaries of the future.

In thinking about pandemic preparedness, we can draw lessons from other areas of disaster emergency planning. In the UK, for example, we don’t routinely prepare for earthquakes because the chances of experiencing large-magnitude earthquakes in this country are extremely low. In contrast, Japan routinely spends upwards of three per cent of its annual budget on disaster risk management. Since Japan began strengthening its disaster preparedness in the late 1950s, average annual deaths from disasters have fallen from the thousands to the low hundreds.

In How to Expect the Unexpected, I argue that although it is impossible to predict exactly when the next earthquake will hit, seismologists are confidently able to predict the frequencies of earthquakes in shock-prone areas and we are able to use that knowledge to help us to prepare and prioritise resources.

Public health scientists had been telling the same sorts of stories as the seismologists prior to the pandemic. Perhaps because almost no one alive in 2020 had experienced a global health emergency on the same scale as the Covid pandemic, many of their warnings went unheeded.

In his inquiry witness statement, Richard Hughes, the chair of the Office for Budget Responsibility, summed the situation up as follows: “While it may be difficult to predict when catastrophic risks will materialise, it is possible to anticipate their broad effects if they do. The risk of a global pandemic was at the top of government risk registers for a decade before coronavirus arrived, but attracted relatively little (and in hindsight far too little) attention …”

It’s clear from the evidence the inquiry has heard thus far that, in the period leading up to the appearance of Covid, attention was diverted from pandemic preparedness and towards Brexit. Emma Reed, the civil servant who took over responsibility for preparedness at the Department of Health and Social Care in 2018, said in her evidence that preparing for a no-deal Brexit took precedence over ensuring adult social and community care were bolstered. For a whole year, between November 2018 and November 2019, the cross-government Pandemic Flu Readiness Board did not meet once, sidelined by Brexit preparations.

The inquiry has also heard that only eight of the 22 recommendations of the 2016 pandemic preparedness exercise – Cygnus – were fully implemented by the time the pandemic hit. Professor Dame Sally Davies – the chief medical officer (CMO) during Cygnus – specifically drew attention to a shortage of medical ventilators at the time. Despite this, six weeks after the UK’s first Covid cases, Matt Hancock was pleading with British manufacturers, “If you produce a ventilator, we will buy it. No number is too high.” Despite David Cameron and George Osborne’s protestations that the austerity programme they implemented during their tenures did not leave the country ill-prepared ahead of the pandemic, others have presented evidence to the contrary.

Dame Sally Davies argued that the UK had “disinvested” in public health infrastructure, which directly affected public health resilience and left the UK “at the bottom of the table on the number of doctors, number of nurses, number of beds, number of IT units, number of ventilators”. Dame Jenny Harries, head of the UK Health Security Agency, testified that public health budgets were reduced as a result of austerity, leaving public health protection services “denuded”.

Although the warnings were there, it’s clear that they were not heeded. Officials chose to direct resources to other projects. Choosing not to prepare for a given eventuality is an implicit prediction about the future. Failing to stockpile personal protective equipment or to build health service capacity is the behaviour of a country implicitly betting against a pandemic.

Fathoming the future is not just about the “positive” predictions we explicitly formulate, but also the “negative” predictions we don’t. The latter, by their absence, are often harder to spot, but as we have seen with the UK’s pandemic preparations, their failure can be equally damaging.