That reality has hit hard in the UK, where an influx of investment into health tech has come thanks to the internationally respected, centralized National Health Service, which has tested new technology through a special unit called NHSX.
Health tech refers to a market in which companies use technology to solve healthcare problems. These range from chatbots that help patients triage symptoms of an illness, to fitness trackers that monitor a patient’s vital signs, to machine-learning algorithms that make hospital waiting rooms more efficient. A growing cohort of consumer mental health apps offers to help people manage stress or sleep better. Many of these systems say they use artificial intelligence, which can give them a funding boost in private markets.
In fact, funding for health-tech startups in the UK has soared from $420 million in 2016 to approximately $3.8 billion in 2021, according to data firm Dealroom and London promotional agency London & Partners. That put Britain in third place behind the US and China for health-tech investment last year.(1)
That funding is driven by the Golden Triangle of academic expertise between London, Oxford and Cambridge, which covers five of the world’s top 25 universities for life sciences and medicines.
But some of the country’s more mature health-tech firms, which got into this game early, are going through something of a midlife crisis, exacerbated by the wider loss of momentum in the pandemic-era health-tech boom in the US.
Part of the problem, according to staff and entrepreneurs from multiple health-tech firms, has been a clash of cultures between the ambitious and iterative world of engineering — where problems can be solved with the right algorithm — and the world of medicine, which calls for a more cautious, methodical approach. Medical researchers at health-tech firms have complained of being steamrolled by the move-fast-and-break-things approach of highly paid software engineers. The techies, for their part, complain of being unable to experiment freely in a world obsessed with patient safety and regulation.
The resulting stumbles not only hurt company profits; they also threaten to corrode patient trust in the NHS and other healthcare systems.
Among the more affected British players is Sensyne Health Plc, which uses artificial intelligence to analyze patient records to help pharmaceutical companies develop new medicines. To get that data, Sensyne has signed agreements with a handful of NHS trusts, such as Great Ormond Street Hospital for Children and Exeter NHS Trust. Together, the trusts own a 16.2% stake in the firm, granted in return for sharing patient data that the company says is anonymized.
But Sensyne found itself on the brink of collapse last month, after the company said it was close to running out of cash and would cut the majority of its staff, according to Sky News. The company had been fined £400,000 ($495,000) by the London Stock Exchange in November for failing to disclose bonus payments to its chief executive officer, a former British science minister, who stepped down last month. Publicly, the company said it suffered contract delays due to the Covid-19 pandemic. But its shift away from developing algorithms toward selling access to an analytics platform, as described on its website, also speaks to the challenge of applying cutting-edge AI to complex problems in medicine.
Another high-flying health-tech startup, Babylon Health, has seen its shares fall by nearly 87% since it went public last October through a blank-check company merger that valued it at $4.2 billion. It’s now worth about $528 million. The company has heavily marketed its use of artificial intelligence to give diagnostic advice to patients through a symptom checker on its app, but doctors have warned that the checker has given unsafe information. Babylon, in response, publicly described an oncologist who criticized its symptom checker as a “troll” who “tweeted defamatory content about us.”
Signs are pointing to artificial intelligence falling short of its promise in medicine more generally. Multiple clinical studies published last year showed that nearly all artificial intelligence tools used to try to predict a diagnosis of Covid-19 made no real difference or were potentially harmful. A separate study published in the British Medical Journal last year found that 94% of AI systems that scanned for signs of breast cancer were less accurate than the analysis of a single radiologist.
More disturbing than failed experiments is the risk to patients’ privacy when AI in healthcare goes wrong. Despite saying they anonymize the patient data used to train their algorithms, some health-tech firms don’t keep that information fully confidential, according to Phil Booth, coordinator of British data-privacy campaign organization medConfidential. His organization sent a letter in April warning several NHS trusts that the patient data they were providing to one health-tech company was in fact identifiable, because it could be linked back to certain markers.
“This is not an AI problem,” said Booth. “It’s technology coming into healthcare thinking it can outperform expert human beings at treating other human beings.”
It seems that when technology fails in that regard, humans pay the price.
(1) US health tech startups attracted approximately $32 billion in VC investment for most of 2021, along with $4.1 billion in China and $3.8 billion in the UK as of November 2021, according to Dealroom and London & Partners.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”