New Weapons and New Warnings Over Health Research


Applied Clinical Trials


Artificial intelligence is a "new weapon" in healthcare research, according to the UK prime minister, Theresa May, speaking in the north of England in late May. Determined to talk up the UK's capacities in life sciences as she negotiates her country's departure from the European Union, she urged the national health service and research-based firms to make fuller use of AI to "transform" diagnosis of life-threatening diseases.

"The development of smart technologies to analyze great quantities of data quickly and with a higher degree of accuracy than is possible by human beings opens up a whole new field of medical research," she said, highlighting the role of computer algorithms in inferring conclusions from information gleaned through patients' medical records, genetic data, and lifestyle habits.

Her officials have been throwing around forecasts that AI could help prevent tens of thousands of cancer deaths every year, and boost the battle to overcome heart disease, diabetes, and dementia, and May's speech triggered a chorus of support from health organizations and research charities. Cancer Research UK, which claims that halving the number of lung, bowel, prostate, and ovarian cancers diagnosed at an advanced stage could prevent thousands of deaths a year, described the government's plans as "pioneering".

But a report from the UK's Nuffield Council on Bioethics, issued the same week, raised what it described as "important questions" about the use of AI in healthcare. While AI has the potential to make healthcare more efficient and patient-friendly by speeding up diagnosis and helping avoid human bias and error, the report says, it also focuses attention on crucial issues of liability, dignity, and security. Hugh Whittall, Director of the Nuffield Council on Bioethics, says "the challenge will be to ensure that innovation in AI is developed and used in ways that are transparent, that address societal needs, and that are consistent with public values."

The report cites plenty of initiatives in which AI holds out new hope. It points to the Institute of Cancer Research's canSAR database, which combines genetic and clinical data from patients with information from scientific research and uses AI to make predictions about new targets for cancer drugs. It notes the AI "robot scientist" called Eve, designed to make drug discovery faster and more economical. It recognizes that AI systems used in healthcare could also be valuable for medical research by helping to match suitable patients to clinical studies. And it notes examples of AI being used to predict adverse drug reactions, which are estimated to cause up to 6.5 per cent of hospital admissions in the UK.

However, underlying concerns still need to be addressed, insists the report. Clinical practice often involves complex judgments and abilities that AI is currently unable to replicate, and there is also debate about whether some human knowledge is tacit and cannot be taught, it adds. It evokes a 2015 clinical trial in which an AI app was used to predict which patients were likely to develop complications following pneumonia, and therefore should be hospitalized. "This app erroneously instructed doctors to send home patients with asthma due to its inability to take contextual information into account", it states. And it emphasizes the need that patients and healthcare professionals have for trust, noting that clinical trials of IBM's Watson Oncology were "reportedly halted in some clinics as doctors outside the US did not have confidence in its recommendations, and felt that the model reflected an American-specific approach to cancer treatment".

The report also points to more technical challenges, including the fact that AI's usefulness is limited by the quality of available health data. Medical records are not consistently digitized, and current healthcare IT systems lack interoperability and standardization in digital record-keeping and data labelling. But ultimately the greatest challenge, the report suggests, may lie in the intrinsic nature of AI itself: its inability to possess human characteristics such as compassion.

As a telling nuance on Theresa May's optimistic encouragement to industry, the Nuffield report points out that AI has applications in fields that are subject to regulation, such as data protection, research, and healthcare, and that its development is so "fast-moving and entrepreneurial" that it "might challenge these established frameworks." For Nuffield, the key question is not whether AI should be regulated, but whether it should be regulated as a distinct area, or whether different areas of regulation should be reviewed with the possible impact of AI in mind.


Peter O'Donnell is a freelance journalist who specializes in European health affairs and is based in Brussels, Belgium
