Beyond the Black Box: Tailoring AI Regulation in Healthcare

As adoption increases, regulatory bodies are becoming more involved in the oversight of artificial intelligence.

As artificial intelligence (AI) technology rapidly evolves, regulators around the world are tasked with aligning safety and reliability standards across a broad range of applications. The recent EU AI Act represents a significant step in this direction, introducing a general risk-based framework for classifying AI systems. However, the Act's broad scope and vague definitions of "high-risk" AI applications have raised fears of overregulation that could stifle innovation, particularly in critical areas like drug development and healthcare, where AI has already made significant inroads.

AI's potential within healthcare is immense, but the current approach of broadly categorizing many AI applications as "high-risk" could lead to overly restrictive regulations that hinder the development of advanced tools. A sector-specific regulatory approach, by contrast, would allow a more thorough assessment of the risks associated with specific AI applications, enabling regulatory bodies like the FDA and the EMA to strike a better balance among safety, speed, and innovation. As such, it is crucial that regulatory requirements be tailored to the specific contexts in which AI models are used rather than based solely on the models' characteristics.

For instance, the risk potential of AI models that operate as "black boxes" (where the logic behind the model's output is not immediately understandable by humans) varies significantly depending on their application. An AI model used for autonomous decision-making (ADM) in patient treatment poses a greater risk if its reasoning is not explainable than one used to identify patterns across populations. Yet the EMA's draft reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle suggests a "high-risk" classification for all non-transparent models in late-stage clinical development, a broad categorization that may not accurately reflect the true risks of such applications. Applying the high-risk label to every potential use case for "non-transparent models" is short-sighted, especially considering that the same reflection paper presents three conflicting definitions of model transparency.

In the case of ADM, the EU General Data Protection Regulation (GDPR) already grants individuals a right to an explanation (and, in some cases, the right to opt out entirely) of decisions made about them by automated means using their personal data. Conversely, for AI uses that pose minimal risk to individuals, an explainability requirement of this kind may be less appropriate than, for example, ensuring reliable and consistent model performance within the specific application (the context-of-use). This attention to context-of-use ensures that AI can be used responsibly but without unnecessary constraints, promoting both innovation and patient safety.

To ensure the appropriate evaluation of AI models, governments will need to equip regulatory agencies with the necessary resources, education, and authority. This support will enable regulators to differentiate between actual and perceived risks, ensuring that requirements are grounded in the potential impacts of AI applications. To this end, it is promising that the EMA and FDA have been actively engaging stakeholders who use AI in drug development to generate appropriate regulatory guidance surrounding its use. Such collaboration can deepen the understanding of AI's diverse applications and help develop meaningful, context-appropriate regulations that promote safe healthcare innovation.

A nuanced, context-aware regulatory approach will be key to harnessing the benefits of AI in healthcare while maintaining high standards of safety and efficacy. AI holds the potential to significantly accelerate the development of new and effective treatments, but this will only be possible if the regulation of these technologies is sensitive to the specific risks and benefits of different AI applications.

Jess Ross, PhD, Senior Government Affairs Lead, Unlearn.AI
