FDA and EMA Align on Ten Principles to Guide Artificial Intelligence Use in Drug Development
Key Takeaways
- The FDA and EMA have developed ten guiding principles for AI use in drug development, promoting regulatory alignment and innovation.
- These principles emphasize a human-centric, risk-based approach, focusing on data governance, multidisciplinary expertise, and transparent model development.
The FDA and EMA have aligned on ten guiding principles for the responsible use of artificial intelligence across the drug development lifecycle, establishing a shared framework to support innovation, regulatory consistency, and patient safety.
The FDA and EMA have jointly identified ten guiding principles for good artificial intelligence (AI) practice across the medicines lifecycle, marking a significant step toward regulatory alignment on the use of advanced technologies in drug development.1
The principles provide high-level guidance on the use of AI in evidence generation and monitoring, spanning early research, clinical trials, manufacturing, and post-market safety surveillance. They are intended to inform sponsors, marketing authorization applicants, and authorization holders, while laying the foundation for future AI-specific guidance in both jurisdictions.2
Establishing a shared framework for responsible AI adoption
The joint principles are designed to support the safe, effective, and compliant use of AI technologies while enabling innovation across the drug development lifecycle. Regulators emphasized that the framework will underpin future guidance and help advance international collaboration among regulatory agencies, standards organizations, and other stakeholders.
“The guiding principles of good AI practice in drug development are a first step of a renewed EU-US cooperation in the field of novel medical technologies,” said Olivér Várhelyi, European Commissioner for Health and Animal Welfare, in an EMA news release. “They show how we can work together on both sides of the Atlantic to preserve our leading role in global innovation, while ensuring the highest level of patient safety.”
Guideline development in the European Union is already underway, building on the EMA’s AI reflection paper published in 2024.
Supporting AI across the full medicines lifecycle
According to the FDA, AI is increasingly being used to generate and analyze evidence across nonclinical, clinical, manufacturing, and post-marketing phases. The agencies noted that while AI has the potential to reduce development timelines, improve decision-making, and enhance pharmacovigilance, its complexity requires careful oversight throughout its lifecycle.3
The ten principles emphasize a human-centric, risk-based approach to AI adoption, with proportional validation, clear definition of context of use, adherence to applicable standards, and robust data governance. They also call for multidisciplinary expertise, transparent model development practices, lifecycle performance monitoring, and clear communication of AI limitations and outputs to users and patients.
Aligning regulatory expectations amid rising AI adoption
Regulators said the principles are intended to create a common foundation for good practice as AI adoption accelerates across the industry. The FDA’s Center for Drug Evaluation and Research (CDER) has seen a significant increase in submissions incorporating AI components in recent years, spanning clinical trials, real-world data analytics, and digital health technologies.
The joint effort builds on prior FDA and EMA collaboration following a bilateral meeting in April 2024 and aligns with the European Medicines Agencies Network Strategy to 2028, which prioritizes data, digitalization, and AI to support regulatory decision-making.
From the FDA’s perspective, the principles complement ongoing work to establish a risk-based regulatory framework that enables innovation while protecting patient safety. The agency has previously issued draft guidance on the use of AI to support regulatory decision-making and has coordinated internal AI governance through the establishment of the CDER AI Council.
Laying the groundwork for future guidance and convergence
Both agencies emphasized that the principles are not prescriptive requirements, but rather a foundation that will evolve alongside the technology. Over time, they are expected to be supplemented by additional guidance reflecting emerging use cases, scientific advances, and applicable legal frameworks.
With ethics and patient safety positioned as core considerations, the FDA and EMA said they will continue to pursue global convergence on AI-related topics in collaboration with international public health partners, supporting responsible innovation while enabling broader adoption of AI-driven approaches in drug development.
Earlier commentary on regulatory oversight of AI
In 2025, Applied Clinical Trials spoke with Jon Walsh, founder and chief scientific officer of Unlearn.AI, to gain insight into the regulatory implications of using AI-generated data in trials.
“Regulators have been very clear in their positions and guidance on how to use AI in drug development more safely, effectively, and in line with their expectations,” Walsh explained during a video interview.
References
1. EMA and FDA set common principles for AI in medicine development. News release. EMA. January 14, 2026. Accessed January 28, 2026.
2. Guiding Principles of Good AI Practice in Drug Development. FDA; EMA. January 2026. Accessed January 28, 2026.
3. Artificial Intelligence for Drug Development. FDA. January 14, 2026. Accessed January 28, 2026.