Setting the Limits of Autonomy with Autonomous Agents for Clinical Research
Key Takeaways
- AI agents can significantly reduce drug development timelines by automating low-risk, high-labor activities, enhancing efficiency in clinical trials.
- Human-in-the-loop models are favored in clinical research to ensure safety and compliance, with AI recommendations validated by humans.
As pharma wrestles with whether to trust fully autonomous AI, semi-autonomous agents are emerging as a safer middle ground that reduces manual work, eliminates white space in clinical development, and accelerates trial timelines without compromising patient safety.
Artificial intelligence (AI) has quickly evolved from experimental pilot programs to tangible, everyday practice across a range of industries, as Capgemini’s 2025 research attests.
Yet despite this rapidly accelerating growth, the faith business leaders have in fully autonomous AI systems has slipped markedly over the last year. Capgemini reports that trust in AI agents has fallen sharply from 43% to 27%, largely driven by concerns about privacy, ethics, and algorithmic bias.
Business leaders are facing a pivotal question: how much should we trust this powerful tool, and how, exactly, should we safely deploy it going forward?
Across most industries, experimentation carries relatively low risk. That is not the case in clinical research, where patient safety, regulatory compliance, and data integrity demand extreme caution. In this cautious and highly regulated world, technology reliability is paramount, and leaders are naturally risk averse. Unfortunately, the industry’s understandably low risk tolerance is one reason innovation in clinical development has historically lagged behind other industries. In a global economy where many activities are automated, an estimated 70-80% of clinical trial work is still performed entirely by humans. It’s no surprise that it still takes 10-15 years and billions of dollars to bring medical breakthroughs to patients, numbers that haven’t budged in decades.
AI agents offer something uniquely different. When properly applied, they can replace unproductive white space, manual work, and process variability with streamlined, automated, uniform workflows. One top-10 global pharmaceutical company, for instance, estimates that AI agents can reduce drug development timelines by more than half. The company uses AI agents across the clinical development life cycle in various ways, starting with agents handling low-risk activities like uploading documents into key systems (e.g., eTMFs). It is exploring many other key use cases, too, from protocol design optimization to next-best-action recommendations that guide decision-making, especially when trial parameters suddenly change, as they always do.
“We believe orchestrated AI agents will play a big role in running our trials soon,” said an AI strategy executive from the same top-10 global pharmaceutical company. “Already, we see them as a way to reduce timelines from the industry average of over a decade to fewer than four years.”
Keeping humans always in the clinical trial loop
Even amid the excitement over the potential of AI agents, caveats remain, chiefly: what level of autonomy should agents have when used for clinical development? Ultimately, the full benefit of an AI agent is realized when it can be trusted to be fully autonomous, not only devising a solution but executing it. However, most life sciences companies remain cautious and pragmatic, starting with human-in-the-loop agent models, in which a human validates the agent’s recommendation before acting and guardrails are built into the AI models. Some companies may never go beyond that; others will expand agents’ independence over time. As the use of AI agents grows, the balance of human-to-machine intervention in clinical research will shift, and may even ebb and flow.
The key is attaining the right mix of AI and human efforts designed specifically to enhance the reliability and efficiency of clinical trials.
Start by leveraging agents for low-risk, high-labor activities. Clinical trial monitoring, for example, is time- and resource-intensive: raising queries, visiting sites, reporting, and much more, often spread across too many disconnected systems to work efficiently. An agent designed to offload these laborious activities can optimize clinical research associates’ time. Similarly, agents can ease the significant administrative burden in data management. From data capture and analysis to sharing results or uploading them into regulatory submission documents, data management is another area that agents can dramatically streamline.
Cost of white space in clinical development
Few industries operate with as much built-in inefficiency as clinical research. “White space,” the unproductive gaps between trial phases and activities, accounts for a substantial share of overall development time.
Agents can eliminate many of these bottlenecks with minimal human intervention. Semi-autonomous AI agents can replace the repetitive, highly manual activities that contribute to white space, such as follow-up and administrative work (e.g., managing routine correspondence with study coordinators and site personnel, and generating standard progress reports and dashboard updates), as well as data processing and analysis tasks such as:
- Performing initial data cleaning and validation checks
- Highlighting risks to site and study health through data review across multiple systems
- Creating patient recruitment and enrollment reports
- Conducting routine database queries for operational metrics
- Preparing standardized interim safety data summaries
“Agents don’t just provide insights,” explained the pharmaceutical AI executive. “They can execute and string together key workflows. If you propose adding a new site to a study, for instance, an AI agent can create the workflow to make that happen quickly. Ultimately, agents are the antidote to high levels of wasted white space.”
Where do we draw the line?
By definition, agents can act autonomously, and while autonomy has obvious benefits, including greater speed and efficiency, it comes with clear risks. We know and accept that human error is inevitable, and that it occurs at a human pace, perhaps every few minutes. An agentic error, by contrast, could replicate itself across thousands of processes in milliseconds, creating a seismic impact that would be difficult to correct.
“We wouldn’t go fully autonomous in clinical trials. A human-in-the-loop will be required for the foreseeable future for key safety activities, however other admin tasks can be made fully autonomous,” noted the AI executive.
The best option is to deploy semi-autonomous models where agents recommend actions, but humans validate and approve those actions. As reliability grows over time, organizations will increase their confidence and comfort level, choosing to dial human oversight up or down based on their overall level of risk and regulatory comfort.
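A minimal sketch of such a semi-autonomous, human-in-the-loop pattern (all class and function names here are hypothetical, for illustration only, not any vendor’s API): the agent only proposes actions, and nothing executes until a human reviewer approves.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProposedAction:
    """An action the agent recommends but cannot execute on its own."""
    description: str
    execute: Callable[[], str]
    approved: bool = False

class HumanInTheLoopAgent:
    """The agent proposes; a human must approve before anything runs."""

    def __init__(self) -> None:
        self.queue: List[ProposedAction] = []

    def propose(self, description: str, execute: Callable[[], str]) -> ProposedAction:
        # Agent adds a recommendation to the review queue.
        action = ProposedAction(description, execute)
        self.queue.append(action)
        return action

    def approve(self, action: ProposedAction) -> None:
        # Human reviewer signs off on one recommendation.
        action.approved = True

    def run_approved(self) -> List[str]:
        # Only approved actions execute; unapproved ones stay queued.
        results = [a.execute() for a in self.queue if a.approved]
        self.queue = [a for a in self.queue if not a.approved]
        return results

# Usage: the agent drafts two actions; a reviewer approves only one.
agent = HumanInTheLoopAgent()
upload = agent.propose("Upload signed protocol to eTMF", lambda: "uploaded")
agent.propose("Amend protocol", lambda: "amended")  # stays unapproved
agent.approve(upload)
print(agent.run_approved())  # → ['uploaded']
```

Dialing oversight up or down then becomes a policy choice about which action types require `approve()` before `run_approved()` will touch them.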
For example, some mission-critical activities must retain robust human oversight to ensure regulatory compliance and protect the integrity of the research process. Regulatory submission and documentation activities are a case in point:
- SAE (serious adverse event) reporting: Human clinicians must review, interpret, and approve all serious adverse event reports, as these directly impact patient safety.
- Regulatory submission documents: Creation of CTD (common technical document) sections, INDs (Investigational New Drug applications), and other regulatory filings require human expertise to ensure accuracy, completeness, and compliance with evolving regulatory guidance.
- Protocol amendments: Any changes to study protocols must be reviewed by qualified medical and regulatory professionals to assess clinical implications and regulatory requirements.
Guardrails, risk, and trust
Building trust in AI agents requires both technical rigor and cultural change. Regulators are watching closely, but so far, they are not discouraging use.
“No one is saying don’t use agents. What matters is transparency. We must be able to show the data lineage, demonstrate explainability, and maintain auditable models. We anticipate that regulators will require us to tag data generated by an agent versus a human,” said the same AI expert.
Risk-based frameworks are essential, and are required by guidelines such as ICH E6(R3). Organizations should assess the likelihood, speed, and impact of a potential error, then set the appropriate level of autonomy based on their findings. While a document-upload agent may operate with near-full independence, a safety-reporting agent might never act without human oversight.
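The kind of risk-based tiering described above could be sketched as follows. The 1-5 scales, thresholds, and tier names are illustrative assumptions, not taken from ICH E6 R3; a real framework would be defined by the organization’s quality and regulatory teams.

```python
def autonomy_tier(likelihood: int, speed: int, impact: int) -> str:
    """Map a simple 1-5 risk assessment to an autonomy level.

    likelihood: how likely an agent error is (1 = rare, 5 = frequent)
    speed:      how fast an error would propagate (1 = slow, 5 = instant)
    impact:     consequence severity (1 = cosmetic, 5 = patient safety)
    """
    # Anything touching patient safety stays human-gated, regardless of score.
    if impact >= 4:
        return "human-in-the-loop"
    score = likelihood * speed * impact  # crude illustrative composite
    if score <= 12:
        return "near-full autonomy"
    if score <= 40:
        return "autonomous with spot checks"
    return "human-in-the-loop"

# A document-upload agent: low impact, errors easy to catch and reverse.
print(autonomy_tier(likelihood=2, speed=3, impact=1))  # → near-full autonomy
# A safety-reporting agent: patient-safety impact, so always human-gated.
print(autonomy_tier(likelihood=1, speed=2, impact=5))  # → human-in-the-loop
```

The hard floor on the `impact` dimension reflects the point made above: some activities never earn full autonomy no matter how reliable the agent becomes.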
Cultural factors will prove to be just as critical. Change management remains the largest hurdle as AI agents fundamentally reshape how we think about our work. To ensure they become standard practice rather than obstacles, organizations must train their workforces and openly share what they learn.
The road ahead
It’s important to recognize that autonomy in clinical research does not have to be all or nothing. Instead, it will evolve along a continuum. Today, most organizations will remain cautiously optimistic, wisely deploying semi-autonomous agents in low-risk areas of their work. As systems prove reliable, autonomy will naturally expand and evolve, guided by risk-based frameworks and balanced by transparent, human-in-the-loop oversight.
Simply put, trust must be earned, and patient safety should always remain paramount. Still, with careful adoption and implementation, AI agents might be the tool that the life sciences industry has been looking for to finally overcome the historic and systemic inefficiencies that have long hindered progress.
The real power of agents is in reducing manual effort to eliminate white space. Even with humans in the loop, that’s a transformative change.
While fully autonomous AI agents may never be appropriate for every aspect of clinical research, dismissing their potential value out of fear is shortsighted. The path forward lies somewhere in the middle. By setting limits and balancing human oversight with machine autonomy that is guided by risk management and transparency, we can achieve levels of efficiency that were once thought impossible.