Commentary | January 15, 2026

Applied Clinical Trials

  • Applied Clinical Trials-02-01-2026
  • Volume 35
  • Issue 1

Extending Trial Registries for an AI-Driven Era of Patient Discovery


As AI-driven search becomes the primary way patients discover research opportunities, the quality and structure of clinical trial registry data will determine whether transparency translates into real, equitable access.

Registries in an AI-mediated world

  • Over 450,000 studies are listed in public registries, forming the authoritative source of trial truth
  • Registries ensure compliance and transparency but were not designed for patient discovery
  • AI can translate registry data into plain language only if records are accurate and current
  • Outdated site contacts and technical listings create equity and feasibility risks
  • Treating registries as living infrastructure enables discovery without altering the regulatory record

Clinical trial registries have quietly done something remarkable for our industry. Two decades ago, most patients, and many clinicians, had no reliable way of knowing what studies were underway, let alone how to engage with them. The emergence of public registries such as ClinicalTrials.gov, reinforced by ICMJE and regulatory expectations, changed that. Today, more than 450,000 clinical studies are registered in public registries worldwide, forming a shared foundation of transparency and accountability across academic and commercial research alike.1,2

What has shifted more dramatically since then is not the registry landscape, but the way people look for information.

Search engines, social platforms, and now large language models have become the front door to health knowledge. Patients no longer arrive via a referral letter or advocacy newsletter alone. They search. They ask questions in plain language. They expect answers that make sense to them, quickly. This is not a reason to dismantle registry infrastructure. It is a reason to extend it.

Public registries were never designed as discovery tools. They were designed to declare that a study exists, what it intends to test, and who is responsible for it. That compliance function remains essential and non-negotiable. But it is not the only public good that registries can serve.

Most registry entries are written under tight formatting constraints, optimized for regulators and auditors rather than for patients or community clinicians. Eligibility criteria are technical by necessity. Site contact details are often generic or become outdated as studies evolve. In practice, this means the information is there, but difficult to act on. Interest stalls before it turns into a conversation.

This is where generative artificial intelligence (AI) changes the stakes. Large language models are exceptionally good at translating, summarizing, and connecting information across sources. Used well, they can turn structured trial data into something understandable and relevant. Used poorly, they will infer, approximate, and fill gaps with confidence. The difference lies almost entirely in the quality, structure, and currency of the underlying data.

It is helpful to think of trial registries as operating systems. They provide stability, authority, and a single source of truth. What they lack, by design, is flexibility. That flexibility can come from modular, interoperable plug-in services layered on top of registries rather than built in place of them.

We already see early versions of this across the ecosystem, including tools that generate plain-language summaries from protocol data, systems that verify site contact details against institutional directories, and interfaces that combine registry listings with geolocation, language translation, or privacy-preserving pre-screening. Individually, these tools solve narrow problems. Together, they turn static listings into living pathways between people and research teams. Importantly, they do so without altering the regulatory record itself, all within existing registry governance frameworks.3
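To illustrate what "layered on top, rather than built in place of" can look like in practice, the sketch below pulls a handful of records from the public ClinicalTrials.gov API and reduces each to the few fields a plain-language summary would start from. It is a minimal sketch under stated assumptions, not a production tool: the endpoint, parameter names, and field paths reflect the publicly documented v2 API and should be verified against current documentation before use. Note that the registry record is only read, never modified.

```python
"""
Minimal sketch of a read-only "plug-in" layered on a public registry.

Assumptions (not drawn from the article): the ClinicalTrials.gov API v2
endpoint and the parameter/field names below; verify them against the
current API documentation before relying on this.
"""
import requests

REGISTRY_API = "https://clinicaltrials.gov/api/v2/studies"  # public, read-only


def fetch_studies(condition: str, limit: int = 5) -> list[dict]:
    """Pull a small set of registry records for a condition (read-only)."""
    params = {
        "query.cond": condition,  # condition search term (assumed v2 parameter name)
        "pageSize": limit,        # keep the sample small
        "format": "json",
    }
    resp = requests.get(REGISTRY_API, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("studies", [])


def plain_language_stub(study: dict) -> str:
    """Reduce a structured record to the fields a patient-facing summary
    would start from; a downstream template or language model would do
    the actual plain-language rewrite. Field paths are assumptions."""
    proto = study.get("protocolSection", {})
    ident = proto.get("identificationModule", {})
    desc = proto.get("descriptionModule", {})
    return (
        f"{ident.get('briefTitle', 'Untitled study')} "
        f"({ident.get('nctId', 'no ID')}): "
        f"{desc.get('briefSummary', 'No summary on file.')[:200]}..."
    )


if __name__ == "__main__":
    for study in fetch_studies("Alzheimer disease"):
        print(plain_language_stub(study))
```

The design point is that everything the plug-in adds lives outside the registry: the regulatory record stays authoritative and untouched, while the layered service handles translation and presentation.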

AI-enabled search is rapidly becoming the default discovery interface for clinical research. When someone asks, “Is there a new Alzheimer’s study near me?”, they are not asking for a registry identifier. They are asking whether something real, relevant, and reachable exists. Whether the answer they receive is accurate depends on how well our public data are maintained and how responsibly they are extended.

Poor data quality does more than reduce visibility. It introduces downstream error. For rare diseases, regional studies, or trials outside major academic centers, that error compounds quickly. This reframes registry hygiene as an equity issue. Clear, machine-readable, and up-to-date records allow AI systems to surface opportunities that would otherwise remain invisible, rather than defaulting to the largest or loudest studies online.4

For sponsors and operations teams, this is not about marketing reach. It is about feasibility realism. Eligibility does not equal intent. Intent does not equal contact. If an inquiry is routed to a dormant inbox or a central sponsor address with no clear owner, momentum is lost before a study team ever knows it existed. Recruitment projections that ignore contactability tend to over-promise upstream and under-resource sites downstream.

Treating registry data as living infrastructure changes that dynamic. Verified site contacts, realistic descriptions of participation burden, and simple capacity controls help align public interest with what sites can genuinely support. The result is fewer dead ends, better-informed inquiries, and more honest feasibility planning.

None of this requires new regulation. It requires shared attention. Practical steps include adopting interoperable data standards that allow read-only access by verified partners, prioritizing routine validation of site contact details, supporting non-technical summaries that can be reused across approved downstream tools, and encouraging AI platforms to reference authoritative registry data rather than third-party marketing content. Just as importantly, it requires measuring whether public listings lead to meaningful connections, not simply whether disclosure boxes have been ticked.
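To make "routine validation" concrete, the short sketch below shows the kind of automated hygiene check a sponsor or registry partner might run over its own listings: flagging stale update dates, generic contact inboxes, and missing plain-language summaries. The record structure and field names here are hypothetical placeholders, not any registry's actual schema.

```python
"""
Illustrative hygiene check for trial listings. The record structure and
field names are hypothetical, not drawn from any specific registry schema.
"""
from datetime import date, timedelta

# Assumption: shared inboxes like these often have no clear owner.
GENERIC_INBOX_PREFIXES = ("info@", "clinicaltrials@", "contact@")


def hygiene_flags(record: dict, max_staleness_days: int = 180) -> list[str]:
    """Return human-readable flags for a single listing."""
    flags = []

    # Flag listings that have not been verified or updated recently.
    last_update = record.get("last_update")  # hypothetical field: a datetime.date
    if last_update is None or date.today() - last_update > timedelta(days=max_staleness_days):
        flags.append(f"record not verified in the last {max_staleness_days} days")

    # Flag contacts routed to shared inboxes with no named owner.
    contact = record.get("site_contact_email", "")
    if not contact or contact.lower().startswith(GENERIC_INBOX_PREFIXES):
        flags.append("site contact is missing or a generic inbox")

    # Flag listings with no plain-language summary for approved downstream tools to reuse.
    if not record.get("plain_language_summary"):
        flags.append("no reusable plain-language summary")

    return flags


if __name__ == "__main__":
    example = {
        "nct_id": "NCT00000000",  # placeholder identifier
        "last_update": date(2025, 3, 1),
        "site_contact_email": "info@example-hospital.org",
        "plain_language_summary": "",
    }
    for flag in hygiene_flags(example):
        print(flag)
```

Checks like these are deliberately simple; their value lies in being run routinely and in tying the results to the measure that matters, whether public listings lead to meaningful connections.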

Clinical trial registries remain one of the most important trust instruments our industry has built. In an AI-mediated world, their role can expand from transparency alone to true accessibility. Patients will continue to search. AI systems will continue to interpret. The question is whether the information they find will be accurate, equitable, and connected to teams ready to respond.

Extending registries thoughtfully allows us to meet that moment, without losing what made them valuable in the first place.

Hugo Stephenson, MD, is Co-Founder and Executive Chair of TrialScreen and a regular contributor to Applied Clinical Trials.

References
  1. World Health Organization. International Clinical Trials Registry Platform (ICTRP). Global trial registration data and search portal.
  2. International Committee of Medical Journal Editors. Clinical trial registration requirements.
  3. World Health Organization. International standards for clinical trial registries (Version 3.0).
  4. U.S. Food and Drug Administration. FDA Amendments Act (FDAAA) 801 and clinical trial transparency guidance.
