Glimpsing the Future of Clinical Trial Design


Applied Clinical Trials

June 1, 2019
Volume 28
Issue 6

A look at three contemporary trends that, though integrated cautiously at first, may open up a reimagined world of clinical research.

Randomized controlled trials (RCTs) have long been held as the gold standard in medical research, and for good reason. By creating a rigid study design and splitting patients into “treatment” and “control” arms, RCTs are widely accepted as the best way to determine whether an experimental new treatment is indeed better than the standard of care.

But RCTs have a major drawback: they’re expensive, time-consuming, and labor-intensive. To make matters worse, once a trial is finished and the results are tabulated, the entire trial operation is often shuttered, meaning much of the data that was collected will never be used again.

We live in a data-rich world. Outside healthcare, other industries successfully leverage large datasets and advanced computer science techniques such as machine learning to streamline business operations. But within the clinical research industry, trial design has not been greatly enhanced by the trappings of modern computational technology.

There are signs that the industry is ready for change, with three trends converging on the clinical research landscape that promise to gradually improve clinical trial design without undermining the sanctity of the RCT gold standard. The first of these trends is the “common protocol template,” the concept of using one streamlined protocol that is then adapted to specific research programs. The second is “quantitative” trial design, which leverages predictive analytics to streamline the planning, execution, and operational side of clinical research efforts.

The third is the synthetic placebo arm: using existing patient data instead of enrolling new patients into the placebo arm of a randomized trial and giving them a dummy drug. These new approaches are not without their detractors, but most have received tacit, if not full-throated, support from the FDA.

In their own ways, all three concepts (the common protocol, quantitative design, and synthetic placebo arms) are attempting to leverage modern computational firepower to make clinical trials cheaper for sponsors, safer for patients, and better for public health.

These three concepts “are solutions to the general problem and challenge [of] getting the scientific process to become more efficient,” says David Lee, chief data officer at Medidata Solutions. The idea, Lee adds, is to get results that are “as accurate as possible, as fast as possible, and in the least risky way possible.”

Today, these new trial concepts are rolling out cautiously in low-stakes settings. But in the future, they may lead to a reimagined world of clinical research.

Laying a foundational framework with a common protocol template

A study protocol is basically a roadmap for each clinical trial. Over time, pharmaceutical sponsors have developed their own proprietary study protocols, each of which varies from sponsor to sponsor. Meanwhile, protocols have grown more complex as clinical research questions have become more exacting and precise, driving up the cost and time of conducting a clinical trial.

The complexity and diversity of these protocols creates headaches for study sites, institutional review boards (IRBs), and regulators who are asked to read and interpret a dizzying array of study protocols from various sponsors.

Enter the concept of a common protocol, or a single streamlined template that is consistent across study sponsors. “The reason a [common protocol] was needed is every sponsor of research writes their protocols differently,” says Vivian Combs, advisor/process owner of clinical information and process automation at Eli Lilly and Company, and also the lead of the Common Protocol Template Initiative at TransCelerate BioPharma, a non-profit collaboration of more than 19 biopharmaceutical companies.

Historically, says Combs, “there has not been very much clear guidance around what an ideal protocol looks like.” But over the past decade, pharmaceutical companies (represented by TransCelerate), the FDA, and the National Institutes of Health (NIH) all realized the need to straighten up the protocol mess, and began to separately develop streamlined protocol templates.

By 2017, all three groups (TransCelerate, FDA, and NIH) had teamed up to harmonize their respective templates. Today, the templates have been finalized, each slightly fine-tuned for specific research purposes (for example, the NIH template is geared toward investigators who receive government grants and funding, while TransCelerate’s template is designed for pharmaceutical sponsors).

Since it was launched in 2017, TransCelerate’s Common Protocol Template (CPT) has been downloaded over 7,600 times, primarily by biopharmaceutical companies, but also by a mix of government agencies, major cancer centers, and independent researchers.

“TransCelerate is certainly leading the charge here,” says Tom Lehmann, managing director, Accenture Life Sciences. “It’s certainly very beneficial for the industry to create some operational consistency, particularly as you’re moving into more data sharing.”

When it comes to data sharing, there was plenty of room for improvement. In the 1980s and ’90s, study protocols were written on paper, but even after pharma sponsors ditched paper and pencil in favor of computers, protocols were formatted in a way that made it difficult to share information. For example, data entered in a study protocol at one site is often manually re-entered at multiple points throughout the life cycle of a clinical trial, a process rife with errors. Meanwhile, any changes made to the master protocol must be communicated to the study sites in a time-consuming, manual process.

The increasing complexity of protocols makes these problems worse. A 2016 analysis by the Tufts Center for the Study of Drug Development (CSDD) found that among protocols that had a substantial change during a trial, 23% of those changes were completely avoidable, with some due to human error. Protocol non-compliance, in which sites don’t follow the protocol exactly, has also grown rapidly over the past decade and now accounts for 46% of all site deficiencies, according to research published by Tufts in 2015.

But thanks to advances in computer programming, it’s now possible to rely on computers to auto-populate fields across different systems and store those fields to be used for different purposes later on.

“If a machine could ‘read’ the protocol,” explains Combs, “a human wouldn’t have to do the work of structuring the information and feeding it into the next downstream system.”

“There are a dozen or more downstream processes that are waiting for that information,” says Combs. These include case report forms (CRFs), grant applications, statistical analysis plans, and clinical trial registries. With a truly digital protocol, these downstream systems could be automatically updated, much like an automatic software update, rather than manually re-entered by research staff.
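The payoff Combs describes is easiest to see in miniature. In the hypothetical sketch below (the field names and structure are invented for illustration, not drawn from any real template), the protocol is a single structured document, and downstream artifacts such as CRF fields and a registry record are derived from it rather than retyped:

```python
# Illustrative sketch only: a machine-readable protocol as structured data.
# Field names here are hypothetical, not taken from any real template.

protocol = {
    "title": "Drug X vs. placebo in type 2 diabetes",
    "phase": "III",
    "arms": ["Drug X 10 mg", "Placebo"],
    "endpoints": ["Change in HbA1c"],
    "visit_schedule_weeks": [0, 4, 12, 26],
}

def case_report_form(p):
    """Derive CRF data-collection fields from the protocol."""
    return [f"Week {w}: {e}"
            for w in p["visit_schedule_weeks"][1:]   # post-baseline visits
            for e in p["endpoints"]]

def registry_record(p):
    """Derive a trial-registry summary from the same source document."""
    return {"title": p["title"], "phase": p["phase"],
            "number_of_arms": len(p["arms"])}

# A protocol amendment (here, adding a week-52 visit) propagates automatically
# to every derived artifact instead of being re-keyed by hand:
protocol["visit_schedule_weeks"].append(52)
print(len(case_report_form(protocol)))  # 4 entries: one per post-baseline visit
```

In this setup, an amendment is a one-line change to the source document, and the downstream systems regenerate themselves, much like the automatic software update Combs describes.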

Still, there is some resistance to the widespread adoption of TransCelerate’s Common Protocol Template, most of it having to do with industry inertia, says Combs.

Traditionally, she says, “the protocol has been, within companies, a place to capture that institutional knowledge. It has become a holding ground for those kind of lessons learned.”

“I think everybody understands the value of having a common template, but it’s hard to let those things go,” adds Combs.

Looking to the future, Combs says that common protocols will facilitate data sharing between companies, encouraging greater collaboration among pharma firms and reducing unnecessary clinical research costs.

According to Stuart J. Pocock, professor of medical statistics at the London School of Hygiene and Tropical Medicine, it’s a good idea for pharmaceutical companies to collaborate on a common template.

“This would enhance the field,” he says, “and at the end of the day, you can combine the results more reliably in a meta-analysis.” Currently, conducting a meta-analysis is a difficult and time-consuming endeavor because biostatisticians must pull data from a myriad of different studies that were all executed on a diverse array of protocols. In the future, when more research is conducted under a common protocol, meta-analysis will become more accurate and reliable, experts predict.
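Pocock’s point about combining results can be made concrete. The sketch below applies standard fixed-effect, inverse-variance meta-analysis to three hypothetical trials reporting the same standardized endpoint; the effect estimates and standard errors are made-up illustrations:

```python
import math

# Fixed-effect, inverse-variance meta-analysis (a standard textbook method).
# The effect estimates and standard errors below are invented illustrations.
studies = [
    {"effect": -0.30, "se": 0.15},   # treatment effect reported by trial 1
    {"effect": -0.10, "se": 0.10},   # trial 2
    {"effect": -0.25, "se": 0.20},   # trial 3
]

# Each study is weighted by the inverse of its variance: w_i = 1 / SE_i^2
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```

The arithmetic is trivial once the trials report a comparable effect measure; the hard, error-prone part today is extracting comparable numbers from studies run on heterogeneous protocols, which is precisely what a common template would ease.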

“The possibilities are almost mind-boggling,” says Combs.

Optimizing operations and design criteria with quantitative design

The concept of quantitative design “means many things to many people,” says Lee. But at its core, quantitative design employs recent advancements in data management, computer modeling, and predictive analytics to improve a trial’s operational efficiency and chance of success.

Quantitative design employs a “data-driven approach” to determining the best path forward in a drug development program, says Venkat Sethuraman, associate principal and the global clinical lead within ZS Associates’ R&D excellence practice.

It’s like Google Maps, explains Sethuraman. When navigating using Google Maps, “Google tells you there are four ways for you to get to a particular destination,” he says. Once you see all the options laid out, “you can choose which direction you want.”

Today, through machine learning, computer algorithms can sift through massive amounts of historical clinical trial data. Those algorithms can then predict, with a high level of accuracy, which trial design features will take the longest, which will be the most expensive, and which are likely to result in trial success.

That means, instead of generating a trial design based on the experience of a company’s most senior clinical research scientist, studies would be designed by looking at data on what has worked in the past, and using that data to make predictions about the future.

For example, says Lee, one of the most fundamental questions when planning a new clinical research study is that of statistical power. Namely, how many patients need to be enrolled so that the trial can achieve statistical significance? “That is a calculation that is based on assumptions that could be informed quantitatively, or through data,” says Lee. “But oftentimes that’s done in a more informal way,” based on the experience of the lead scientist, he notes.
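The power calculation Lee mentions has a standard closed form for a two-arm comparison of means, so it is straightforward to drive with data rather than intuition. A minimal sketch under the usual normal approximation (the effect size and error rates below are illustrative assumptions, not recommendations):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Patients per arm needed to detect a standardized effect size d
    in a two-arm trial (two-sided test, normal approximation):
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "medium" standardized effect (d = 0.5) at 80% power:
print(n_per_arm(0.5))   # 63 patients per arm

# Halving the assumed effect roughly quadruples the required enrollment:
print(n_per_arm(0.25))  # 252 patients per arm
```

A quantitative design tool would differ mainly in where the inputs come from: the assumed effect size and variance would be estimated from historical trial data rather than chosen informally.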

Another application could be the use of quantitative analysis to determine which cancer indication to pick first when designing the clinical research program of a new experimental oncology therapy. For example, is the new candidate more likely to succeed in melanoma or lung cancer? Today, those decisions are “primarily based on what the physician thinks,” says Sethuraman. “It’s not very data-driven. I think there’s a huge opportunity for machine assistance.”

And with so many massive Phase III trial failures in recent years, Lehmann says there has been “a big push” to “do a better job of feasibility assessment.”

“There also seems to be a shift on the predictive part on the operational side,” says Lehmann. “People are thinking to use their operational data to not only manage the trial but also to [ask], ‘What sites are likely to enroll on time? At what point in the trial do I want to have an intervention because I want to keep things on track?’”

It’s still early days for quantitative trial design, with companies like Medidata and ZS offering their clients tools and apps to strengthen and streamline studies from the very first design decisions.

“Civilization is increasingly driven by models and data, so I think this theme is being adapted into the clinical trial space as well,” says Lee. “How do we do what Netflix or Amazon has done on the consumer side?”

“The technology is there,” says Lehmann, “but the question is: is it actually changing the way [companies] operate, or are they still relying on instincts?”

Enhancing an ethical obligation to patients with a synthetic placebo

Every year, thousands of hopeful patients enroll in clinical research studies, only to be given a placebo intervention. This feature of randomized trials, the placebo control arm, is necessary to determine whether an intervention has a real effect on patient outcomes.

But placebo-controlled trials have long created hang-ups in clinical research. Most patients would rather take an active drug than a placebo. In addition, there are ethical implications to consider when giving a placebo treatment to a patient with a potentially deadly disease.

Over the past several decades, pharmaceutical sponsors have collected an unprecedented amount of placebo-controlled data. This data indicates how different types of patients, from cancer to diabetes to psoriasis patients, fare while taking nothing more than a sugar pill.

Today, the industry is hoping to leverage that historical data, comparing those outcomes against the experimental therapy in lieu of a placebo control arm. In fact, in 2017, researchers used Medidata’s de-identified database of 3,000+ trials to create a “synthetic control arm” for a Phase I/II single-arm trial in acute myeloid leukemia (AML).

Although study authors noted that synthetic controls are “not ideal,” they also pointed out that they may be “much more efficient and economical” than traditional placebo-controlled studies. They concluded that using synthetic control arms to predict which patients will respond “can help build more efficient and more informative adaptive clinical trials.”
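Medidata’s actual methodology is more sophisticated than anything that fits here, but the core idea of a synthetic control arm can be sketched with a toy one-nearest-neighbor match: each enrolled patient is paired with the most similar historical placebo patient on baseline covariates, and that patient’s outcome stands in for the missing control observation. All data below are simulated:

```python
import math
import random

random.seed(0)

# Entirely simulated toy data: each patient is a pair of standardized
# baseline covariates (think: age and a baseline lab value).
historical_controls = [(random.gauss(0, 1), random.gauss(0, 1))
                       for _ in range(500)]       # prior-trial placebo patients
control_outcomes = [0.3 * c[0] + random.gauss(0, 1)
                    for c in historical_controls]  # their observed outcomes
trial_patients = [(random.gauss(0, 1), random.gauss(0, 1))
                  for _ in range(40)]             # single-arm trial enrollees

def nearest_control(patient, controls):
    """Index of the historical control closest in covariate space."""
    return min(range(len(controls)),
               key=lambda i: math.dist(patient, controls[i]))

# Build the synthetic arm: each enrolled patient is matched 1:1 to the most
# similar historical placebo patient, whose outcome serves as the control
# observation the single-arm trial never collected.
synthetic_arm = [control_outcomes[nearest_control(p, historical_controls)]
                 for p in trial_patients]
print(len(synthetic_arm))  # 40: one matched outcome per trial patient
```

Real implementations use richer matching (propensity scores, for instance) across many more covariates, and no matching scheme fully recovers the guarantees of randomization.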

In addition to Medidata’s efforts, TransCelerate is working to maximize the value of existing placebo and standard-of-care data through an initiative called DataCelerate. According to TransCelerate, this initiative could improve trial design, deepen researchers’ understanding of specific patient populations, and streamline trial execution.

And in December 2018, then-FDA Commissioner Scott Gottlieb indicated the agency’s willingness to explore the use of “real-world” electronic medical record (EMR) data in the context of clinical research.

Unlike data from the placebo arms of prior clinical trials, so-called real-world data (RWD) is typically de-identified patient data gleaned from the EMR. RWD tends to be messier than highly sanitized clinical trial data. Nevertheless, there is so much RWD currently sitting unused in EMR systems that researchers are working out how to build computer analytics models that can safely interpret and leverage it.

As Gottlieb wrote in his statement last year, “these opportunities are already being recognized. In the oncology setting, for example, we currently have new drug applications under review where [RWD] and [real-world evidence] are helping to inform our ongoing evaluation as one component of the total complement of information that we’re evaluating. This is especially relevant when it comes to the evaluation of treatments for uncommon conditions, such as very rare tumor types.”

Lee adds that the evaluation of synthetic data or RWD is something the FDA is growing increasingly willing to consider, particularly in oncology and other high-stakes settings.

Lehmann, meanwhile, notes that the use of RWD could also be extremely helpful in common diseases, such as cardiovascular disease, diabetes, and other indications “where there is already a significant dataset that is out there to use as a comparison.” Ultimately, Lehmann says, “it comes down to the ability for the biostatistician and the sponsor to have trust in the data and believe it is a relevant comparison to their therapy.”

Not everyone shares this optimistic outlook when it comes to synthetic and real-world trial data. “You can’t replace randomization,” says Pocock. “In order to get a fair comparison of two treatments, you need to randomize patients one to another. Comparative effectiveness via big data is a bit of a con.”

But synthetic control arms might come into play for pharma sponsors during very early stage trials, long before a regulatory decision needs to be made, says Lee. “At the end of [Phase I] trials, you need a go/no-go decision,” says Lee. “But if you don’t have a control arm, it’s kind of hard to assess whether or not you think the new drug will perform better than standard of care. The internal go/no-go decisions can be greatly informed by the use of a synthetic control arm.”

“The key is, the pendulum cannot go completely to the other end,” says Sethuraman. “The RCT cannot be completely replaced.”


Sony Salzman is a freelance journalist who specializes in health and medical innovation. She can be reached at


© 2024 MJH Life Sciences

All rights reserved.