How to assess common obstacles for opportunity and create more agile and adaptive clinical trials.
Clinical trials are rife with challenges. Bringing a drug or device successfully to market is a complex, multi-tiered process full of potential pitfalls, many of which have kept effective medications from ever reaching the market.
More than 90% of Phase I clinical trials are unsuccessful.1 By Phase III, the success rate improves to a paltry 50%, and each of those failures still carries with it the weight of all the research that led up to that point: time and cost, not to mention patient effort.2
While some clinical trials fail to meet objectives because the drug in question lacks efficacy or raises safety concerns, failures are sometimes caused by incorrect assumptions, feasibility complications, operational challenges, or other secondary obstacles. Consequently, in cases where the issue is something other than the pharmaceutical itself, turning around a struggling clinical trial should be of paramount importance.
The growing complexity of studies and stringent regulations (both critical to balancing innovation and safety) can make mid-stream changes seem almost impossible. There are, however, windows of opportunity where clinical trials in crisis can be rescued, leading to new treatments for patients that might otherwise never have made it to market.
As the pharmaceutical industry continues to stretch towards a more agile and adaptive approach post-2020, it is more important than ever that we take a nimble approach to struggling clinical trials, including assessing every obstacle for opportunity.
What clinical trial teams struggle with, and how much can be adjusted, depends heavily on the data that has been collected to date and, therefore, on where the study sits in a timeline that can be broken down into four stages: before the study begins, while the study is ongoing, after the study is completed, and at publication.
The planning stage before a study begins usually takes six months or longer and is focused primarily on the study design, protocol development, and study initiation, so issues that arise are often related either to operational challenges or to new information in the published research landscape. Since enrollment has not yet started, it is relatively easy to pivot or make changes to the design of the clinical trial when issues are discovered.
Feasibility can be a stumbling block at this stage. Institutional review boards (IRBs) may challenge a protocol during early discussions, forcing a sponsor to make amendments. Or sites may flag certain tasks as not feasible as designed.
For example, in a clinical study investigating an immunosuppressive drug, monitoring blood pressure is important to ensure patient safety. The protocol for this particular clinical trial instructed the nurse to repeat the blood pressure measurement three times with a five-minute break between any two measurements. The protocol also stated that the difference between the three measurements should be within 15 mmHg and that the average of the three measurements should be used as the blood pressure value for the visit.
During site initiation, this procedure was tested at several locations. About 25% of the participants had blood pressure measurements that fell outside the acceptable range of 15 mmHg. The observed cases were carefully evaluated, and it was determined that the 15 mmHg range was too tight. The protocol was revised before the study started, preventing what would likely have been many protocol non-compliances.
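The measurement rule described above can be sketched as a small acceptance check. This is a minimal illustration, not the study's actual system: the function name, the hypothetical systolic readings, and the configurable spread limit are all invented for the example.

```python
from statistics import mean

def visit_blood_pressure(readings, max_spread=15):
    """Apply the protocol's rule: three readings, acceptable only if
    their spread is within max_spread mmHg; the visit value is the mean.
    Returns None when the readings would be a protocol deviation."""
    if len(readings) != 3:
        raise ValueError("protocol requires exactly three measurements")
    if max(readings) - min(readings) > max_spread:
        return None  # spread too wide: out of the acceptable range
    return mean(readings)

# Hypothetical systolic readings in mmHg
print(visit_blood_pressure([120, 128, 124]))       # spread 8 -> average 124
print(visit_blood_pressure([110, 130, 118]))       # spread 20 -> None
print(visit_blood_pressure([110, 130, 118], 25))   # passes under a looser limit
```

Making the limit a parameter mirrors the amendment in the example: the same data that fails under 15 mmHg can be re-evaluated under a revised, looser threshold before the protocol is finalized.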
After a protocol has been finalized to reach a particular target, study teams may also discover that new research has been published that impacts the clinical trial design or makes the study less meaningful. Oftentimes, the development program is stopped and the resources are invested in other priorities. In some cases, however, such a development in the published research is an opportunity to take a fresh look at the assumptions, target endpoint, and protocol.
A new drug was recently under development to treat Alzheimer's disease. To speed up the development process, the Phase III protocol was created before the proof-of-concept study was complete and was based on an assumed treatment effect of 40% against placebo. Meanwhile, a competitor released a similar product with a treatment effect of a little below 40% against placebo. When the proof-of-concept study was completed, the observed treatment effect was over 35%.
In this case, after taking all the information into consideration, the study protocol was revised to a non-inferiority study with the competitor's product as the active control. This revision required no placebo group, which on its own benefited the participants with Alzheimer's disease, and it enabled the study team to confirm the treatment effect of the investigational drug.
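A non-inferiority comparison of this kind is often analyzed with a one-sided z-test on the difference in response proportions. The sketch below is a hedged illustration, not the actual study's analysis: the responder counts, sample sizes, and 10-percentage-point margin are entirely hypothetical.

```python
from math import sqrt

def noninferiority_z(resp_t, n_t, resp_c, n_c, margin):
    """z statistic for H0: p_test - p_control <= -margin
    (i.e., the test drug is inferior by more than the margin)."""
    p_t, p_c = resp_t / n_t, resp_c / n_c
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return (p_t - p_c + margin) / se

# Hypothetical data: 140/200 responders on the test drug,
# 144/200 on the active control, margin of 10 percentage points.
z = noninferiority_z(140, 200, 144, 200, 0.10)

# Non-inferiority is declared if z exceeds the one-sided 5% critical value.
print(z > 1.645)  # -> True for these hypothetical numbers
```

Note the design trade-off the article describes: because the comparison is against an active control rather than placebo, the margin must be justified clinically, typically from the control's own historical effect against placebo.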
When new information arises at this stage, the study team should reestablish what is meaningful in light of the new information and determine whether the data to date support an adjusted plan.
Once a study has begun, it is relatively more difficult to address issues within the study design without potentially introducing operational biases. However, evaluating the assumptions on which a study design is based at key time points during the clinical trial can result in a more responsive, resilient study that is less prone to operational failure.
Every clinical trial relies on assumptions based on historical information about the disease, the indication, and/or data from previous clinical trials. The question to ask is how close these assumptions are to the true situation. Performing interim analyses at key time points allows study teams to validate the assumptions made at the beginning so appropriate actions can be taken to mitigate potential risks.
If the deviation between expectation and reality is too large, the study team can consider whether pivoting and making protocol adjustments is worthwhile, or whether it makes more sense to terminate early and save resources for more viable projects. If the interim results are very strong, a new drug application (NDA) could even be submitted sooner to get drug approval, and treatments to patients, faster.
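One common way to quantify the "deviation between expectation and reality" at an interim look is conditional power under the current trend, using the B-value formulation of Lan and Wittes. The sketch below is illustrative only; the interim z statistic and information fraction are hypothetical, and a real trial would pre-specify the interim plan.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def conditional_power(z_interim, info_frac, z_alpha=1.96):
    """Probability of final success given the interim data, assuming the
    effect seen so far (the 'current trend') continues to study end."""
    b = z_interim * sqrt(info_frac)   # B-value at the interim look
    drift = b / info_frac             # drift implied by the current trend
    num = b + drift * (1 - info_frac) - z_alpha
    return norm_cdf(num / sqrt(1 - info_frac))

# Hypothetical interim: z = 1.2 with 50% of the information collected.
cp = conditional_power(1.2, 0.5)
print(round(cp, 3))  # roughly 0.35: a warning sign, but not yet futile
```

A very low conditional power supports early termination; a very high one can support the kind of accelerated filing the paragraph above describes. Decision thresholds (for example, futility below 20%) must be set in advance with the trial's statisticians.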
Take, for example, a study that has inclusion/exclusion criteria specified for a particular patient population but is struggling with very slow recruitment. From an operational standpoint, it makes sense to loosen the inclusion/exclusion criteria to get more people in a shorter amount of time. Such a change, though operational, can have a significant scientific impact.
When the inclusion/exclusion criteria are changed in the middle of a study, it creates two different patient populations, before and after the change, which can have a ripple effect on the data. The important question to answer is whether the data from the revised patient population will still enable the study team to establish efficacy against the original endpoint.
In other words, is the new population close enough to the original population that, when the two are pooled for analysis, the study team can be confident the conclusion from the pooled data will reflect, or come very close to, the true situation? If, for any reason, the study team cannot determine whether the two populations are close enough from a clinical point of view, and no historical data or literature are available to make the assessment, an interim analysis may be introduced to evaluate the similarity of the two populations. This kind of analysis can reduce the likelihood of insufficient data down the line.
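Such a similarity check might, for instance, compare a baseline characteristic between patients enrolled before and after the amendment. Below is a minimal sketch using a Welch-style two-sample z-test on invented ages; a real interim analysis would be pre-specified, cover many characteristics, and involve far more rigor.

```python
from math import erf, sqrt
from statistics import mean, stdev

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means, Welch-style standard
    error. A small p-value flags populations that may not pool cleanly;
    a large one does not by itself prove similarity."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical baseline ages, enrolled before vs. after the criteria change
before = [62, 58, 65, 70, 61, 59, 66, 63]
after = [55, 52, 60, 57, 54, 58, 53, 56]

p = two_sample_p(before, after)
print(p < 0.05)  # True here: the two cohorts differ markedly in age
```

With these invented cohorts the test flags a clear difference, exactly the situation where pooling the pre- and post-amendment data could distort the efficacy conclusion.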
With careful planning, an interim analysis can be a powerful tool to verify key assumptions, evaluate operational changes, and adjust the study design accordingly using an enrichment design or other such approaches.3
It is not uncommon for health authorities to challenge a study once it is finished. When a study is already completed, with data collected and analysis finished, there are far fewer actions that can be taken if the original study fails to demonstrate a sufficient treatment effect or provide satisfactory answers to health authority questions. A common solution is to conduct a brand-new study, which can cost millions of dollars and take years before the required insight is attained.
Rather than starting from scratch, sponsors can consider creative approaches to follow-on studies, such as using real-world evidence data and subgroup analyses to provide additional supportive evidence and enhance the conclusions of the original clinical trial.
For example, Pfizer developed a COVID-19 vaccine and submitted it to the FDA for emergency approval before generating long-term safety data. At the FDA Advisory Committee meeting for the emergency approval, four members voted "no" due to insufficient data in the 16- to 18-year-old age group. Although the vaccine was approved for emergency use, the question of whether the vaccine was efficacious for 16- to 18-year-olds still needed to be answered.
A straightforward solution would be to conduct another study in that age group, which would likely take a year or more. As an alternative approach, the real-world evidence data for the Pfizer vaccine collected in Israel (the first country to reach an 80% fully vaccinated rate) could be used as supportive data to demonstrate the vaccine is helpful for 16- to 24-year-olds.4 (See the figure below.)
Real-world evidence data are usually not collected in a randomized or well-controlled way, so it isn't always easy to identify information with suitable characteristics, but real-world data sources are broader and usually contain a larger volume of information than clinical trial data.
In some cases, especially if a pivotal study is at risk of failure, the benefit of analyzing real-world data is worth the difficulty.
Sometimes, a clinical trial just doesn't make it, no matter how well designed the study or how agile the study team. Even if a clinical trial fails to support the original objective, there are still opportunities to find value in the questions asked, data collected, or assumptions disproven.
Resourceful study teams also use the publication phase as a beginning rather than an end, assessing the data for the next opportunity. What else can you do with the data? Did the clinical trial reveal the drug could be of use for a subgroup if not the whole? What about other indications? Are there trends that suggest a natural next step or new directions for the product?
While nothing at this stage would save the original study, there is sometimes a way to use it as a foundation, building and improving upon what has already been completed. Publishing the results of a failed study takes time but can provide crucial insight for both the sponsor and the common good.
Today's clinical trials are incredibly complex, and study teams, especially those working on innovative or ground-breaking trials, must rely on scientifically based assumptions during the planning stage. Consequently, unexpected challenges often arise. Careful planning before the start of a trial can greatly smooth study conduct but does not remove the need to continually test assumptions and make the adjustments necessary to reach a meaningful conclusion.
When developing the study protocol, it is helpful to evaluate the assumptions against the latest information, asking: Is the study design still supported by the most recent published research? Is the target endpoint still meaningful? Are the protocol's procedures feasible at the sites as designed?
Once the trial is ongoing, carefully evaluate any possible changes to the study design and assess the risks involved in such changes. Finally, real-world evidence data can be considered as a supplemental data source to design and/or support a clinical trial when appropriate.
Ultimately, the goal is much bigger than rescuing a struggling clinical trial. The goal is to develop solutions to medical problems as quickly, efficiently, and completely as possible. To do so, study teams must be willing to take a strategic and nimble approach that includes assessment, enrichment, and solutions-based modifications as appropriate.
DJ Tang, PhD, is the Senior Vice President of Data Services for Firma Clinical