Industry is set to decide whether COVID-19 trials are the new standard.
Our apologies to those who still wince when reminded of the following figures.
But because these data illustrate successes, or lack thereof, of clinical trials from the past, they serve as an outcomes baseline for the future: Will future outcomes be based on the types of protocols used in the COVID-19 vaccine trials, or will results be a carbon copy of pre-2020 study figures?
Let’s get past the wincing first.
Between 1998 and 2015, Hwang et al. found that of 640 novel therapies, 54% (344) failed in development; the FDA approved 36% (230). Most failed because they didn’t reach their endpoints, 17% were pulled for safety reasons, and the remainder were abandoned for commercial reasons. The common denominators among the approved therapies: orphan designations and mega-sized company sponsors. The remaining 10% were approved only in other countries.1
Postapproval figures are also worthy of note: Of 222 novel therapies approved between 2001 and 2010, 123 were linked to a postmarket safety event; three were withdrawn from the market, 61 received boxed warnings, and the rest were the subject of an FDA safety communication.2
Techniques demonstrated in COVID-19 trials, many of them decades old, could help stop the wincing. A shift could happen, said those interviewed, by using different trial designs, such as master protocols; using biomarkers regularly to ensure accurate populations; adopting a new mindset about the purposes of Phase I and Phase II trials; and giving investigators the time and tools to learn new skills.
For example, the Recovery trial, the University of Oxford’s mammoth intent-to-treat COVID-19 study, has various stakeholders, including the Gates Foundation and the Wellcome Trust. Designed for flexibility so investigators can make real-time decisions once an arm produces data, be it thumbs up or down, the protocol puts no restrictions on enrollee numbers or their ages. When dexamethasone proved helpful, it became a standard of care. When the opposite proved true of hydroxychloroquine, that trial arm was dropped. “There was a rolling review,” said Brigid Flanagan, founder of Oriel Research Services, Ltd. “There is no reason it can’t be done going forward.”
Different stakeholders have different definitions of success, said Bill Barry, PhD, senior statistical research scientist, Rho Inc. They will all approach the trial design differently—intent to treat? Delineate which subset will see the best treatment results? To make an umbrella or basket trial work, they need one strategy to rule them all. “Reaching consensus is the challenge.”
So while the technology is willing, the human buy-in could be termed weak. Humans tend to act unscientifically on occasion—like extending a trial whose drug has no future because of heavy emotional investment. That is not evidence-based decision making, said Mike Rea, founder and CEO of Protodigm, which advocates using early trial phases for exploring what the molecule or therapy can be used for, and in what diseases, including subsets. “If you’re just collecting information to say yes,” the trial will ultimately fail, he said. “It happens a lot.”
Trials today are more tailored than necessary, said Ken Getz, MBA, founder and board chair, CISCRP and director and professor, Tufts Center for the Study of Drug Development, Tufts University School of Medicine. “We see trials so highly customized, introducing approaches that are unique. [But] when you try and measure at the macro level, you see variation.”
Shorter trial times. Improved patient safety. Less bias, fewer ethical concerns. More risk modification. Especially if biomarkers are involved.
In traditionally run trials, specimens go to a lab and investigators wait for the results. Today, adaptive design allows investigators to add, drop, or expand arms, depending on the data, all in real time. All these, said Barry, speed up the trial’s progress. “Biomarkers give incredible new information. Once you incorporate a biomarker into a study, there is less risk of bias.” Adaptive design is also considered more ethical by some because patients have better odds of getting effective treatment, quickening the pace of treatment discovery.3
And it speeds up finding safety issues. “In general, by being able to make changes while the study is underway, patients with safety issues can be discontinued in that cohort, or moved to another group, so it exposes them to less risk because it’s all done in real time. And because the adaptation strategies were approved before the trial started, there is no waiting for additional regulatory approvals,” said Getz.
The Stampede trial, which is evaluating eight treatments over 15 years for locally advanced or metastatic prostate cancer, would—in separate two-arm trials—have taken more than 40 years to complete, according to investigators at the University of Oxford. Progression between phases is reportedly glass-smooth. It started with six arms, but has since dropped two and added three more. One researcher wrote that, had these therapies been tested traditionally, they “would simply never [have been] cost-effective for funding outside the program.”4
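As a rough illustration of the adaptive mechanics described above, a multi-arm design with interim futility looks can be simulated in a few lines. This is a minimal sketch with invented response rates and thresholds, not any trial’s actual protocol:

```python
import random

random.seed(1)

# Hypothetical three-arm trial; the response rates are invented for the sketch.
TRUE_RATES = {"arm_A": 0.45, "arm_B": 0.20, "arm_C": 0.35}
FUTILITY = 0.25    # drop an arm whose observed response rate falls below this
LOOK_EVERY = 50    # interim analysis after every 50 patients per arm
MAX_N = 200        # maximum patients per arm

def run_trial():
    active = set(TRUE_RATES)
    results = {arm: [0, 0] for arm in TRUE_RATES}  # [responders, enrolled]
    enrolled = 0
    while enrolled < MAX_N and active:
        # enroll a block of patients into every still-active arm
        for _ in range(LOOK_EVERY):
            for arm in active:
                results[arm][1] += 1
                if random.random() < TRUE_RATES[arm]:
                    results[arm][0] += 1
        enrolled += LOOK_EVERY
        # interim look: drop futile arms "in real time," per the pre-approved rule
        for arm in list(active):
            responders, n = results[arm]
            if responders / n < FUTILITY:
                active.discard(arm)
    return results, active
```

The point of the sketch is the pre-specified decision rule: the adaptation happens at each interim look without a new protocol, which is what spares patients further exposure to a failing arm.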
Serial assessments can expand a hypothesis from “does it work for patient Y?” to a more nuanced understanding. You can ask more complex questions, Rea said.
Early phase trials, said Rea, should be exploratory; learn what the drug or molecule can do, do not try to force data. The real risk, he said, is in missing opportunity because the molecule or drug wasn’t asked the right questions. Success involves preparation—determining which molecular or therapeutic pathways seem viable, and then making decisions as the data come in. “Our biggest observation is that there is not enough observation in Phase I.”
But some hesitate over costs. Adaptive trials shouldn’t cost more money, said Rea: “If you do the characterization early, it actually saves money because you’re not doing the failed phase.” A well-designed plan should also lead to easier recruitment.
Traditional trials that become overburdened with complexity are more customized, which also lengthens the process. Said Getz, “[Customization] is the enemy of speed and efficiency.” The longer adaptive trials stay out of the mainstream, Getz says, the longer database lock cycles will continue to expand. Trials now have at least three data sources, adding an average of five days to their database lock; Getz has written that the number of days will increase without, at the least, infrastructure improvements.
An advantage of biomarkers is that the baseline biomarker could be predictive, said Barry. “A blood draw can measure how enrollees are reacting to the drug. Blood draws are cheaper, and the readout can happen quickly, and consequently, so can decisions to change strategies. That is a key element.”
Assays can be expensive, but they continue to go down in price, Barry said. The potential gains are more efficient, smaller trials. “Doing an enrichment study on only biomarker-positive people can be much cheaper than testing in a full population,” Barry said.
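Barry’s enrichment argument can be illustrated with a standard back-of-the-envelope sample-size approximation. The effect size and biomarker prevalence below are invented for the sketch, not drawn from any trial:

```python
# If a drug works mainly in biomarker-positive patients, an all-comers trial
# dilutes the effect and needs many more patients per arm than an enrichment
# trial that screens for the biomarker first.
Z_ALPHA = 1.96    # two-sided significance level of 0.05
Z_BETA = 0.8416   # 80% power

def n_per_arm(effect_size: float) -> float:
    """Approximate patients per arm for a two-arm trial (normal approximation)."""
    return 2 * ((Z_ALPHA + Z_BETA) / effect_size) ** 2

d_positive = 0.5    # hypothetical standardized effect in biomarker-positive patients
prevalence = 0.25   # hypothetical biomarker prevalence
d_diluted = d_positive * prevalence   # effect averaged over an all-comers population

print(round(n_per_arm(d_positive)))   # enrichment trial: dozens per arm
print(round(n_per_arm(d_diluted)))    # all-comers trial: an order of magnitude more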
There is another issue with consistent biomarker use, said Flanagan: patient accessibility. Most patients do not go to academic health centers. If they have no transportation, they will go to a local physician, who likely won’t explore a biomarker-based treatment. “I think people are pressed for time. In academic centers, researchers have assistants who can look at biomarkers.”
That is, if they can be gathered relatively easily. The known Alzheimer’s biomarkers (Aβ1-42, T-tau, and P-tau-181) appear in cerebrospinal fluid, not an easy or painless draw. And PET scans, said Laurie Ryan, PhD, chief of the Clinical Interventions and Diagnostics Branch in the Division of Neuroscience at the NIA, are expensive. She and her peers are awaiting approval of plasma markers, which will make screening participants a lot quicker, in a trial and in the clinic. “We are heading in that direction…biomarkers are hugely important.”
That written, the NIA is using adaptive design in other areas; it is funding pilot studies with embedded pragmatic trials. But pragmatic studies have different roadblocks. Because these trials are run wherever the data are gathered (a patient’s home, a doctor’s office, a healthcare system), the reliance on EMRs is often frustrating.
The barrier with EMRs, said Flanagan, is that they weren’t designed to store patient data destined for clinical trials; they were created for billing purposes, so the data aren’t clean.
Federal regulatory and research institutions, here and abroad, have long been advocates, and often the underwriters, of trials with master protocol designs. In the early 2010s, the NCI and the UK’s National Institute for Health Research were funding a handful of trials.5 In the UK, the first master protocol approach—using multiple treatments against one control arm—was Stampede, which started recruiting in the mid-2000s. The first basket trial that made the history books came in 2001 with the approval of Gleevec (imatinib).
Just recently, the FDA published its final guidance for master protocol-designed trials for COVID-19 treatments.6
Stakeholders have at least one toe in the water. A 2019 review found that of 5,689 trials, dating to the early 2000s, 83 had umbrella, basket and platform designs. What was significant, the authors said, was that half of the trials had begun in the five years prior to the study.7
During the early days of the pandemic, the NIH launched RADx, or Rapid Acceleration of Diagnostics, which propelled the COVID-19-testing collaborations between industry and regulators. By July 13, 2020, RADx received more than 600 applications. Within two months post-launch, 27 projects were in Phase I. As of May 2021, RADx was working with 33 companies.8
The NIH is proposing creation of ARPA-H (Advanced Research Projects Agency for Health), similar to the RADx model. NIH Director Francis Collins spoke before the Senate Subcommittee on Health and Human Services in late May about the proposed $6.5 billion program. He said it will have project managers who partner with private businesses, big and small, and then “ride herd over these things carefully so that if they’re not doing well, they get basically stopped immediately.” It’s a high-risk project, he said; failures are likely, but the goal is to “identify the areas of greatest opportunity.”9
For Rea, linear trial design is nearly pointless. “The high failure rate that we’re seeing is because of the model which sees this as attrition. But the industry’s OK with it because some things make it through.”
Rea used the history of Cymbalta (duloxetine) to illustrate why repurposing Phase I and II is necessary. The FDA approved duloxetine in 2004 for treatment of depression, and has since approved six other indications, including diabetes-related neuropathic pain. Some 270-plus studies—from recruitment to completion—are listed in clinicaltrials.gov for many possible indications.
These approvals and new studies, said Rea, likely wouldn’t have come about if Eli Lilly, maker of duloxetine, had gone the traditional route, trial-design wise. Lilly looked at duloxetine “very broadly.”
In the beginning there were clues, he said, that this serotonin and norepinephrine reuptake inhibitor mitigated pain. “The interesting thing they did in Phase I/Phase II was to see what that effect in pain looked like.” The subsequent trial examined whether duloxetine worked in depression-related pain and non-depression-related pain. The results led to a Phase III on pain, which in a “traditional model would not have happened.” The endpoint would have been effect on depression, and nothing more.
“That kind of behavior is exquisite, because you’re using an early stage for what it is good for...finding out where the drug works.” Using the word “failure,” he said, is counterproductive.
Flanagan said that if trial investigators can get the data they need regarding safety and efficacy, they don’t need to do large Phase IIIs; Barry said the HER2-positive trials are classic examples of not needing large Phase III trials.
Researchers new to using adaptive design should not wade into these waters without comprehensive instruction. “Knowing when and when not to adapt a trial, how to strategically plan for logistical challenges and how to conduct complicated interpretations are skills of the expert biostatistician, not the novice.”10
Wrote Burnett et al.: Adaptive designs are not mainstream for many reasons, including the lack of expertise and experience among clinicians, trialists, and statisticians; design and analysis software; time to plan and analyze; and preferring the known to the unknown. The authors said they believe that investigators “aren’t clear as to what they are, what can be learned and how to apply them.”11
The NIA, and others, are helping clear up these mysteries.
In 2020, the NIA began supporting a two-tiered program to train those needed to run these types of trials successfully. The programs, run out of the University of Southern California, include instruction on diversity, both for the trial team and the enrollees, said Ryan. If study staff don’t represent the trial’s participants, it can impact enrollment, she said. The second tier is for current trial professionals who need to learn about biostatistics and other subjects; the course is open to graduate students, PhDs, and MDs, the whole range of professionals that make adaptive trials work, she said.
“Medical and graduate schools, by and large, don’t offer this type of training,” Ryan said, an important reason why the NIA supports it. “This isn’t standard school training.”
Christine Bahls, Freelance Writer and Researcher