Phase III Failures: What Can Be Done?

Applied Clinical Trials

September 1, 2005

Much has been written about the staggering costs of drug development and how low Phase III success rates across the pharma industry have contributed to these costs. While safety outcomes explain many failures during early development and have likewise played a prominent role in some highly publicized product withdrawals, efficacy failures in Phase III have received little attention. What we now know, however, is that a significant number of Phase III failures are attributable neither to issues of safety nor to product differentiation, but to an inability to confirm efficacy against placebo.

Mohan Beltangady

What's going on here? If activity compared to placebo and dose–response relationships were established in smaller and less expensive Phase II trials, why are we failing to confirm these results in larger Phase III trials? Are we rushing to initiate such studies before we have sufficiently robust evidence of activity and efficacy in early trials? Are we imprudently expanding our inclusion criteria to a larger patient pool beyond what we established in Phase II? Or does the problem stem from a reluctance to look critically at our Phase II data, our judgment perhaps clouded by optimism about our own drugs?

Reasons for failure

There are several possible reasons why Phase III trials may fail to confirm what we learned from our Phase II studies. Among these, perhaps the most significant is how we design these late-stage trials. Success in Phase III requires four critical attributes: 1) effective trial planning; 2) objective analysis of existing data; 3) clear assessment and articulation of risks to enable the right decisions; and 4) effective strategies to minimize risk in the event we choose to go forward. Absent any one of these factors, the likelihood of a successful trial outcome declines. Absent all of them, study failure should not be a surprise.

Fortunately, we are not without methods and techniques to address these issues. Consider just one: flexible trial designs. Traditionally, clinical trials have been designed to measure an expected treatment benefit. We design our trials and write our protocols. We recruit, randomize, and ultimately treat our subjects according to that protocol. Over the course of a study, we conduct various laboratory and other tests, make a host of observations, and collect and compile reams of data. But not until the study is completed and the blind is broken do we analyze our data to determine whether the trial succeeded. What we now know, however, is that in more than 40 percent of cases, it did not. True enough, for some long-term mortality trials, or others involving serious morbidity, an independent committee generally reviews the interim results periodically and may end a trial for safety reasons or for better-than-expected efficacy, using appropriate statistical guidelines for interpreting the data so that the overall false-positive error rate is held at the 0.05 level. But such committees generally have not viewed futility, the "lost cause," as a reason to stop a study. Why not? Partly because pharma sponsors have not routinely asked them to do so, for fear that a potentially great medicine may be eliminated on the basis of premature data from interim looks.
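
To make the point about statistical guidelines concrete, here is a minimal Monte Carlo sketch, illustrative only and not any committee's actual procedure, contrasting naive interim testing at the conventional 1.96 threshold with the published O'Brien-Fleming boundaries for five equally spaced looks. The number of looks, the per-look sample size, and the simulation settings are assumptions made purely for the demonstration.

```python
# Sketch: why interim looks need adjusted boundaries. Repeatedly testing at
# z = 1.96 inflates the false-positive rate well above 0.05, while the
# O'Brien-Fleming group-sequential boundaries hold it near 0.05.
import numpy as np

rng = np.random.default_rng(42)
K = 5                 # number of equally spaced interim analyses (assumed)
n_per_look = 100      # new observations per arm between looks (assumed)
n_sims = 20_000

# O'Brien-Fleming two-sided boundaries for K = 5, alpha = 0.05:
# z_k = 2.040 * sqrt(K / k), i.e., roughly 4.56, 3.23, 2.63, 2.28, 2.04.
obf = 2.040 * np.sqrt(K / np.arange(1, K + 1))
naive = np.full(K, 1.96)

def rejection_rate(bounds):
    rejected = 0
    for _ in range(n_sims):
        # Simulate under the null: no treatment effect, unit variance.
        trt = rng.normal(0.0, 1.0, K * n_per_look)
        ctl = rng.normal(0.0, 1.0, K * n_per_look)
        for k in range(1, K + 1):
            n = k * n_per_look
            z = (trt[:n].mean() - ctl[:n].mean()) / np.sqrt(2.0 / n)
            if abs(z) > bounds[k - 1]:
                rejected += 1
                break
    return rejected / n_sims

print(f"naive 1.96 at every look: {rejection_rate(naive):.3f}")  # ~0.14
print(f"O'Brien-Fleming bounds:   {rejection_rate(obf):.3f}")    # ~0.05
```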

Flexible study designs offer a way out of this bind. They enable us to learn and confirm while we conduct our studies. During Phase II, they allow us to identify and abandon ineffective doses, thereby enabling us to offer future subjects a regimen that is more likely to yield a positive outcome. In contrast to Phase II studies, Phase III trials are typically designed with 80%–90% power based on an assumed treatment difference (delta) and variance estimate. For the same delta, depending upon our confidence in the variance estimate, we may design a small trial or, more often, a large one, just to be safe. But what if our assumptions are wrong? If so, our trial may be too large (and hence more expensive than it needs to be) or too small (failing to reach the 0.05 significance level) and therefore unlikely to furnish us with a positive result. In either instance, by failing to "right-size" our studies, we squander resources that we could better use elsewhere.
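
As an illustration, the sketch below applies the standard two-arm sample-size formula for a two-sided z-test on means. The delta of 2.0 and planning sigma of 5.0 are hypothetical values chosen only to show how an optimistic variance estimate erodes power.

```python
# Sketch: n per arm = 2 * sigma^2 * (z_{1-a/2} + z_{1-b})^2 / delta^2,
# and the power actually delivered when the true sigma exceeds the plan.
from statistics import NormalDist

nd = NormalDist()

def n_per_arm(delta, sigma, alpha=0.05, power=0.90):
    """Per-arm sample size for a two-sided, two-sample z-test on means."""
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    return 2 * (sigma * (z_a + z_b) / delta) ** 2

def achieved_power(n, delta, sigma, alpha=0.05):
    """Power actually achieved if the true sigma differs from the plan."""
    z_a = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(delta / (sigma * (2 / n) ** 0.5) - z_a)

n = n_per_arm(delta=2.0, sigma=5.0)  # planned assuming sigma = 5 (hypothetical)
print(f"planned n per arm: {n:.0f}")                                     # ~132
print(f"power if sigma is really 7: {achieved_power(n, 2.0, 7.0):.2f}")  # ~0.64
```

A trial sized for 90% power on the optimistic variance delivers only about 64% power against the same delta, which is exactly the "too small" failure mode described above.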

"Right-sizing" our trials during the planning stage is one of our most critical tasks as is making appropriate, well-informed adjustments throughout the course of our trials. Of course, if a sample size revision would render a trial prohibitively large and thus impractical, it would have the same impact as a futility decision. A small number of trials have deployed a "play-the-winner" strategy to randomize each subject based upon observed patient outcomes to provisionally decide the "winner" as the study proceeds in real-time. Several variations on this theme have been explored. In a few cases, information from not only the ongoing study's current cohort is used, but also from previous experience in a more rigorous manner using Bayesian methods. But such methods have not been used extensively by the pharma companies, in part because of the frequentist paradigm, and, in part because of (perceived or real) reluctance by regulators to accept these data as evidence of efficacy.

Regardless, the time has come for the pharma industry to employ many of these tools more aggressively than in the past, investing resources more judiciously where there are winners rather than chasing failing studies and failing drugs.

Mohan Beltangady, PhD, VP and Chief Statistician, Pfizer Global R&D
