Making adaptations to clinical trials in the early stages of research using interim data.
The title of this article comes from a presentation by Professor Andy Grieve at the adaptive designs workshop sponsored by PhRMA in 2006.1 The question is whether the current high interest in adaptive designs is similar to the craze for the hula-hoop during the summer of 1958. The hula-hoop became yesterday's fad when interest in the hoop plummeted in the fall of that same year.
By adaptive designs, we mean designs that use accumulating data to modify certain aspects of the trials.2 Potential modifications include terminating a trial early for efficacy or for futility, dropping or adding a dose or doses, changing the size of the trial, changing the ratio by which future subjects are randomized to the various treatment groups, and determining which treatment the next subject will receive.
For members of the pharmaceutical industry, the rationale behind adaptive designs is quite simple. Research and development of a pharmaceutical product is a continuous process. Results from one clinical trial guide the next step in a product's development program. We terminate the development if the results are unfavorable; we speed up the development if the results are exciting. If we adjust to new information and adapt after a trial, then why don't we adapt a trial based on interim data? Some have suggested that effective use of information in this manner could reduce the late stage failure rate.3
For all the flexibility offered by adaptive designs, they are not a substitute for poor planning. On the contrary, adaptive designs require much more upfront planning. In this article, we will concentrate on a few select adaptive designs. We will also discuss briefly current regulatory positions on such designs, as we are aware of them. We believe that with careful planning, rigorous execution, and collective experience of the scientific community, adaptive trials will not fade like the hula-hoop, but will become increasingly common in evaluating medical interventions in the next decade.
The need to monitor trials and make decisions based on interim data was recognized as early as the 1960s with the NIH-funded UGDP trial.4 Subsequently, the Greenberg Report,5 issued in 1967, marked the beginning of research on and adoption of group sequential designs (GSDs).
Under a traditional GSD, decisions such as continuing the trial as planned, declaring the efficacy of one treatment or stopping the trial for futility are made based on interim data. When multiple interim looks are made, naïve analyses could lead to increased risks of drawing erroneous conclusions. Therefore, one needs to employ special statistical procedures to control for such risks. A common approach is to choose a strategy on how the allowable risks (typically 5% for false positive and 10% for false negative in a confirmatory trial) will be spent over the multiple decision points.
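The spending idea can be illustrated with a small sketch. The function below implements an O'Brien-Fleming-type error spending function (one common choice among several); the number of looks and the information fractions are illustrative, not prescriptive.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def obf_spending(t, z_half=1.959964):
    """O'Brien-Fleming-type spending function (Lan-DeMets form):
    cumulative type I error spent by information fraction t,
    for a two-sided overall alpha of 0.05 (z_half = Phi^{-1}(0.975))."""
    return 2.0 * (1.0 - norm_cdf(z_half / sqrt(t)))

# Cumulative alpha spent at three equally spaced interim looks:
# very little is spent early, leaving most of the 5% for the final analysis.
for t in (1 / 3, 2 / 3, 1.0):
    print(f"information fraction {t:.2f}: alpha spent {obf_spending(t):.5f}")
```

The conservative early boundaries are what make it hard to stop a trial on the basis of a small, possibly unstable, early difference.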
Because decisions will be made based on interim results, it is often necessary to charge a Data Monitoring Committee (DMC) with this responsibility. Nowadays, various guidances exist to advise on the formation and operations of DMCs.6,7,8 Key issues include the relationship between sponsor and DMC, how the DMC activities should be documented, and how the sharing of interim results will be controlled to protect the integrity of the trial.
During the past 30 years, GSDs have evolved to become the standard design for many long-term trials involving mortality or life-threatening conditions.9,10
Finding the right dose for a promising drug is one of the most challenging scientific endeavors in clinical development. Carrying a sub-optimal dose into the late development stage reduces our chance for a successful confirmatory outcome. Additionally, the recommended doses of several drugs have had to be adjusted post launch due to safety problems.
In an adaptive dose-finding study, the trial sponsor could stop or start new doses at scheduled interim analysis times, minimizing patient exposure to harmful (too low or too high) doses. It is also possible to adaptively allocate more patients to some doses. In some adaptive dose-finding trials, the dose for the next patient is adaptively determined from prior data11 or chosen to help gather more information around a certain area of the dose-response curve.12
As an example, consider a Phase II dose-finding trial with two planned interim analyses, starting with five doses of drug X and an active control arm. At the first interim look, if the two lowest doses are judged to have suboptimal efficacy, they will be terminated. Patients in the next stage will be randomized to the remaining four arms. If the second interim analysis reveals an unacceptable safety signal with the highest dose, then the trial will continue with two doses and the control. The final analysis should incorporate the interim decisions and apply appropriate statistical methods. One could consider applying a model-based approach,13 recognizing that doses close to each other are likely to give more similar results than those far apart.
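A decision rule of the kind used at the first interim look can be sketched as follows. This is purely illustrative, not the trial's actual rule: the margin, dose labels, and interim means are all hypothetical.

```python
# Illustrative interim decision rule (hypothetical numbers throughout):
# drop any dose whose observed response does not exceed the control
# response by a pre-specified margin.

MARGIN = 2.0  # minimum improvement over control required to retain a dose

def interim_dose_selection(arm_means, control_mean, margin=MARGIN):
    """Return the doses retained after the interim look."""
    return {dose: m for dose, m in arm_means.items()
            if m - control_mean >= margin}

# Hypothetical interim response means for five doses of drug X
interim = {"5mg": 0.8, "10mg": 1.5, "20mg": 3.1, "40mg": 4.6, "80mg": 5.2}
kept = interim_dose_selection(interim, control_mean=0.5)
print(sorted(kept))  # the two lowest doses fall below the margin and are dropped
```

In practice the rule would be pre-specified in the protocol and applied by the DMC, and the final analysis would account for the selection, for example through a model-based approach as noted above.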
An alternative to the design above is to start with only the highest dose and the control. If the dose demonstrates sufficient effect at the first interim look, more doses will be added to help select the optimal dose based on risk–benefit considerations. In case the highest dose does not demonstrate adequate efficacy, this could be the basis for terminating the trial. This strategy merges proof of concept with dose-finding into one trial. When the effect on safety is mild and reversible, this strategy has some merit. On the other hand, if safety is a major concern, this strategy could be too risky and should generally be avoided.
A seamless design is a design that combines, into a single trial, objectives that are traditionally addressed in separate trials.14 An example is to combine dose selection (Phase II) and confirmation (Phase III) in one trial, removing the operational white space between the phases and therefore reducing the development time. Both stages could be combined in the final analysis using special statistical methodology to control the risk of making an erroneous efficacy claim. An example of an adaptive seamless design is given in Figure 1, where four doses of a Novartis drug, a placebo and two active controls were included in the trial. Two doses are to be chosen from the original four doses to enter into the confirmatory stage, along with the placebo and one of the two active controls.
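One commonly used way to combine the two stages in the final analysis while controlling the type I error is an inverse-normal combination test. The sketch below is illustrative: the stage sizes and z-statistics are hypothetical, the weights must be fixed before the interim look, and the multiplicity adjustment needed when doses are selected (e.g., a closed testing procedure) is omitted.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def inverse_normal_combination(z1, z2, n1, n2):
    """Combine stage-wise z-statistics with pre-fixed weights, here
    proportional to the square roots of the planned stage sample sizes.
    The weights may not be changed after seeing interim data."""
    w1 = sqrt(n1 / (n1 + n2))
    w2 = sqrt(n2 / (n1 + n2))
    return w1 * z1 + w2 * z2

# Hypothetical results: z1 from the selection stage, z2 from the
# confirmatory stage, with planned stage sizes 100 and 200.
z = inverse_normal_combination(z1=1.2, z2=2.1, n1=100, n2=200)
p = 1.0 - norm_cdf(z)  # one-sided combined p-value
print(f"combined z = {z:.3f}, one-sided p = {p:.4f}")
```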
A seamless Phase II/III design could yield more efficacy and dose information prior to triggering Phase III, as an increased investment in the Phase II stage may be better motivated if data from this stage could contribute to the final inference. By allowing a longer follow-up on patients enrolled in the first stage, more safety information from longer term exposure could be gathered.
The potential savings in time and cost have generated a lot of recent interest in seamless Phase II/III design. The seamless strategy, however, is not suitable for every development program. For example, drug formulation and toxicology work may need to be accelerated, and drug supply/packaging could be more challenging. If a biomarker is used to make dose selection, its predictive value should be well understood. The need for, and timing of, a second confirmatory trial must also be considered.
The greatest challenge of a seamless Phase II/III design relates to data review at the end of the dose selection stage. If the risk–benefit assessment for dose selection requires expertise not present in the DMC, the decision process could benefit from the sponsor's involvement. In this scenario, one could argue that the dose selection constitutes a learning stage, which should not be used to confirm the hypothesis under testing. The problem is more pronounced if many are involved in the dose selection decision, leading to a greater risk of information leak. Therefore, if there are many uncertainties about the product, a sponsor should consider a more traditional Phase II program to enable the learning.
At the moment, most of the Phase II/III designs are used to make a choice between 2-3 doses to be further tested in the trial. In this regard, the Phase II/III application is not all that different from a traditional Phase III trial with multiple doses, some of which will be dropped based on interim results.
Rob Hemmings, of the Medicines and Healthcare Products Regulatory Agency in the UK, asked the question, "Do regulators like adaptive designs?" at a recent workshop.15 Hemmings made it clear that regulators were not averse to adaptive designs as a matter of principle, but considered there to be risks. Early phase trials are typically conducted to aid the sponsor's internal decisions, and these trials are subject to much less regulatory scrutiny. GSDs have been well accepted by regulators. Some other adaptations, such as blinded sample size re-estimation, have also been accepted and integrated into many development plans. There has been limited experience with submissions based on seamless Phase II/III trials. More experience will also be needed to determine whether some of the adaptive designs will indeed deliver on their promises.
Considerable debate remains on the sponsor's role in a DMC. Sponsor involvement should not be the norm for confirmatory trials. The sponsor should justify and document in detail any such involvement, and follow pre-specified rules. Central to the discussion are the concerns for information leak and the potentials for operational bias. In this regard, it is advisable to compare the estimated treatment effect before and after the adaptations for consistency in a secondary analysis.
The blurring of the notion of learning and confirming in some seamless Phase II/III trials is a source of concern for regulators. Although product development has traditionally been divided into Phase I, II, and III, it might make more sense to describe development stages by their primary objectives and move away from the Phase I/II/III designations. There are indications that the FDA is moving in this direction.15 Under this construct, data from the two stages will generally remain separate to serve the purpose of the respective stages without being combined to draw formal inferences.
In the last section, we mentioned the wide acceptance of blinded sample size re-estimation. The sample size of a trial has traditionally been calculated based on the assumed effect size, the variability, the desirable power and the type I error rate. Before conducting a trial, we are often uncertain about the variability and treatment effect size. These parameters, nevertheless, have a huge impact on the sample size (N). Adaptive sample size re-estimation is an option to adjust N based on interim estimates of effect size and/or variability. This adaptation is not controversial if only blinded interim data (not using information on who received what treatment) are used to estimate the variability. However, to get a reasonable interim estimate of the treatment effect, treatment assignment must be known to individuals who conduct the analysis. The latter is much more involved and has been the subject of much recent research and debate.16
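The arithmetic behind a blinded re-estimation can be sketched with the standard two-sample formula. The planning values below (standard deviation, clinically relevant difference, 90% power, two-sided 5% alpha) are hypothetical, and the z-quantiles are hard-coded for those settings.

```python
from math import ceil

# Normal quantiles for two-sided alpha = 0.05 and 90% power
Z_ALPHA = 1.959964   # Phi^{-1}(0.975)
Z_BETA = 1.281552    # Phi^{-1}(0.90)

def n_per_arm(sigma, delta, z_alpha=Z_ALPHA, z_beta=Z_BETA):
    """Standard formula: n per arm for comparing two means,
    n = 2 (z_alpha + z_beta)^2 sigma^2 / delta^2, rounded up."""
    return ceil(2.0 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Planned design: assumed sd 10, clinically relevant difference 5
planned = n_per_arm(sigma=10.0, delta=5.0)

# A blinded interim look suggests the pooled sd is nearer 12; since
# no treatment assignments are used, delta stays at its planned value
# and only the variability assumption is updated.
revised = n_per_arm(sigma=12.0, delta=5.0)

print(f"planned N per arm: {planned}, re-estimated N per arm: {revised}")
```

The blinded version adjusts only for the variance, which is why it is uncontroversial; unblinded re-estimation of the treatment effect is where the statistical and operational complications arise.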
There is a danger that the intense interest in adaptive designs may tempt trialists to use such designs without careful consideration. We want to remind our readers that the fundamental question to ask when selecting a design should always be "what is the most appropriate design to best address the question of importance in the most ethical, scientific, and efficient manner?" Careful planning is key to the success of a traditional design. It is even more so for adaptive trials. We have found that careful consideration of an adaptive design often has the spin-off effect of better scenario planning by the team members. Over the coming years, it will be a business imperative for the industry to industrialize the implementation of the most efficient designs. The latter may range from building in-house competence and enhancing scientific contacts with regulators, to increasing flexibility in budgeting systems and drug supply.
The design of clinical programs is a very serious business. Poor designs may not only waste money, but patients may also be harmed in trials and society might miss out on a great medicine. The industry could improve its ability to systematically utilize available information and expertise to choose the best design tailored for each drug project. As an example, the ADDAMS (Adaptive Designs, Decision Analysis, Modeling & Simulation) project within AstraZeneca outlines a structured series of steps to follow when designing a trial.
It is crucial to remember that no formal decision analysis can substitute good decision-makers. The analysis process is an aid to making decisions. Clear objectives and assumptions help team members communicate and work toward the common goals.
If we have more options when designing trials, and learn to carefully select from the available alternatives, we should be able to improve the development of new pharmaceutical products. The latter benefits patients, industry, and regulators. It is important that we proceed together in an atmosphere of collegial collaboration to generate and share experience of where adaptive designs may add value.17 Based on the collective experience so far, we believe that GSDs should be used more often and that the experiences from such trials could be useful when designing other types of adaptive designs. GSDs could on many occasions meet the same objective as designs with sample size re-estimation.18
Adaptive dose-finding, even though potentially challenging from the implementation perspective, holds much promise.13 We feel it would be best at present to restrict seamless Phase II/III designs to specific situations and use them only after close interactions between sponsors and regulators. Finally, Phase III trials that select one out of two active doses based on pre-specified interim analyses are an interesting option to meet the advice from regulators to take more than one dose into the confirmatory phase.
Christy Chuang-Stein, PhD, is Executive Director, Pfizer Inc., Carl-Fredrik Burman,* PhD, is Statistical Science Director, Technical & Scientific Development, AstraZeneca R&D, SE-431 83 Mölndal, Sweden, email: email@example.com
* To whom all correspondence should be addressed.
1. A. Grieve, "Adaptive Designs: A Fad or the Future of Clinical Research," PhRMA Workshop on Adaptive Designs: Opportunities, Challenges and Scope in Drug Development, www.innovation.org/index.cfm/NewsCenter/Briefings/Adaptive_Designs_Workshop (Washington, DC, November 2006).
2. P. Gallo, C. Chuang-Stein, V. Dragalin, B. Gaydos, M. Krams, and J. Pinheiro, "Adaptive designs in clinical drug development: an executive summary of the PhRMA working group," Journal of Biopharmaceutical Statistics, 16 (3) 275-283 (2006).
3. M. Beltangady, "Phase III failures: What can be done?" Applied Clinical Trials, September 2005, 82.
4. G.L. Knatterud, "University Group Diabetes Program (UGDP)," Encyclopedia of Biostatistics (John Wiley & Sons, New York, 2005).
5. "Organization, review, and administration of cooperative studies (Greenberg report): a report from the Heart Special Project Committee to the National Advisory Heart Council, May 1967," Controlled Clinical Trials, 9 (2) 137-148 (1988).
6. S.S. Ellenberg, T.R. Fleming, and D.L. DeMets, Data Monitoring Committees in Clinical Trials: A Practical Perspective (John Wiley & Sons, Chichester, 2002).
7. Food and Drug Administration, Guidance for clinical trials sponsors: Establishment and operation of clinical trial data monitoring committees, www.fda.gov/cber/gdlns/clintrialdmc.htm (FDA, Rockville, MD, 2006).
8. European Medicines Agency, Guidelines on data monitoring committees, (EMEA, London, 2005), http://www.emea.europa.eu/pdfs/human/ewp/587203en.pdf (accessed February 2008).
9. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators, "Preliminary report: effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction," New England Journal of Medicine, 321 (6) 406-412 (1989).
10. J. E. Manson, "Estrogen plus progestin and the risk of coronary heart disease," New England Journal of Medicine, 349 (6) 523-534 (2003).
11. S. Chevret, Statistical Methods for Dose-Finding Experiments, (John Wiley & Sons, Chichester, 2006).
12. M. Krams et al., "Acute stroke therapy by inhibition of neutrophils (ASTIN): an adaptive dose-response study of UK-279,276 in acute ischemic stroke," Stroke, 34 (11) 2543–2548 (2003).
13. B. Bornkamp et al., "Innovative approaches for designing and analyzing adaptive dose-ranging Trials," Journal of Biopharmaceutical Statistics, 17 (6) 965-995 (2007).
14. J. Maca, S. Bhattacharya, V. Dragalin, P. Gallo, and M. Krams, "Adaptive seamless phase II/III designs: background, operational aspects, and examples," Drug Information Journal, 40 (4) 463-473 (2006).
15. EMEA/EFPIA Workshop on Adaptive Designs in Confirmatory Clinical Trials, (London, December 2007).
16. C. Chuang-Stein, K. Anderson, P. Gallo, and S. Collins, "Sample size re-estimation: A review and recommendations," Drug Information Journal, 40 (4) 475-484 (2006).
17. G. Mills, "Eyes wide open," Applied Clinical Trials, September 2007, 16.
18. C. Jennison and B.W. Turnbull, "Adaptive and nonadaptive group sequential tests," Biometrika, 93 (1) 1-21 (2006).
19. F. Bretz, D. Lawrence, P. Thomas, EMEA/EFPIA workshop on Adaptive Designs in Confirmatory Clinical Trials, http://www.emea.europa.eu/pdfs/conferenceflyers/adaptive_designs/bretz.pdf (December 2007).