Top Three Misconceptions of Adaptive Trials

Article

Applied Clinical Trials, May 1, 2008

A clear grasp of the methodology behind the design is as important as the software that runs it.

Adaptive design, sometimes referred to as flexible design,1 is more than another new statistical methodology, tool, or formula in the hands of biostatisticians. It's a new technology that changes the entire clinical research process.

Midstudy adaptations and flexibility are new concepts not only for statisticians but also for trial managers, data scientists, and regulatory experts. Adaptive designs challenge medical advisory groups, data monitoring committees (DMCs), drug safety boards, ethics committees, and regulatory agencies. These designs have an impact on drug development strategies, trial protocols, informed consent forms, trial amendments, data analysis, and reporting plans. Adaptive design will also change the study drug supply chain; the patient enrollment, randomization, and data capturing process; monitoring; and data cleaning systems. In other words, adaptive designs represent a totally new technology in the field of clinical research and development.

As with every new technology, its far-reaching consequences tend to be ignored or underestimated in the beginning. In fact, when it comes to adaptive design, there is ongoing debate about statistical optimality, practical feasibility,2 and regulatory acceptance.3

So, what are the major benefits? Above all, the ethical benefits of adaptive designs are the ones that count the most. The new technology will provide enormous advantages to individuals who participate in clinical trials by reducing their exposure to ineffective or unsafe treatments. Society benefits because this new technology helps bring better drugs to market faster. The bottom line is that adaptive designs will prevail primarily because they provide clear ethical benefits, not just because they offer the potential for cost savings.

Common misconceptions

A prerequisite for successful implementation of adaptive designs is an understanding of the underlying methodology. What follows are three of the most common misconceptions about adaptive designs.

Misconception one: There are certain areas of confirmatory clinical research where adaptive designs are more applicable and other areas where they are less applicable or not applicable at all. Virtually every clinical trial can fail. There are two main reasons for a negative trial: The compound tested did not work, or the outcome of the trial was inconclusive. The latter has to do with statistics—and with ethics.

Traditional statistical designs are based on assumptions about the prospective treatment effect. Should these assumptions be imprecise (and they normally are), the trial's power to provide the desired proof of evidence is reduced and, consequently, patients might be needlessly exposed to a harmful treatment. The ethical aspect here is that the trial could have been prevented from failing had the statistical design accounted for the inherent uncertainty of the planning assumptions. This is where adaptive designs come into play, and it is also the reason why ethical research programs should always comprise an element of adaptation.

To the extent that virtually every confirmatory trial is based on uncertain effect assumptions, every trial design should comprise an element of adaptation—at least a check of the planning assumptions so that the data already collected are not wasted. Accordingly, adaptive designs are applicable in all kinds of confirmatory research. Once the technical complexities of implementing an adaptive trial are resolved, their disadvantages (i.e., increased workload and more sophisticated statistics) will be outweighed by their advantage: improving the success rate of trials.

Misconception two: Adaptive trial designs are characterized by unmanageable complexity and less careful planning. There is an apprehension—not completely unjustified—that a normal adaptive trial might present itself as a seamless, dose-finding, effect-confirming, combined Bayesian/group-sequential study that allows for treatment arm selection, response-adaptive patient allocation, sample size recalculation, and an adaptive choice of endpoints and hypotheses. Do not be put off by these high expectations, heated debates, or the hype surrounding adaptive designs. The standard adaptive trial is a Phase II, III, or IV trial with a small number of preplanned interim analyses that focus on either moderate sample size adjustments to rescue a potentially underpowered study or smarter treatment arm selection.

Let's say that today's adaptive designs save 10% of all clinical trials that otherwise would have failed. This represents a great deal of cost and time savings, as well as more efficient resource utilization. As the methodology matures, the industry will probably see more complex designs in the future, but only at the pace that the systems in clinical operations improve and adapt to the new technology. However, and most notably, the ideal adaptive trial is a trial without any adaptations and with as few interim analyses as possible.

Adaptive designs are characterized by thorough preplanning and appropriate use of statistical methods to optimally address the research question. They are also characterized by careful consideration of the principles of good statistical practice in order to maintain the trial's validity and integrity, as well as by exact type I error control. For example, take a look at an adaptive trial protocol—it is quite sizeable. It comprises the effect size assumptions; simulations; justification of stopping rules; number and timing of interim looks; extensive elaboration on type I error control and statistical bias control; description, motivation, and impact of potential design adaptations; and justification of methods to preserve the blind.

Above all, adaptive trials cannot be conducted without continuous statistical monitoring. Usually there are quite a few statisticians involved: the blinded project statistician, the unblinded statistician, and an independent external statistician. In summary, one can say that adaptive designs require even more preplanning. They are definitely not a "remedy for poor planning."2 By nature, adaptive designs are subject to more intensive external control. In addition, adaptive designs result in an increased demand for qualified statisticians.

Misconception three: Adaptive designs require smaller sample sizes than traditional designs. No, usually they do not. The reason is that human beings lean toward wishful thinking: on average, drug effects are overestimated and the variability of drug effects is underestimated. Until now, trials have been either unknowingly underpowered or intentionally overpowered. In the latter case, an adaptive design is more or less unnecessary (aside from the questionable ethics of overpowered trials). The unknowingly underpowered trial, however, is where adaptive designs come into play. By design, they try to maintain the power at a prespecified level; hence, on average they result in a larger sample size compared to the traditional design.
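To make this concrete, here is a minimal sketch of an interim sample size re-estimation, assuming a two-arm trial with a normally distributed endpoint and a one-sided test. The helper name and all numbers are illustrative assumptions, not values from any actual trial.

```python
# A hedged sketch of interim sample size re-estimation (illustrative only).
# Assumes a two-arm trial, normal outcomes, and a one-sided test.
import math
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.025, power=0.90):
    """Per-arm sample size: n = 2 * sigma^2 * (z_alpha + z_beta)^2 / delta^2."""
    z_alpha, z_beta = norm.ppf(1 - alpha), norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Optimistic planning assumptions: effect 0.5, standard deviation 1.0.
print(n_per_arm(delta=0.5, sigma=1.0))    # 85 patients per arm
# Interim estimates: smaller effect, larger variability -> the trial must grow.
print(n_per_arm(delta=0.35, sigma=1.2))   # 248 patients per arm
```

The re-estimated size is larger precisely because the optimistic planning assumptions did not hold—the very scenario adaptive designs are built to catch.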

However, in the long run, this effect is outweighed by another effect of the adaptive design: If a potentially underpowered trial is rescued, the entire sample is not lost; it contributes to confirmatory evidence and does not need to be collected again. This is a patient-sparing effect. Overall, adaptive designs make better use of the patient as a resource. Trials no longer need to be overpowered, and the number of underpowered trials is reduced. So, from a different perspective, is it true that adaptive designs require smaller sample sizes than traditional trial designs? Yes, in the long run they do.

Understanding the mechanics

A profound understanding of the statistical methodology is the cornerstone of a successful implementation process. At first sight, it sounds so simple. Instead of combining patient data, adaptivists combine p-values from separate stages (i.e., interim analyses). Astonishingly, however, methodological expertise has not spread very far since the idea was first published.4 The group of statisticians around the globe who publish most of the statistical work is still relatively small, and professional training of industry and CRO statisticians is still in its infancy.
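As a minimal sketch of that combination idea, consider Fisher's product test for two stages, assuming independent, uniformly distributed stage-wise p-values under the null hypothesis; the numbers below are illustrative.

```python
# Fisher's combination test for a two-stage design (illustrative sketch).
# Under H0, with independent uniform stage-wise p-values, the quantity
# -2 * ln(p1 * p2) follows a chi-square distribution with 4 degrees of
# freedom, so the test remains valid even if stage two was redesigned
# at the interim analysis.
import math
from scipy.stats import chi2

def fisher_combination_rejects(p1, p2, alpha=0.025):
    statistic = -2.0 * (math.log(p1) + math.log(p2))
    return statistic >= chi2.ppf(1 - alpha, df=4)

print(fisher_combination_rejects(0.10, 0.03))  # True: combined evidence suffices
```

The inverse normal method, which combines stage-wise z-statistics with prespecified weights, is a common alternative built on the same principle.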

One of the intriguing features of adaptive trials is that they offer a countless number of design options. For example, adaptive group-sequential designs involve a predetermined number of interim analyses, whereas in recursive combination test designs, the number and timing of interim analyses are usually not preplanned—just like other design features (e.g., effect assumptions, stopping rules). The differences between these designs do have an impact on the overall probability of success. They also have an impact on the logistics of the trial, such as data management and monitoring procedures, EDC/IVRS, drug supply management, and DMC operating procedures. The different "behaviors" of the designs need to be well understood and thought through. What this really means is that the design determines the practical performance of the trial.
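A brief simulation makes the "behavior" point concrete. The toy sketch below, with made-up stage sizes and an arbitrary adaptation rule rather than any recommended design, shows why the analysis must match the design: naively pooling the data after a data-driven sample size change inflates the type I error, while a prespecified inverse normal combination preserves it.

```python
# Toy Monte Carlo under H0: the second-stage size depends on interim data.
# All numbers (stage sizes, adaptation rule) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n1, reps = 50, 200_000
z_crit = 1.959964                       # one-sided 2.5% critical value

naive = combined = 0
for _ in range(reps):
    x1 = rng.standard_normal(n1)        # stage 1 data (true mean 0)
    z1 = x1.sum() / np.sqrt(n1)
    n2 = 50 if z1 > 1.0 else 150        # enlarge stage 2 if interim looks weak
    x2 = rng.standard_normal(n2)
    z2 = x2.sum() / np.sqrt(n2)
    # Naive analysis: pool all data as if the total n were fixed in advance.
    naive += (x1.sum() + x2.sum()) / np.sqrt(n1 + n2) > z_crit
    # Inverse normal combination with prespecified equal weights.
    combined += (z1 + z2) / np.sqrt(2) > z_crit

print(naive / reps)      # ~0.031: inflated above the nominal 0.025
print(combined / reps)   # ~0.025: type I error preserved
```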

Beyond the computerized systems needed for the implementation of adaptive designs, the importance of statistical software cannot be overemphasized. For regulators, it must be a nightmare to check statistical results from a poorly described adaptive trial to find out whether the p-values and confidence intervals provided by the applicant are correct or not. Similarly, it is a nightmare for the applicant should the statistical conclusions not be accepted by the agency—and the burden of proof always falls on the applicant.

The availability of statistically sound, validated software that mimics the adaptive course of the trial while preserving the type I error rate is crucial, as adaptive designs are far more complex than traditional, nonflexible designs. A proper software package spans a wide range of designs from clinical trial Phases I to IV and integrates tools for:

  • Trial design

  • Trial simulation

  • Interactive analysis and adaptation.

With adaptive designs, the era of handmade statistics is over. With traditional designs, an experienced statistician was able to do most calculations (e.g., planned sample size) without a software package. By contrast, this is no longer feasible with adaptive designs. Also, the analysis, previously done almost entirely with SAS, requires a new type of software that considers the various multiplicity issues and provides bias correction tools. For instance, a p-value is no longer a simple p-value in the traditional sense, and the calculation of confidence intervals becomes more sophisticated. Moreover, the adaptive software steers the process of adapting the trial and correctly addresses the impact of an adaptation on the statistical inference.
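As a small numeric illustration, with all values hypothetical, the same stage-wise results lead to different final p-values depending on whether the prespecified combination rule or a naive pooled analysis is applied:

```python
# Hypothetical stage results; the second-stage size was chosen at the interim,
# so the naive pooled p-value is no longer valid, while the combination
# p-value correctly accounts for the adaptation.
import math
from scipy.stats import norm

n1, n2 = 100, 260            # stage sizes; n2 was picked after the interim look
z1, z2 = 1.1, 1.8            # stage-wise z-statistics

# Naive pooled statistic, pretending the total n was fixed in advance.
z_pooled = (math.sqrt(n1) * z1 + math.sqrt(n2) * z2) / math.sqrt(n1 + n2)
# Inverse normal combination with prespecified equal weights.
z_adaptive = (z1 + z2) / math.sqrt(2)

print(norm.sf(z_pooled))    # ~0.017: looks stronger, but is not valid here
print(norm.sf(z_adaptive))  # ~0.020: the valid adaptive p-value
```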

As trials become adaptive, their statistics become dynamic. Formerly, p-values were calculated on static data at the end of the trial. An adaptive p-value considers the dynamic character of the data: when and how often the data have been unblinded, at what level of inference, and based on what kinds of adaptations. Adaptivists should consider three basic rules:

  • Use only validated software.

  • Use only designs that can be simulated and analyzed with your software.

  • Use only inferential statistics that were calculated with your software.

Are you prepared?

All major pharmaceutical companies are preparing themselves for big changes in the clinical research and development environment. Their ultimate goal is to reduce failure rates, and adaptive designs provide the means to achieve it. But the industry's adoption of adaptive designs poses the real challenge—a technological challenge. The technological hurdles cannot be overcome without the help of clinical technology providers that can facilitate access to trial data and provide the systems to manage adaptive trial data appropriately. These include:

  • Electronic systems that give real-time access to clinical data, manage resources, and allocate patients in a way that fulfills the requirements of adaptive designs

  • Software that generates accurate, valid statistical inference and steers the process of design, simulation, and adaptation.

Therefore, technology is likely to play the most important role in addressing the issues that the new methodology has raised. And technology is likely to change the traditional role of pharmaceutical service providers as well. Services companies are about to build new service offerings for a changing clinical trials landscape. The focus will shift from ground force to technology, and the industry will likely see more strategic partnerships between sponsors and service companies.

Reinhard Eisebitt is managing director and Joachim Vollmar is vice president, North America, at UBC-ClinResearch. Michael Borkowski* is general manager, clinical technologies, for United BioSource Corporation, 55 Francisco Street, Suite 780, San Francisco, CA, 94133, email: michael.borkowski@unitedbiosource.com

*To whom all correspondence should be addressed.

References

1. J.A. Quinlan and M. Krams, "Implementing Adaptive Designs: Logistical and Operational Considerations," Drug Information Journal, 40, 437-444 (2006).

2. R.T. O'Neill, "FDA's Critical Path Initiative: A Perspective on Contributions of Biostatistics," Biometrical Journal, 48, 559-564 (2006).

3. P. Bauer, "Multistage Testing with Adaptive Designs," Biometrie und Informatik in Medizin und Biologie, 20, 130-148 (1989).

4. V. Dragalin, "Adaptive Designs: Terminology and Classification," Drug Information Journal, 40, 425-435 (2006).
