An Assessment of Adaptive Trial Design


Applied Clinical Trials, September 1, 2009

Understanding and using adaptive trial design to gain the most from its advantages.

To ensure a common understanding, it is worth establishing what is meant by "adaptive trial." A working definition is as follows: an adaptive trial is one that uses information obtained within the study to modify, or potentially modify, the structure or another design feature of the study.


From the perspective of statistical analysis, an adaptive design can affect alpha (requiring some adjustment for multiplicity) or can preserve alpha without adjustment, where alpha is the probability of a type I error, which most commonly is the probability of declaring that a drug is effective when it is not. In early phase development studies, where alpha plays no important role, adjustment to alpha is not essential.

Note that when a study includes a provision to make a modification (e.g., an interim analysis with a reassessment of the sample size), the study is considered adaptive whether or not a change is ultimately made. Likewise, when a study does not include such a provision but a modification is made through the amendment process (e.g., the sample size is increased because the planned size will not support the study's goals), the study is also considered adaptive. Whenever information within the study is used to make, or potentially make, a modification, the study is an adaptive study.
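The interim sample-size reassessment mentioned above can be illustrated with a minimal sketch. It uses the standard normal-approximation sample-size formula for a two-arm comparison of means; the numbers (a planning SD of 10, an interim SD of 13, a detectable difference of 5) are purely hypothetical and not drawn from the article.

```python
import math

def n_per_arm(sigma, delta, z_alpha=1.959964, z_beta=0.841621):
    """Per-arm sample size for a two-arm comparison of means:
    n = 2 * ((z_alpha + z_beta) * sigma / delta)**2, rounded up.
    Defaults correspond to two-sided alpha = 0.05 and 80% power."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Planning stage: assume a standard deviation of 10 to detect a difference of 5.
planned = n_per_arm(sigma=10.0, delta=5.0)

# Interim look: the observed standard deviation is larger than assumed,
# so the sample size is recomputed, a provision written into the protocol.
interim = n_per_arm(sigma=13.0, delta=5.0)

print(f"planned n per arm: {planned}, reassessed n per arm: {interim}")
```

Because the provision to recompute is specified in advance, the study is adaptive whether or not the reassessed size actually differs from the plan.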

Apples and oranges

By contrast, a traditional clinical trial is, ideally, developed and executed with no modifications to the protocol. Amendments are tolerated, but when there are too many, a stigma attaches to the study. Introducing flexibility into a clinical study runs contrary to the well-planned, well-executed nature of the traditional study.

One driver of the traditional plan-your-work, work-your-plan approach to research is protecting alpha: the probability of a type I error, which, in the context of a placebo-controlled superiority study, is the probability of concluding that a drug works when it does not. The rigid nature of the traditional trial protects the efficacy analysis from inflated alpha.
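The alpha inflation that the rigid design guards against can be seen in a short simulation. This is a hypothetical sketch, not from the article: a trial with a truly null effect is tested at an unadjusted two-sided 0.05 level once at an interim look and again at the end, and the overall false-positive rate is estimated.

```python
import random
import statistics

random.seed(42)

def z_test_rejects(sample, z_crit=1.96):
    """Two-sided z-test of mean == 0, assuming known unit variance."""
    n = len(sample)
    z = statistics.fmean(sample) * n ** 0.5
    return abs(z) > z_crit

n_trials = 20000
n_final = 100          # subjects per simulated trial
n_interim = 50         # interim look at half the data
false_positives = 0

for _ in range(n_trials):
    data = [random.gauss(0.0, 1.0) for _ in range(n_final)]  # null is true
    # Declare success if EITHER the unadjusted interim test or the
    # unadjusted final test is significant.
    if z_test_rejects(data[:n_interim]) or z_test_rejects(data):
        false_positives += 1

inflated_alpha = false_positives / n_trials
print(f"empirical type I error: {inflated_alpha:.3f}")
```

The estimated error rate lands noticeably above the nominal 0.05, which is why an adaptive design with interim efficacy looks must spend alpha across looks (e.g., via a group-sequential adjustment) rather than test at the full level each time.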

Another driver of the traditional approach, planning the study and executing it without modification, is protection against sources of bias. Knowledge of interim results can influence the conduct of the study and thereby the accruing results, damaging the study's integrity. A traditional study often mitigates this kind of bias by assigning reviews of unblinded data or accumulating results to an independent Data Monitoring Committee.

This source of bias is an issue for an adaptive study because the protocol includes provisions to alter the study structure or other design feature. The risk of this bias should be evaluated in the context of the overall goal of the study and its role in the drug development plan; if this bias is not important in context, then the risk is low. If the study is intended to be used as a source of pivotal evidence of efficacy, this bias would be important.

Regulatory considerations

Although adaptive trials are not new, there is a new enthusiasm for the flexible methodology, and the recent popularity is in part related to encouragement from the regulatory arena for creative pathways to shorten timelines to bring new therapies to market. Regulatory agencies have communicated that, under certain conditions, flexibility in a trial can be acceptable.1,2,3

EMEA issued a document, "Reflection Paper on Methodological Issues in Confirmatory Clinical Trials Planned with an Adaptive Design"4 (see Table 1), and FDA has a published PDUFA goal of releasing a draft guidance in the near term. It should be emphasized that encouragement from regulatory agencies to make use of adaptive trials is accompanied by caution against inappropriate use.

Table 1. Synopsis of the EMEA views on adaptive design based on a document issued in 2007.

Regulatory risk depends on the regulatory role of the study. If the study is not used as a basis of approval, regulatory risk is minimal. There must be adequate basis for dose selection, but demonstrating statistically significant differences between the selected dose and a de-selected dose is not required.

If the alpha associated with the analysis of differences between doses is inflated, it is of little or no concern to regulatory agencies as long as the evaluation of efficacy does not rely on that study for key support. Confirmatory studies, however, must be uncompromised. When a study serves as a basis of approval, the issue comes down to weight of evidence: if remarkable efficacy is found, some compromise can be tolerated; if the study conclusion is less robust, compromise becomes more of an issue.

Scientific risks

The regulatory risks are scientific risks as well. There are, however, other scientific risks that would not particularly concern regulatory authorities. If the study does not have an important regulatory role, the regulatory agency would not be concerned with a compromise; the sponsor alone bears the scientific risk.

In an adaptive study, typically in early development, when data are reviewed as they accrue, it is tempting for sponsors to make decisions based on small amounts of data. While a small amount of data is better than none, there is still a risk of over-interpreting trends (favorable or unfavorable) and making wrong decisions. Including statistical assessment as a routine part of the clinical and medical evaluation helps mitigate this scientific risk.

When a later stage study is planned as adaptive, scientific compromises are introduced when personnel involved in the ongoing study receive information about its accumulating results.

In any study with compromise, whether traditional or adaptive, the evidence of efficacy must be sufficiently substantial to overcome obstacles of potential bias or inflated alpha.


Benefits of an adaptive design are clear: potentially shorter timelines, fewer subjects exposed to de-selected doses or treatments, fewer subjects overall, and lower cost. An adaptive design permits a study to adjust to the accumulating body of knowledge and data, guiding continued clinical research. The most common types of adaptive studies include modifications of the types listed in Table 2.

Table 2. A description of the types of adaptive design modifications and adjustments to alpha needed.

An adaptive trial that includes more than one study under the same protocol permits a small (or pilot) study to be done followed immediately by another study without the delays and resource investment of study start-up. Pilot studies are useful in situations where data are not available or are too limited to support the required decision making.

It should be noted that an adaptive design is not a remedy for poor planning. In the absence of good data to plan a study, an adaptive study can include doing a pilot study within the larger study. A scenario could be that an initial dose is selected for evaluation, data are collected, analyzed, evaluated, and the next step in the program is made on the basis of real data as opposed to relying solely on animal data and guesswork.

In the context of an adaptive study, the pilot study is just the first segment of the study, and the following segment, which is actually a new study, is administratively conducted within the same protocol—same IRB, same sites, same visit schedule, same administrative staff—and enormous efficiencies are gained.

An appropriate risk assessment of whether to permit flexibility in a study requires the context of the study's role and the role of alpha. In a Phase I study, alpha is generally not very important.

If resources afford only three treatment arms in a study, but the appropriate dose is known to be one of five candidates, an adaptive design can offer a solution. Risk increases if the wrong doses are evaluated in a study.

Generally, in early phase drug development adaptive studies carry little risk. The regulatory risk is low in early phase development because these studies are not used as a basis of pivotal evidence of efficacy. Scientific risk can be mitigated by adhering to sound scientific principles.

In late phase drug development, adaptive studies can be extremely beneficial or risky. Phase II/III studies can avoid "enrollment holidays." The scientific principles, which are also discussed by regulatory agencies, must be protected; compromises to scientific principles carry risk. In late-phase studies, regulatory acceptability is a key consideration. Communication with the appropriate regulatory agencies is strongly advised to ensure acceptability of the study by the agencies.

Appropriate and complete risk assessment balances the potential benefits of an adaptive study against the risks associated with its adaptive features. All studies carry some risk, however, and the risks of a traditional study must also be considered when deciding whether the planned study should be traditional or adaptive.

If the assumptions and available data underlying the design of the traditional study are tenuous, then those risks may be greater for the program than those associated with the study being adaptive. In planning the study, inserting safeguards to protect the integrity of portions of the study that may be suitable for registration can allow the sponsor to have the benefits of an adaptive study while simultaneously protecting the integrity of the study intended to provide pivotal evidence of efficacy.

These safeguards relate to shaping the protocol to include an operational adaptive plan that includes a pivotal study within the adaptive study. The pivotal portion would have a separate randomization, and the statistical analysis plan would treat the data from the pivotal portion of the study as if it is a stand-alone study.

Another risk that warrants a notation in the planning of an adaptive study relates to the operational infrastructure. When a decision is made to do an adaptive study, the operational aspects of execution of the adaptive study need to be able to support the decision. The standard operating procedures (SOPs), templates for key study documents, and personnel training need to include the flexibility. Additional documents, or additional major sections in existing documents, may be necessary to ensure seamless operations.

For example, when a study arm is discontinued, reclaiming the clinical trial material intended for use by that study arm needs a process. If the organization makes a decision to execute an adaptive study but does not have the operational details addressed, there can be risk to the study.


In early phase development, adaptive trials offer beneficial flexibility with little or no regulatory risk. In confirmatory trials used as a basis of pivotal evidence of efficacy, modifications of an ongoing study can compromise study integrity and should be used with caution. In an adaptive study that includes pivotal evidence of efficacy, a randomization should mark the start of the pivotal portion; no modification or compromise should occur after the first subject has been randomized into that portion, and that portion should be able to stand alone.

Imogene Grimes, PhD, is the vice president of data sciences strategic services, clinical research services, and Barbara Tardiff,* MD, MS, MPhil, MBA, is corporate vice president of data sciences, clinical research services, at PAREXEL International, 200 West Street, Waltham, MA 02451.

*To whom all correspondence should be addressed.


1. S. Gottlieb, Speech at Conference on Adaptive Trial Design, Washington, DC, (2006).

2. Hung et al., "Adaptive Statistical Inference Following Sample Size Modification Based on Interim Review of Effect Size," presented at ASA Joint Meetings (2002).

3. J. Woodcock, "FDA's Critical Path Initiative —Progress to Date and Direction," presented at the Science Board to the FDA, Gaithersburg Hilton, Washington, DC, December 3, 2007.

4. EMEA, "Reflection Paper on Methodological Issues in Confirmatory Clinical Trials Planned with an Adaptive Design," October 18, 2007.
