The Adaptive Advantage

Article

Applied Clinical Trials

June 1, 2008

With help from technology, adaptive trials can enhance dose selection and reduce time between phases.

The arguments for increased adoption of adaptive clinical trial techniques are compelling, particularly in the role they can play in cutting development costs and reducing time to market. Cutting Phase III development by just one month can save around $6 million based on average industry costs,1 and for a blockbuster product could increase revenues by as much as $90 million. Put another way, $3 million is at risk for every extra day a blockbuster spends in development.


At ClinPhone, we see a third strong argument emerging: the need to arrive at the optimal dose as quickly as possible. Taking the wrong dose into Phase III can result in having to repeat studies using different doses, or even in wrongly terminating development, with either course carrying enormous cost and time penalties.

There is also a risk of bringing drugs to market at incorrect doses. In fact, it is estimated that one in five marketed drugs is launched with a flawed dosage.2 Most are set too high, and in today's environment of more efficient postmarketing surveillance it is not surprising that dose reductions are on the rise. Quite apart from the ethical considerations of marketing a suboptimal dose, reducing the dose postlaunch can have major implications for pricing and reimbursement.

Adaptive trial design promises to provide significant solutions to all three requirements: a means of accelerating drug development, cutting costs, and arriving at an optimal dose.

Flexible concepts

Historically, clinical trials comprised three steps: planning, execution, and analysis. Very little could be done in response to interim analysis, and it was not uncommon to have to design a whole new study in response to trial findings. More recent industry thinking, however, suggests that more flexible adaptive regimes may safely allow design modifications based on interim analyses.

The concept has been defined by the PhRMA Working Group on Adaptive Trial Designs as "a multistage study that uses accumulating data to decide how to modify aspects of the study without undermining the validity and integrity of the trial."3 Recent European guidance4 is supportive of this approach. And in a 2006 speech, Dr. Scott Gottlieb, former deputy commissioner for medical and scientific affairs at FDA, said, "It's essential that we at the FDA do all we can to facilitate their [adaptive trials] appropriate use in modern drug development."5 FDA guidance is in preparation and eagerly awaited.

Early work included two-stage trials and sequential alterations to the sample size. More recently, the concept has expanded and possible adaptive designs now include dropping or adding treatment arms (or doses) and changes to the treatment allocation ratio or treatment assignment probabilities.

There are two broad approaches to adaptive trials, and it is useful to look at these with reference to their application to a Phase II dose finding study.

The first approach uses occasional discrete interim analyses in conjunction with decision rules defined at the outset in the study protocol to implement modifications midstudy (see Figure 1). Interim analysis milestones are commonly defined at key points in study progress (e.g., once endpoint data have been collected for a predefined proportion of the overall sample size). Statistical analyses performed by an unblinded statistician and presented to a data monitoring committee form the basis of decision making at each interim cut of the data. This may include, as mentioned, dropping treatment arms or changing the treatment allocation ratio for future allocations.

Figure 1. Treatment arms can be terminated at scheduled interim analyses by applying decision rules defined in the study protocol.
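To make the first approach concrete, the sketch below shows how a protocol-defined decision rule might be applied at an interim analysis. It is a minimal, hypothetical illustration: it assumes a higher mean response is better and uses an invented margin over placebo as the retention criterion; a real study would use the statistical rule specified in the protocol and reviewed by the data monitoring committee.

```python
# A minimal sketch of a protocol-defined interim decision rule (illustrative only).
# Assumptions: higher mean response is better; an arm is retained only if its
# observed advantage over placebo meets a prespecified (hypothetical) margin.

from statistics import mean

def interim_decision(responses, placebo_arm="placebo", margin=0.5):
    """Return the treatment arms to retain after an interim analysis.

    responses: dict mapping arm name -> list of observed endpoint values.
    margin: minimum observed advantage over placebo needed to keep an arm.
    """
    placebo_mean = mean(responses[placebo_arm])
    retained = [placebo_arm]
    for arm, values in responses.items():
        if arm == placebo_arm:
            continue
        if mean(values) - placebo_mean >= margin:
            retained.append(arm)   # promising effect: keep the arm
        # otherwise the arm is dropped for future randomizations
    return retained

# Example interim cut of the data (hypothetical values)
data = {
    "placebo":   [1.0, 0.8, 1.2, 0.9],
    "low dose":  [1.1, 1.0, 1.3, 1.2],   # small advantage: dropped
    "high dose": [1.9, 2.1, 1.7, 2.0],   # clear advantage: retained
}
print(interim_decision(data))  # ['placebo', 'high dose']
```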

The second approach, "response adaptive," does not use fixed interim analyses but bases future randomizations upon the current responses of subjects observed, sometimes changing the randomization rules on a patient-by-patient basis. These designs utilize Bayesian statistics and incorporate a model of the dose–response relationship to strengthen the estimation of treatment effects associated with each specific dose level. Although the theory is not new, this methodology has only recently been adopted, as more powerful computers have become available. In such a trial, a black-box algorithm containing the Bayesian model is fed the current response data and returns the future treatment allocation rules to favor the optimal dose and limit exposure to suboptimal doses. The relative merits of both approaches for dose finding studies are discussed in Bornkamp et al.6
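The sketch below illustrates the response-adaptive idea in miniature. It uses a simple Beta-Binomial posterior for each dose rather than the full dose-response model used in practice, and the dose labels and data are hypothetical; the point is only to show how accumulating responses can be turned into updated treatment assignment probabilities that favor the better-performing doses.

```python
# A hypothetical, simplified sketch of response-adaptive allocation: each dose
# gets a Beta(1 + successes, 1 + failures) posterior, and the next assignment
# probabilities are set proportional to each dose's chance of being best.

import random

def assignment_probabilities(outcomes, draws=5000, seed=42):
    """outcomes: dict mapping dose -> (successes, failures) observed so far.
    Returns dict mapping dose -> probability of being allocated next."""
    rng = random.Random(seed)
    wins = {d: 0 for d in outcomes}
    for _ in range(draws):
        # one posterior draw of the response rate for each dose
        sample = {d: rng.betavariate(1 + s, 1 + f) for d, (s, f) in outcomes.items()}
        wins[max(sample, key=sample.get)] += 1
    return {d: wins[d] / draws for d in outcomes}

# Hypothetical accumulating data: (responders, non-responders) per dose
observed = {"10 mg": (3, 9), "20 mg": (6, 6), "40 mg": (9, 3)}
print(assignment_probabilities(observed))
# e.g. roughly {'10 mg': 0.01, '20 mg': 0.10, '40 mg': 0.89} -- the 40 mg arm is favored
```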

Real-time access

Adaptive designs rely on accumulating subject response and safety data as the basis for changes to the randomization algorithm. This clearly presupposes real-time access to accumulating study data.

Early examples of adaptive trials utilized paper-based case report forms (CRFs) transmitted by fax and read by Optical Character Recognition (OCR) technology to provide fast access to key endpoint data. This approach was used, for example, in Pfizer's ASTIN trial.7 There are, however, many practical issues associated with getting the data from scheduled monitoring visits to a central management area, checking the data before analysis, and making necessary changes. More realistic approaches involve electronic capture of clinical response data at the site and transmission to a central analysis database. Electronic Data Capture (EDC) systems enable clinical assessment data normally collected on paper CRFs to be collected via Web forms (eCRFs) and transmitted over the Internet.

The use of EDC in clinical trials has become widespread because of its clear advantages: data checks against predefined rules, data clarification at the time of entry, data cleaning throughout the study, and all information held in one place from the outset. Improved data quality and speed of data capture are compelling advantages over paper-based solutions. The ability to capture accumulating data in real time is a further argument for the adoption of EDC within adaptive studies.

Thought does need to be given to the configuration and implementation of an EDC solution when each visit involves a large volume of forms but only a few key response data points. To ensure fast access to these vital data, sites should be able to navigate to and enter the critical response data required by the adaptive trial algorithms without having to complete the other data associated with a specific visit. Critical response data, for example, can be contained in separate CRFs that can be navigated to independently of the rest of the patient data. Reports and automated reminders may help ensure the rapid entry of these data.

In some studies, the key response endpoint is patient reported. Traditionally these data were recorded in paper diaries and collected at scheduled clinic visits. Because data quality and speed of access are major concerns in an adaptive trial, electronic patient-reported outcome (ePRO) solutions are strongly recommended, as these incorporate data quality checks and provide real-time access to data outside clinic visits. Solutions include eDiaries, IVR, and personal digital assistants. The recent adaptive study in neuropathic pain conducted by Pfizer, for example, used an IVR system to provide real-time access to patient-reported response data.8

Where EDC or an ePRO solution is not used, speed of access to response data can be achieved simply by utilizing the Interactive Voice Response (IVR)/Interactive Web Response (IWR) system in place for randomization and trial supply management. Response data and some degree of safety data could be captured using additional questions in the subject withdrawal or completion call. Simplified IWR Web pages, designed to collect subject response data, may provide a practical means of capturing more complex data quickly.

Getting it right

How can randomization be performed effectively where treatment arms are dropped or allocation ratios adjusted? Such new levels of flexibility and real-time decision-making raise important logistical issues in terms of randomization.

Site-administered randomization effectively becomes impossible once changes to randomization take place in-flight. Consider the administrative practicalities of keeping sites fully informed whenever a treatment arm is dropped or the study moves to a different allocation ratio. The process becomes error-prone as well as wasteful of drug supplies, and European regulators have indicated that site knowledge of design adaptations is undesirable. In an adaptive environment, central randomization, typically supported by IVR/IWR technology, effectively becomes a must-have.

So, what options are open to us? How can we hope to deliver centralized randomization in a nondisruptive way? Essentially we have two options: the use of pre-prepared code lists or using a random number generator to assign treatments dynamically in accordance with current allocation probabilities. In a trial with limited possibilities for different allocation ratios, it may be possible to create code lists up front, simply switching between them as changes occur (see Figure 2).

Figure 2. Defining code lists in advance may be useful when the treatment assignment has little fluctuation and enables switching between lists in real time.

In this example, there are two active dose levels (A and B) and a comparator or placebo (C). Initially, subjects are randomized in equal proportions to each treatment group. Based upon the results of an interim analysis, one active dose level will be dropped and the allocation ratio changed to 3:2 active:comparator/placebo. This switch occurs in real time, following a single database change.
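A minimal sketch of this list-switching idea follows the A/B/C example above. It is hypothetical throughout (block patterns, seeds, and list names are invented), and the "single database change" is represented simply by updating which list is active; a validated IVR/IWR system would of course manage this with full audit trails.

```python
# Hypothetical sketch of switching between pre-prepared randomization code lists:
# stage 1 allocates A:B:C in equal proportions; after the interim analysis one
# active dose is dropped and a 3:2 active:comparator list takes over.

import random

def make_block_list(pattern, n_blocks, seed):
    """Build a permuted-block code list, e.g. pattern 'AAACC' repeated n_blocks times."""
    rng = random.Random(seed)
    codes = []
    for _ in range(n_blocks):
        block = list(pattern)
        rng.shuffle(block)
        codes.extend(block)
    return codes

# Pre-prepared lists (hypothetical block sizes and seeds)
code_lists = {
    "stage1_1:1:1":        make_block_list("ABC", 40, seed=1),
    "stage2_drop_A_3:2":   make_block_list("BBBCC", 40, seed=2),
    "stage2_drop_B_3:2":   make_block_list("AAACC", 40, seed=3),
}

active_list = "stage1_1:1:1"   # the single value changed at the interim analysis
next_position = 0

def randomize_next_subject():
    """Assign the next code from whichever list is currently active."""
    global next_position
    treatment = code_lists[active_list][next_position]
    next_position += 1
    return treatment

print([randomize_next_subject() for _ in range(6)])        # stage 1 allocations
active_list, next_position = "stage2_drop_B_3:2", 0        # interim decision: drop dose B
print([randomize_next_subject() for _ in range(5)])        # 3:2 A:C allocations thereafter
```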

There will be occasions when it is impractical to define lists in advance. Nevertheless, it may still be possible to create blocks of randomization codes as the study progresses. Questions then arise: Will sponsors require a QC check of each new code list, or will the validated computer program be sufficient to facilitate real-time switching to newly created lists? Should blocks be created manually, or can the system do it automatically? At ClinPhone, our advice is to automate this process as much as possible, thereby creating minimal disruption to the ongoing study.

Where treatment assignment is constantly changing, it may not be practical to use code lists. For example, in a response adaptive study the treatment assignment probabilities may change on a regular basis. This would be impossible to accommodate with predefined code lists as there are endless possibilities. In this situation, random numbers can be used to determine the next treatment allocation in accordance with the assignment probabilities, which is known as dynamic allocation (see Figure 3). Assignment probabilities can be updated immediately prior to the randomization of each new subject to account for any new response information received since the previous randomization event, thus not disrupting the ongoing operation of the study. An example of such a trial where allocation probabilities changed after every subject was recently published.9

Figure 3. When the treatment assignment is constantly changing, it is wise to use dynamic allocation.
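The allocation mechanism itself can be very simple, as the sketch below suggests: a random number is compared against the current cumulative assignment probabilities at the moment each subject is randomized. The probabilities shown are hypothetical; in a response-adaptive study they would be refreshed by the model immediately before each randomization.

```python
# A minimal sketch of dynamic allocation: no code list, just a random draw
# against the current (hypothetical) assignment probabilities.

import random

def dynamic_allocate(assignment_probs, rng=random):
    """assignment_probs: dict mapping treatment -> current assignment probability
    (values assumed to sum to 1). Returns the treatment for the next subject."""
    u = rng.random()
    cumulative = 0.0
    for treatment, prob in assignment_probs.items():
        cumulative += prob
        if u < cumulative:
            return treatment
    return treatment  # guard against floating-point rounding at the boundary

current_probs = {"10 mg": 0.15, "20 mg": 0.25, "40 mg": 0.60}  # updated before each subject
print(dynamic_allocate(current_probs))
```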

Whichever approach is adopted, lists, parameters, and probabilities may be allowed to change at any time while the underlying randomization algorithm remains unchanged: a very efficient process. The key message is that studies vary and the choice between manual or fully automated processes is driven by protocol requirements.

Supplies and demand

Planning trial supply requirements can be a challenge for conventional study designs, and it becomes particularly challenging in an adaptive environment. IVR/IWR solutions are commonly used to simplify trial supply management by automating the restocking of site and depot inventories throughout a clinical trial, and they have a critical role to play in adaptive studies. Whatever the technology, the questions a sponsor needs to ask boil down to two distinct time horizons.

The first, each time the randomization scheme is changed, looks at an immediate time horizon: If I make the design adaptation, do I have the correct drugs in place right now? Does every site have enough stock on the shelves to accommodate existing and newly randomized patients? If not, then urgent action will be needed. The second question looks further ahead: With the current and further potential changes to the randomization scheme, will I have enough of the drug to complete the study? Taking a three to six month horizon, will there be sufficient stock at the depots to serve the sites within each country? Do I need to begin a new packaging campaign?
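The immediate-horizon check can be pictured with a simple sketch like the one below. It is purely illustrative: site names, doses, and quantities are invented, and a real IVR/IWR system would derive demand from the current randomization scheme and resupply schedule rather than from hard-coded figures.

```python
# Hypothetical sketch of the "immediate horizon" check: after a design adaptation,
# does every site hold enough packs of each remaining dose for current patients
# plus those expected to be randomized before the next scheduled resupply?

def sites_needing_urgent_resupply(site_stock, site_demand):
    """site_stock:  {site: {dose: packs on the shelf}}
    site_demand: {site: {dose: packs needed for current and imminent patients}}
    Returns {site: {dose: shortfall}} for any site that cannot cover demand."""
    shortfalls = {}
    for site, demand in site_demand.items():
        for dose, needed in demand.items():
            available = site_stock.get(site, {}).get(dose, 0)
            if available < needed:
                shortfalls.setdefault(site, {})[dose] = needed - available
    return shortfalls

stock  = {"Site 101": {"20 mg": 8, "40 mg": 2}, "Site 102": {"20 mg": 5, "40 mg": 6}}
demand = {"Site 101": {"20 mg": 4, "40 mg": 6}, "Site 102": {"20 mg": 5, "40 mg": 4}}
print(sites_needing_urgent_resupply(stock, demand))   # {'Site 101': {'40 mg': 4}}
```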

Sponsors may also wish to take the opportunity of asking whether they can be more creative in the way they package medication. Rather than employing a different pack type for each dose, it may be possible to adopt combinations of pack sizes to reduce the total quantity of medication the study requires and to limit waste whenever doses are dropped. They may also choose to introduce many more, smaller packaging campaigns over time, manufacturing only what is needed whenever elements get dropped from studies.

When estimating requirements for clinical trial supplies, most companies still perform a simple Excel spreadsheet exercise, extrapolating the number of patients and number of sites to arrive at the quantity of packs required, typically with an unnecessarily generous overage factor. We would advocate the use of more sophisticated supply chain simulation techniques. Simulation can accommodate real-life variability and provide estimates and ranges for the required quantity of each dose level of medication. Simulation also facilitates the use of "what if?" techniques, examining factors such as speed of recruitment, withdrawal rates or future design adaptations.
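The sketch below gives a flavor of the simulation approach, in contrast to a fixed spreadsheet extrapolation. It is a toy Monte Carlo model under stated assumptions (invented allocation probabilities, pack counts, and withdrawal behavior), not a production supply-forecasting tool, but it shows how sampled variability yields a range rather than a single point estimate for pack demand.

```python
# A minimal Monte Carlo sketch of supply-chain simulation: recruitment mix and
# withdrawal are sampled rather than fixed, giving a distribution of pack demand.

import random

def simulate_pack_demand(n_subjects, arm_probs, packs_per_completer,
                         withdrawal_rate, runs=2000, seed=7):
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        packs = 0
        for _ in range(n_subjects):
            arm = rng.choices(list(arm_probs), weights=list(arm_probs.values()))[0]
            full = packs_per_completer[arm]
            # hypothetical assumption: a withdrawn subject uses half the packs
            packs += full // 2 if rng.random() < withdrawal_rate else full
        totals.append(packs)
    totals.sort()
    return {"median": totals[runs // 2],
            "95th_percentile": totals[int(runs * 0.95)]}

estimate = simulate_pack_demand(
    n_subjects=120,
    arm_probs={"placebo": 0.4, "active": 0.6},      # hypothetical 2:3 allocation
    packs_per_completer={"placebo": 6, "active": 6},
    withdrawal_rate=0.15)
print(estimate)   # e.g. {'median': ..., '95th_percentile': ...}
```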

As for the most appropriate technology necessary to support supply management, it is easy to see why IVR technology has quickly established a pivotal role in adaptive trials. It is perfectly positioned to support changing supply strategies with its real-time access to sites and patients, as well as its ability to exchange information with EDC technology, centralized planning, and simulation data. An IVR approach, employing uniquely numbered dispensing units, allows for site and depot stock strategies to be modified easily, reflecting the current randomization algorithm. It also provides an opportunity to implement design changes without site knowledge, to maintain low stock levels at sites (which are replenished by regular resupplies), and to prevent investigators from drawing conclusions related to sudden surges in supply.10

Conclusion

By dropping treatment arms and/or adjusting allocation ratios, adaptive trials promise a better chance of identifying the optimal dose during Phase II by facilitating the investigation of more doses without substantially increasing sample size. They also provide a means of accelerating drug development by opening the possibilities of seamless transition from Phase II dose-finding into Phase III.

Successful implementation of adaptive trials relies heavily on the use of disparate technologies, sited remotely and under multiple ownership. Collection of subject response and safety data may feature a mix of EDC, ePRO, IVR, and IWR technology. Conducting a seamless adaptive trial clearly requires seamless integration of hardware and software solutions. Figure 4 illustrates how different technologies may be brought together.

Figure 4. An illustration of how different technologies such as EDC, ePRO, IVR/IWR can be brought together in an adaptive clinical trial to ensure seamless integration.

Our own vision is summed up perfectly by a quote from the PhRMA Group: "It is not too difficult to imagine a future in which systems are emerging that integrate data capture, monitor recruitment, monitor and trigger dispensing of drug supplies, and disseminate trial data to in-built decision-making modules facilitating preplanned adaptations of certain aspects of the study design. Under such systems, changes to say dosage would be triggered in a manner that is not only seamless, but also virtually invisible to the sponsor, patient, and investigator."3

Graham Nicholls,* MSc, is product manager, randomization and supply chain management, Damian McEntegart, MSc, FIS, is head of statistics and product support, and Bill Byrom, PhD, is vice president, product strategy and marketing, at ClinPhone PLC, Lady Bay House, Meadow Grove, Nottingham, NG2 3HF, UK, email: gnicholl@clinphone.com

*To whom all correspondence should be addressed.

References

1. Tufts Center for the Study of Drug Development Quantifies Savings from Boosting New Drug R&D Efficiency, September 2002, csdd.tufts.edu/NewsEvents/RecentNews.asp?newsid=20.

2. S. Frantz, "One in Five Drug Doses Set at Inappropriate Levels," Drug Discovery@Nature.com, September 2002, http://www.nature.com/drugdisc/news/articles/d060902-1.html.

3. PhRMA Working Group on Adaptive Designs, White Paper, Drug Information Journal, 40, 421-484 (2006).

4. Committee for Medicinal Products for Human Use, Reflection Paper on Methodological Issues in Confirmatory Clinical Trials with Flexible Design and Analysis Plan, March 2006, http://www.emea.eu.int/pdfs/human/ewp/245902en.pdf.

5. S. Gottlieb, Speech Before 2006 Conference on Adaptive Trial Design, Washington, DC, July 2006. http://www.fda.gov/oc/speeches/2006/trialdesign0710.html.

6. B. Bornkamp et al., "Innovative Approaches for Designing and Analyzing Adaptive Dose-Ranging Trials," Journal of Biopharmaceutical Statistics, 17, 965-995 (2007).

7. M. Krams et al., "Acute Stroke Therapy by Inhibition of Neutrophils (ASTIN): An Adaptive Dose Response Study of UK-279,276 in Acute Ischemic Stroke," Stroke, 34, 2543-2548 (2003).

8. M.K. Smith et al., "Implementation of a Bayesian Adaptive Design in a Proof of Concept Study," Pharmaceutical Statistics, 5, 35-50 (2006).

9. R.G. Maki et al., "Randomized Phase II Study of Gemcitabine and Docetaxel Compared With Gemcitabine Alone in Patients With Metastatic Soft Tissue Sarcomas," Journal of Clinical Oncology, 25, 2755-2763 (2007).

10. D. McEntegart et al., "Blinded by Science with Adaptive Designs," Applied Clinical Trials, March 2007, 56-64.
