A strong data management backbone that includes the use of EDC can make flexible designs an operational reality.
Pharmaceutical and biotechnology Phase III programs are critical to both future revenue and growth, and to successfully bringing novel therapies to market. Their chances for success, however, can often be compared to the outcome of a coin toss. Of course, ineffective or unsafe compounds must be removed from the development pipeline—but programs that fail due to incorrect dosage choices or underpowered studies should similarly be avoided. Unfortunately, current methodologies for planning and conducting these programs make improving the odds of successfully reaching the market difficult.
The challenge lies in efficiently managing the issues associated with underpowered trials, altering therapeutic treatments within the course of a single trial, and extending that dynamism across an entire development portfolio. These ideas can be described simply as a shift from retrospective to prospective decision-making. With support from the regulatory community and widening statistical acceptance of adaptive trial techniques, the success of a Phase III program can become a better prospect than "heads or tails."
Briefly, late-stage adaptive studies include continuous analysis and monitoring of unblinded study data. This analysis is performed by a limited group of statisticians and clinicians kept entirely separate from the traditional study team. If a study's planned enrollment is not sufficient to reach a significant statistical endpoint, the study can be dynamically expanded or extended. Likewise, if a study contains multiple arms or dosage profiles, only the most promising and effective will be fully explored, allowing for a more rapid achievement of a desired endpoint and limiting subjects' exposure to ineffectual or dangerous treatment courses. In fact, leveraging the ability to increase and decrease treatment groups in a targeted manner can even unlock the ability to fulfill the requirements of multiple Phase II and Phase III studies within a single protocol.
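To make the arm-dropping mechanics concrete, the following is a minimal sketch of an interim review rule. The threshold, arm names, and response counts are illustrative assumptions, not a validated statistical procedure; a real adaptive design would use prespecified group-sequential or Bayesian criteria.

```python
# Hypothetical interim "arm-dropping" rule: continue only arms whose observed
# response rate clears a futility threshold. All values are illustrative.

def interim_review(arms, min_response_rate=0.30):
    """arms: dict mapping arm name -> (responders, enrolled).
    Returns the arms recommended for continued enrollment, with their rates."""
    keep = {}
    for name, (responders, enrolled) in arms.items():
        rate = responders / enrolled if enrolled else 0.0
        if rate >= min_response_rate:
            keep[name] = rate
    return keep

# Example: three dose arms at an interim look.
arms = {"10mg": (4, 40), "20mg": (14, 40), "40mg": (18, 40)}
print(interim_review(arms))  # only the 20mg and 40mg arms continue
```

The same skeleton extends naturally to the other adaptations described above, such as expanding enrollment when the projected sample size looks insufficient.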
To further illustrate the shift from retrospective to prospective decision-making, consider a study in which unblinding occurs after the last subject visit. In this case, months or years have passed since the study's launch, during which time virtually no useful information has been available to evaluate success or failure. The same can be said of a planned interim analysis that occurs long after the study's initiation, at a time point that represents, at best, a guess as to when useful decision-making data will be available.
With an adaptive study, data is continuously evaluated to ensure decisions are made as quickly as possible to adjust dosages and treatment arms, potentially months or even weeks into a study.
Prospective decision-making also offers the potential for significant efficiencies in resource allocation and faster overall time-to-market. This is dramatically apparent when looking at its ability to shorten the decision-making timeline within a single study, and the benefits extend to how those studies fit together in a program and how multiple programs overlap and compete for resources in an organization's total portfolio.
This retrospective-to-prospective shift parallels the transformation of supply chain management that accompanied the maturation of sophisticated enterprise resource planning (ERP) systems. Technology and process precedents therefore exist for making prospective analysis of vast amounts of globally accumulated data a reality. Just as an electronics manufacturer gains an edge by shaving 10% off its overhead through better inventory management with ERP, a pharmaceutical company that requires 10% fewer subjects before receiving drug approval gains a significant competitive advantage.
For the industry to take these adaptive concepts and make them an operational reality, the processes and systems that are the foundation of clinical trials must change. Organizations looking to make prospective decision-making the norm rather than the exception have several barriers to overcome. A well-implemented clinical data management backbone, including tightly integrated EDC and a clinical data management system (CDMS)—ideally in a single system—can be used to efficiently and economically overcome many of them.
The need for tight EDC and CDMS coupling begins with some of the typical requirements of any trial, whether or not it employs adaptive techniques. The CRF itself may represent only a fraction of the data required for analysis. Labs, patient-reported outcomes (PROs), and core readings typically loaded into a CDMS "back end" must be integrated with CRF data captured in the "front end" EDC system. In a traditional trial, this integration and reconciliation can be handled long after the data from the various sources has been collected.
This process of data integration and reconciliation between laboratory information management systems, interactive voice response (IVR) systems, CDMS, and EDC, as most data managers can attest, is often nontrivial and enormously time consuming for teams. The luxury of time is eliminated in an adaptive trial: every delay not only prolongs the time to decision-making, but also adds opportunity costs that are amplified by the inability to make the changes that would increase the chances of a successful outcome. This means all data should be continually integrated, with edit checks firing in real time across both interactive and batch-loaded data sources. A central system combining clean CRF and auxiliary data becomes the authoritative source for adaptive analysis.
Beyond the data sources mentioned, an adaptive trial also requires continuous integration and availability of randomization data. In a traditional study, randomization schedules and subject assignments are often collected in entirely separate systems and finally brought together after the last patient has completed the final visit. Much like lab or PRO data, combining randomization data with other subject data can require reconciliation beyond the simple merging of tables or listings from two systems. To bring this information together in as close to real time as possible, that reconciliation must be constantly monitored and managed.
Across a broad swath of sponsors and trials, EDC has thoroughly demonstrated its value by allowing complex real-time edit checks to monitor similar reconciliations (e.g., ensuring there is a concomitant medication record for every adverse event treated by a nonstudy drug). Applying a similar technique to randomization and CRF data is computationally trivial. However, randomization data carries the unique requirement that it must be hidden from the vast majority of people working on the study. Only the team of statisticians and clinicians preparing analyses and recommendations for extending, shortening, or altering the study can be exposed to the otherwise blinded randomization data.
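The concomitant-medication example above can be sketched as a simple cross-panel check. The field names and query text are illustrative assumptions; a production EDC system would express this as a configured edit-check rule rather than ad hoc code.

```python
# Sketch of the cross-panel edit check described above: every adverse event
# flagged as treated with a non-study drug should have at least one matching
# concomitant medication record. Field names are hypothetical.

def check_ae_conmed(adverse_events, con_meds):
    """Return query messages for AEs treated by a non-study drug
    when the subject has no concomitant medication record."""
    conmed_subjects = {m["subject_id"] for m in con_meds}
    queries = []
    for ae in adverse_events:
        if ae.get("treated_with_nonstudy_drug") and ae["subject_id"] not in conmed_subjects:
            queries.append(f"Subject {ae['subject_id']}: AE '{ae['term']}' "
                           "reports non-study treatment but no conmed recorded")
    return queries

aes = [{"subject_id": "101", "term": "headache", "treated_with_nonstudy_drug": True},
       {"subject_id": "102", "term": "nausea", "treated_with_nonstudy_drug": True}]
meds = [{"subject_id": "101", "drug": "acetaminophen"}]
print(check_ae_conmed(aes, meds))  # flags subject 102 only
```

Running the same kind of rule against randomization and CRF data is, as noted, computationally no harder; the difficulty is purely in who may see the result.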
Therefore, if randomization data is incorporated in the EDC/CDMS core system, it must be hidden from a nurse coordinator or monitor accessing electronic CRFs. Similarly, a clinician working on the study may want to access tabulated data that also obscures the study arm, dosage or other treatment status for all subjects. But the adaptive decision makers, or those making adaptive recommendations, must be able to browse and review the entire patient record on a subject-by-subject basis or in tabular form (i.e., view the CRF, labs, PRO, and randomization within a case from one source).
This requires more sophisticated data models and security controls than those found in most legacy and many current EDC and CDMS platforms. However, systems capable of this type of field-level, role-specific, and workflow-driven security do exist, and the feasibility of this type of security model is seen throughout systems employed in the securities industry (such as over-the-counter equity or bond trading systems).
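A minimal sketch of such field-level, role-specific masking follows. The role names, blinded fields, and mask token are assumptions for illustration; a real platform would also enforce workflow state and maintain an audit trail of every access.

```python
# Hypothetical field-level, role-specific masking of randomization data.
# Roles and field names are illustrative only.

BLINDED_FIELDS = {"treatment_arm", "dose"}
UNBLINDED_ROLES = {"adaptive_statistician", "dsmb_clinician"}

def view_record(record, role):
    """Return a copy of a subject record with blinded fields masked,
    unless the role belongs to the unblinded adaptive team."""
    if role in UNBLINDED_ROLES:
        return dict(record)
    return {k: ("***" if k in BLINDED_FIELDS else v) for k, v in record.items()}

record = {"subject_id": "101", "visit": "Week 4",
          "treatment_arm": "B", "dose": "20mg"}
print(view_record(record, "site_monitor"))           # arm and dose masked
print(view_record(record, "adaptive_statistician"))  # full record visible
```

The key design point is that blinding is a property of the field and the viewer's role, not of the database as a whole, so one authoritative data store can serve both blinded and unblinded audiences.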
Even with the right platform satisfying many of these requirements, technology does not, in itself, provide everything an organization needs to implement adaptive trials. But the rapidity of data availability in EDC, the tight coupling of EDC and CDMS, and a security model deep enough to accommodate the requirements of managing a blinded study alongside other data together set the stage for building new processes and procedural controls for adaptive study design, execution, and submission.
Combining the competitive advantage that adaptive studies can bring today with the likelihood that they will become an operational necessity makes a compelling argument for including the ability to meet these new requirements among the criteria for evaluating EDC and CDMS systems. Much like the critical-path Phase III compound with a 50% chance of successfully reaching the market, the assurance that an organization's study infrastructure is ready for the shift from retrospective to prospective decision-making can and should be better than "heads or tails."
Glen de Vries is chief technology officer with Medidata Solutions, 79 Fifth Avenue, 8th Floor, New York, New York 10003.
For more information on this subject, the transcript of a speech given by former FDA Deputy Commissioner Scott Gottlieb, MD, on adaptive clinical trials is available online at http://www.fda.gov/oc/speeches/2006/trialdesign0710.html. In addition, a summary of Bayesian statistics—a key component of adaptive design—is available online in "Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials: Draft Guidance for Industry and FDA Staff," http://www.fda.gov/cdrh/osb/guidance/1601.html.