In August 2012, a clinical trial was initiated under US and Canadian Investigational New Drug Applications (INDs). The protocol was operationally designed for the clinical sites to perform direct data entry (DDE) of subject data at the time of the office visit, and for the clinical research associates (CRAs) to execute risk-based (adaptive) monitoring (RBM). For DDE, the trial used Target e*CRF for electronic data capture (EDC) of case report forms (CRFs), Target e*Clinical Trial Record (e*CTR) as the subject's eSource record, and Target Document as the electronic Trial Master File (eTMF). After meeting with the U.S. Food and Drug Administration (FDA) and Health Canada (HC) to review the protocol, the use of RBM and the proposed eSource methodology, a multicenter clinical trial was initiated in the US and Canada. The study was performed at 18 clinical sites, which screened 656 subjects in order to treat 180. All of the clinical sites were required to use DDE to enter the trial data at the time of the office visit, and the CRAs were trained to conduct risk-based on-site and central monitoring as defined in the clinical monitoring plan. Results from the study indicated that DDE at the time of the office visit, combined with RBM, achieved acceptable levels of protocol compliance and data quality. As a result of the daily and weekly central monitoring activities: there was close to 100% compliance with all protocol requirements; the need for protocol amendments was identified and implemented rapidly, when only a few subjects had been enrolled; modifications to EDC edit and logic checks were completed early in the study, which minimized issuing the same query multiple times; and the CRAs and site personnel were retrained based on findings made during the weekly quality by design (QbD) review meetings.
This paper supports the rationale for RBM integrated with eSource methodologies, and for the pharmaceutical industry to move in the direction of the paperless clinical trial. Once RBM and DDE are adopted, there will be a major reduction in monitoring resources and costs needed to manage a clinical trial, with no loss of quality.
Introduction
In order to support the transformation of how the pharmaceutical industry manages the performance of clinical trials, the Food and Drug Administration (FDA) issued, in 2013, its Final Guidance for Industry: Oversight of Clinical Investigations - A Risk-Based Approach to Monitoring 1 and a Guidance for Industry: Electronic Source Data in Clinical Investigations. 2 These guidances are consistent with the European Medicines Agency (EMA) Reflection Paper on Risk Based Quality Management in Clinical Trials 3 and Expectations for Electronic Source Data and Data Transcribed to Electronic Data Collection Tools in Clinical Trials. 4
In 2011, a critical publication on the varied practices of monitoring clinical trials 5 was issued by the Clinical Trials Transformation Initiative (CTTI), a public-private partnership formally established in 2008 by the FDA and Duke University to identify practices that, through broad adoption, will increase the quality and efficiency of clinical trials. This publication has served as an impetus for the pharmaceutical and device industries, together with the regulators, to address the monitoring of clinical trials, which, as currently practiced, is inefficient, costly and thus unsustainable. 6,7 In 2010, a comprehensive paper on the pros and cons of risk-based monitoring (RBM) was published, 8 with a follow-up paper urging the industry that it is time to change to RBM. 9
Quality by Design (QbD) is a concept first outlined by Joseph M. Juran and is based on the premise that quality should be part of the project planning process. 10 According to Juran, most quality crises and problems are due to the way quality is originally planned. While QbD methodologies have been used to advance product and process quality in every industry, they have most recently been adopted by the US FDA for drug manufacturing. 11 In clinical research, the protocol identifies the quality requirements, and detailed plans and activities complement what is in the protocol. Key elements of QbD methodologies include a well-designed protocol, proper execution of the protocol, steps to assure protocol compliance, corrective and preventive action methodologies, and clear and concise communication strategies.
FDA has recently published a paper on Quality by Design (QbD) methodologies describing how clinical research is changing, and how FDA and other regulatory authorities are fostering these changes. 12 The eClinical Forum 13 and TransCelerate 14 have recently published very thoughtful papers on how the pharmaceutical and device industries could address RBM. Most recently, an approach to quality assurance in the 21st century was published in the Monitor, describing a QbD methodology for clinical research. 15
Results from a Phase II study using RBM and DDE, in which the clinical site entered each subject's data into an electronic data capture (EDC) system at the time of the office visit, demonstrated a major reduction in on-site monitoring compared with comparable studies using paper source records; that EDC edit checks could be modified early in the course of the clinical trial; and that protocol compliance issues could be identified in real time and rapidly corrected. 16 The use of DDE and near real-time monitoring also led to rapid detection of safety issues. The clinical site reported major cost savings, and estimated that in terms of data entry alone, the site saved 70 hours of labor by not having to transcribe data from paper source records into the EDC system. 17
The current paper reports the results of a clinical trial initiated in both the US and Canada, involving 18 clinical sites and 180 treated subjects, in which all of the clinical sites performed DDE and all of the CRAs performed RBM, with the bulk of the monitoring activities occurring centrally from the home office.
In addition to a well-designed protocol, the following QbD elements were operationally incorporated into the clinical trial:
Risk-Based Clinical Data Monitoring Plan (CDMoP)
A written strategy was developed to address the review of site-specific source data/documents, the schedule for on-site monitoring, the frequency of central monitoring and the issuance of central monitoring reports. The CDMoP specified roles and responsibilities as well as the specific monitoring requirements to ensure that the clinical sites complied with the study protocol and regulatory requirements.
The CDMoP also indicated that monitors were to record all monitoring reports in the EDC system, and that all sponsor and study documents were to be maintained in the Electronic Trial Master File (eTMF).
Within the CDMoP, a risk mitigation strategy identified a total of 23 risks to subject safety and/or trial outcome. Each risk was assigned a low-to-high probability score (1-3) and a severity score (1-3); the total score for each risk was the product of the two scores, and each risk was assigned a risk mitigation strategy. For example, Subject Dropouts was one risk to the trial outcome, since any dropout was potentially to be considered a treatment failure. Subject Dropouts was therefore assigned a severity score of 3 and a probability score of 2, for a total score of 6. The risk mitigation strategy was “Training and Evaluating and Resolving Reasons for Dropouts, Phone Alerts Prompted by the eCRF and Review of Online Management Reports.”
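The probability-times-severity scoring described above can be sketched in a few lines. Only the Subject Dropouts entry is taken from the text; the other risk-register entries are hypothetical, as the study's full list of 23 risks is not reproduced here.

```python
# Sketch of the CDMoP risk-scoring scheme: each identified risk receives a
# probability score (1-3) and a severity score (1-3); the total score is
# their product (range 1-9), and higher scores get priority for mitigation.

def risk_score(probability: int, severity: int) -> int:
    """Total risk score = probability (1-3) x severity (1-3)."""
    if probability not in (1, 2, 3) or severity not in (1, 2, 3):
        raise ValueError("scores must be 1, 2 or 3")
    return probability * severity

# Illustrative risk register; only Subject Dropouts comes from the text.
risks = [
    ("Subject Dropouts", 2, 3),                      # score 6, as in the example
    ("PK sample mishandling (hypothetical)", 1, 3),  # score 3
    ("Missed visit window (hypothetical)", 2, 2),    # score 4
]

# Rank risks so mitigation effort is focused on the highest scores first.
for name, p, s in sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True):
    print(f"{name}: {risk_score(p, s)}")
```

Scoring and ranking in one place makes the register easy to reassess as the trial progresses, which is how the CDMoP was used in practice.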
The CDMoP documented that the study would use DDE at the time of the clinic visit. The eClinical Trial Record (eCTR) allowed the clinical study sites to have a contemporaneous electronic copy of the subject’s record. To comply with regulations, access to the eCTR was controlled by the clinical investigator or designee and not the pharmaceutical company sponsoring the study, and these original data were stored in “a trusted, third-party repository” prior to the data being transmitted to the EDC database.
Initially, weekly meetings were held with key team members to review all monitoring activities. Integrated online data management reports addressed safety, quality, compliance and study specific issues. As the study progressed, the frequency of these meetings was changed to every two weeks.
On-site and Central Monitoring Activities
As part of the approach to RBM, the CDMoP identified the need to perform both on-site and central monitoring. To avoid the need to duplicate data already within the EDC system, key metrics from the EDC system were displayed within the monitoring reports. The monitoring reports were generated online within the EDC portal and signed electronically by the CRA and the CRA’s supervisor. Lists of observations requiring followup were also maintained within the EDC portal.
A detailed Safety Monitoring Plan was developed. There was nothing unique in this approach to safety monitoring except that an Adobe Acrobat version of an FDA-approved online MedWatch Form 3500A and CIOMS Form 1 could be generated directly from the EDC system for both original and followup reports. Both the investigator and the Medical Monitor could enter online narratives and the Medical Monitor could control the finalization of the original and followup reports needed for regulatory submissions. These reports became an integral part of the EDC system and could be retrieved on demand, based on permissions, anywhere in the world. In addition to the agreed-upon procedures involving serious adverse event (SAE) reporting to both the sponsor and regulatory authorities, email alerts occurred at the time of data entry for any SAE and if any SAE data were modified. In addition, the EDC system summarized all adverse events and it was possible to assess adverse events across sites.
As part of the QbD methodology, initially, weekly meetings of approximately one hour occurred with the clinical team (n=3), the sponsor (n=2) and an outside expert who performed quality oversight (n=1). Over the course of eight months, this represented a total of 20 meetings and 80 hours (two weeks) of human resources. This effort was roughly equivalent to three on-site monitoring visits.
Time to Data Entry from the Visit Date
One of the key advantages promised by EDC, but never consistently achieved, is rapid access to the clinical trial data from the time of the office visit. With DDE, the site was “forced” to enter the data at the time of the office visit. However, as this involved a change in behavior at the clinical site, there was no guarantee that the sites would comply. Therefore, the time to data entry from the day of the clinic visit was assessed. Not all data could be entered directly at the time of the office visit, since sites maintained certain source records outside the EDC system; some of the data associated with these source records were therefore entered after the clinic visit. For example, unreported medical histories and medications were identified during “chart review” at the time of the monitoring visit.
In spite of DDE being a “disruptive innovation,” 92% of data were entered on the day of the office visit, 95% within five days, and 98% within eight days (see Figure 1). Some of the outliers were due to findings during the monitoring visits and to delays in data entry when the sites waited for additional information to complete a form.
Figure 1: Time to Data Entry from the Day of the Office Visit
Time to Data Review
The time to data review by the monitors is a key factor to optimize RBM, since without having access to real-time data, the same errors are repeated and any corrective actions are delayed. Key forms included in this analysis were:
- Subject Registration
- Medical History
- Visit Date
- Vital Signs
- Clinical Summary
- Drug Administration
- Pharmacokinetic (PK) Sampling
A total of 13,124 forms were analyzed from 180 subjects (see Figure 2). Results showed that 50% of the entered forms were reviewed within 13 hours (0.54 days) of data entry, 75% within 27 hours (1.1 days), 95% within 124 hours (5.2 days) and 100% within 335 hours (14 days). It should be noted, however, that occasionally a form was “missed” by the CRA and a small number of forms were “saved” awaiting additional information or conclusions based on consultations with the principal investigator (PI).
Figure 2: Time to Data Review (hours) From Day of Data Entry
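The percentile figures reported above can be derived from per-form review latencies. A minimal sketch using a nearest-rank percentile follows; the latencies are hypothetical, not the study's 13,124 actual forms.

```python
import math

def nearest_rank_percentile(values, p):
    """Smallest value v such that at least a fraction p of values are <= v."""
    ordered = sorted(values)
    k = max(0, math.ceil(p * len(ordered)) - 1)
    return ordered[k]

# Hypothetical review latencies in hours (time from data entry to CRA review)
latencies = [2.0, 5.5, 13.0, 20.0, 26.5, 48.0, 120.0, 300.0]

for p in (0.50, 0.75, 0.95, 1.00):
    print(f"{int(p * 100)}% of forms reviewed within "
          f"{nearest_rank_percentile(latencies, p)} h")
```

Run against the real entry and review timestamps, the same calculation yields the 50%/75%/95%/100% thresholds quoted in the text.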
On-site and Central Monitoring Activities
Between August 1, 2012 and May 31, 2013, 31 on-site monitoring visits were performed at the 18 sites. No other on-site monitoring was deemed necessary based on the observations at the initial on-site visit, daily review of online eCRFs, in-house review of the eTMF, and site audits by Quality Assurance. The bulk of the second monitoring visit was combined with the closeout visit, since most of the subjects had completed treatment at the time of the visit. The final closeout activities were performed over the phone.
Since measurements on the last day of the three-month treatment phase of the study included evaluation of the primary endpoint, prior to the first subject arriving for that final visit, each site was retrained over the phone as to the required activities taking place on the final day of the study. In addition, each site was instructed to inform the CRA when the first subject was to arrive for the Day 90 visit, so that the CRA could immediately review all of the data entered on that day. An email alert was also sent to the project team at the time the Day 90 visit date was entered within the EDC system.
A total of 211 central monitoring reports were issued, and once it was clear that both the sites and monitors were adequately trained, the frequency of issuing these reports was changed from every two weeks to every four weeks.
Source Data Verification (SDV)
For this study, there was a total of 27,957 EDC “pages” entered for 29 unique CRFs. As part of the approach to RBM, the CDMoP identified specific data elements collected at the clinical sites either within the electronic medical record (EMR) or on paper charts for SDV.
A total of 5,581 of these paper/electronic source records were reviewed at the site and compared with the clinical trial database. These source records represented about 20% of all entered pages. Results showed that only 13 of the 29 forms had any changes, with a total of 48 changes made to the database as a result of SDV (Table 1). These changes represented a 0.86% “error rate.” The vast majority of the changes (32/48; 66.7%) occurred in just three forms: medications (13; 27%), medical history (10; 21%) and clinical laboratory results (9; 19%).
Table 1: Summary of Changes Made to the Database Post SDV
In order to evaluate this 0.86% “error rate,” Table 2 identifies examples of the types of changes made to the database as a result of SDV. As can be seen, only one modification, Titration Result (278.3 changed to 123.2), could have had any impact on the study. However, as this parameter was defined as Critical to Quality (CTQ), a specific risk to protocol compliance and subject safety, a copy of the record was available to the CRA at the same time the site received it. None of the remaining changes identified via the SDV process had any impact on subject safety, data integrity or protocol compliance.
Table 2: Itemized Changes to the Database Post SDV
There were a total of 1,099 queries generated from the 27,966 CRFs entered by the clinical sites, an overall form query rate of 3.9%. However, only 403 (36.7%) of the queries resulted in changes to the database. Thus, only 1.4% (403/27,966) of forms had database changes as a result of queries generated by the CRAs.
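The query-rate arithmetic is easy to reproduce; the counts below are taken from the text.

```python
# Reproducing the query-rate figures reported in the text.
total_forms = 27_966   # CRFs entered by the clinical sites
queries = 1_099        # manual queries generated by the CRAs
changed = 403          # queries that resulted in a database change

print(f"overall form query rate: {queries / total_forms:.1%}")    # ~3.9%
print(f"queries leading to a change: {changed / queries:.1%}")    # ~36.7%
print(f"forms changed via queries: {changed / total_forms:.1%}")  # ~1.4%
```

The last figure is the one that matters for RBM planning: fewer than 2 forms in 100 were actually corrected through manual queries.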
In order to measure the efficiency of CRA review activities, the time from data entry to query generation was assessed. Strikingly, 39% of queries were generated on the same day as the office visit, 58% within one day, and 70.6% within five calendar days. This means that corrective actions could occur early and rapidly during the clinical trial.
Figure 3: Cumulative Time from Data Entry to Initial Manual Query Generation
Time from Query Generation to Resolution
Queries were generated in response to an edit check firing at the time of data entry (auto queries), when the reason provided by the clinical site required additional information, or as a result of a de novo request for additional information based on clinical review of the CRFs. For example, an ongoing diagnosis of Type 2 Diabetes was reported but no treatment was documented. While online monitoring was done in ‘real-time’ with current snapshots of data, queries were also generated from online batch edits run within the EDC system, based on cumulative comparisons of information entered across forms and over time that suggested data inconsistencies.
In the previous tables, it was demonstrated that with central monitoring and DDE, it is possible to rapidly enter data and generate queries from the time of data entry. The next challenge was to assure that queries were resolved rapidly. In the following figure (Figure 4), the time to query resolution was assessed for all generated queries, including those generated manually, those based on edit checks fired at the time of data entry (auto queries), and those resulting from batch queries run at night within the EDC system.
As can be seen, with central monitoring, 22% of queries were resolved on the same day they were generated, 78% within five calendar days, 91% within 10 days and 99% within 30 days.
Figure 4: Cumulative Days from Generation of Manual Queries to Resolution
One of the keys to a successful clinical trial is timely data entry and data review, ideally with both occurring at the time of the office visit.
Risk has to do with the probability and impact of an event to the outcome of a clinical trial, and risk mitigation strategies are put in place to manage that risk. Clearly, we should not put the same effort into monitoring variables that “do not matter” as we do into the ones that “do matter.” RBM is not about more or less monitoring visits or SDV, but rather, targeted, efficient and intelligent monitoring. CRAs need to be retrained in their way of monitoring by focusing on the elimination of errors that matter.
DDE can dramatically reduce or even eliminate paper records, and as a result, SDV should also be dramatically reduced. Since SDV typically assesses how well people transcribe from one medium to another, and since such transcription “error rates” are typically below 1%, SDV as currently performed should have no impact on the study results. However, as part of the risk assessments performed at the beginning of and during the study, the rationale and scope of SDV should be defined. SDV requirements will most likely be replaced, in part, with source data review (SDR), or what would be better described as Chart Review. Chart Review truly allows for a snapshot of the study subject, where critical information “buried” in the chart can be discovered.
After all, monitoring is all about training and oversight. Think about a typical Phase I PK study. Should we put in the same effort to verify the date of an appendectomy 10 years in the past as we would to verify the time of critical PK draws, storage conditions of the samples and shipping procedures for analysis and methods validation?
The main lessons learned from the study were:
- It is important to identify, at the beginning of the study, CTQ measures that have the potential to affect subject safety and protocol compliance; to reassess these measures during the study; and to train and retrain both the clinical sites conducting the study and the CRAs performing study monitoring and oversight
- When data are entered at the time of the subject’s visit (as is done when sites use electronic health records) and data review occurs daily, there will be a marked reduction in the need to perform on-site monitoring
- Traditional SDV by itself adds little value to support data quality and subject safety
- Monitors should now act more like auditors when monitoring clinical sites, and will require new skill sets and training
- Monitoring of clinical trials may become more of an in-house function than an on-site activity as more data become accessible from centralized monitoring activities
- It is both imperative and clearly possible to fundamentally re-think our notions of what it means to monitor clinical trials. Doing so will yield huge benefits.
The following are recommendations to consider when doing RBM and DDE:
- Accept the fact that errors will occur
- Get the data entered into the database in ‘real-time’
- Be flexible and make no a priori assumptions as to the elements of the RBM plan
- Create a quality section within the protocol to define the topline quality plan
- Have a workable detailed clinical data monitoring plan including CTQ parameters, a risk assessment and mitigation strategy, monitoring priorities, frequency of data review and practices to assure that critical procedures and data collection are being done as per the protocol
- Understand that RBM is not about reduced SDV or reduced frequency of monitoring visits, but rather what should be monitored and whether to monitor centrally or on-site
- Train and retrain both the sites and monitors on critical issues and variables on a regular basis
- Meet with regulators (e.g. FDA’s Office of Scientific Investigations at a Type C meeting or EMA for Scientific Advice) to review your risk-based monitoring plan and the protocol quality section. This can be done during the early drug development phase and formal minutes should be generated and maintained.
The present study clearly demonstrates the advantages of RBM and DDE. Beyond potential cost savings, benefits include:
- Improved timeliness to obtain quality data
- Ability to make faster, mid-course corrections to both protocols and EDC systems
- Improved site/sponsor relationships as both sponsor and clinical sites can focus on things that matter and thus more effectively allocate resources
- Ability of the sites to see more study subjects during the day since there is virtually no data entry needed once the study subject leaves the clinic
- The need for less physical resources at the study sites to support on-site monitoring visits
Jules T. Mitchel, MBA, PhD, President ([email protected])
Dean Gittleman, MS, Sr. Director Operations ([email protected])
Judith M. Schloss Markowitz, MS, Senior Project Manager ([email protected])
Timothy Cho, BS, Associate Director, Application Development ([email protected])
Yong Joong Kim, MS, Senior Director of Data Management and Application Development ([email protected])
Joonhyuk Choi, BS, Director of Application Development ([email protected])
Michael R. Hamrell, PhD, MORIAH Consultants ([email protected])
Sergio Dalla Nora, Associate Director of Clinical Research, Ferring Canada ([email protected])
Dario Carrara, PhD, General Manager and Head of Virtual Development, Ferring Galeschines Labor AG ([email protected])
The authors want to thank Joyce Hays, MS, CEO and Mark Horn MD, CMO of Target Health for reviewing the manuscript.
For this publication, Target e*CRF® was used for EDC, Target’s e*CRF® Viewer was used to access the eSource records and Target Document was used as the eTMF.
- FDA. August 2013. Guidance for Industry: Oversight of Clinical Investigations - A Risk-Based Approach to Monitoring.
- FDA. September 2013. Guidance for Industry: Electronic Source Data in Clinical Investigations.
- EMA. 2013. Reflection Paper on Risk Based Quality Management in Clinical Trials (EMA/INS/GCP/394194/2011).
- EMA. 2010. Reflection Paper on Expectations for Electronic Source Data and Data Transcribed to Electronic Data Collection Tools in Clinical Trials (EMA/INS/GCP/454280/2010).
- Morrison B, Cochran C, Giangrande J, et al. 2011. Monitoring the Quality of Conduct of Clinical Trials: A Survey of Current Practices. Clinical Trials 8:342-349.
- Getz K. 2012. Study Monitor Workload High & Varied With Wide Disparity by Global Region. Tufts CSDD Impact Report, January/February, pp. 1-4.
- Getz K. 2012. Flying Blind on CRA Workload, Time Demands. Applied Clinical Trials, July 1, 2012.
- Tantsyura V, Grimes I, Mitchel J, et al. 2010. Risk-Based Source Data Verification Approaches: Pros and Cons. Drug Information Journal 44:745-756.
- Mitchel J, Schloss Markowitz J. 2011. Risk-Based Monitoring: Time For Change. International Clinical Trials, February:22-29.
- Juran JM. 1992. Juran on Quality by Design: The New Steps for Planning Quality Into Goods and Services. Free Press.
- Yu LX. 2008. Pharmaceutical Quality by Design: Product and Process Development, Understanding, and Control. Pharmaceutical Research 25:781-791.
- Ball L, Meeker-O'Connell A. December 2011. Building Quality into Clinical Trials. Monitor, pp. 11-16.
- Brothers JM, Gittleman DA, Haag T, et al. 2013. Risk-Based Approaches. Applied Clinical Trials, July/August:26-38.
- TransCelerate BioPharma Inc. 2013. Position Paper: Risk-Based Monitoring Methodology (see website).
- Mitchel J, Gittleman D, Schloss Markowitz J, et al. 2013. A 21st Century Approach to QA Oversight of Clinical Trial Performance and Clinical Data Integrity. Monitor, December:41-46.
- Mitchel J, Schloss Markowitz J, Yin H, et al. 2012. Lessons Learned From a Direct Data Entry Phase 2 Clinical Trial Under a US Investigational New Drug Application. Drug Information Journal 46:464-471.
- Mitchel J, Weingard K, Schloss Markowitz J, et al. 2013. How Direct Data Entry at the Time of the Patient Visit is Transforming Clinical Research: Perspective from the Clinical Trial Research Site. InSite, 2nd Quarter:40-43.