Targeting Source Document Verification

Applied Clinical Trials, 02-01-2011, Volume 20, Issue 2


Monitoring of clinical trials is a federally mandated responsibility of trial sponsors and a core offering of contract research organizations (CROs) that is crucial to the validity of clinical research. Source document verification (SDV)—the comparison of reported trial data with information from primary health records of trial subjects—is an important component of trial monitoring intended to ensure the integrity of trial data. Sponsors and project managers should develop SDV strategies for each trial that comply with regulatory requirements and accommodate the size, complexity, design, and purpose of the trial.

One hundred percent SDV, the comparison of each data point on every case report form (CRF) to subject medical records, may not be appropriate for most large, multi-center trials. Targeted SDV—the verification of critical trial data, including study endpoints—has the potential to improve safety oversight, data quality, regulatory compliance, protocol adherence, and overall trial validity while reducing costs and the time to database lock for large, multi-center trials.

The Guidance on Good Clinical Practice (GCP), developed by the International Conference on Harmonisation (ICH), requires that trial monitors have access to and can review source documents. This guidance, ICH E6, has been adopted both by the Food and Drug Administration (FDA) in the US Code of Federal Regulations (CFR) under Title 21 and by the European Union (EU) as part of the EU directive on clinical trials. ICH E6 and the regulatory authorities that have adopted it refer to source documents (i.e., primary health records) in the sections on investigators, sponsors, trial protocols, and essential documents.

According to the E6 guidance, source documents must be kept in good order and investigators must make source documents available to the sponsor and monitors working on behalf of the sponsor. Investigators are responsible for ensuring that the data reported on CRFs is consistent with source documents,1 and the sponsor is responsible for ensuring that each subject has provided written consent to direct access to his or her medical records.2 Sponsors must also ensure that the trial protocol or other written agreement specifies that investigator(s)/institution(s) will allow trial-related monitoring3 and that the monitors verify the source documents are accurate, complete, up-to-date, and maintained.4

Source documents are used to achieve two explicit regulatory objectives: to document the existence of the subjects and to substantiate the integrity of trial data.5 Both objectives depend on effective SDV by monitors. The most effective strategies for SDV depend on the particulars of each clinical trial. While 100 percent SDV is not required by law, industry standards maintain it as the most straightforward approach to regulatory compliance. However, the FDA guideline for monitoring clinical investigations states, "...the monitor should compare a representative number of subject records and other supporting documents to the investigator's report..."6

The FDA guideline explicitly refers to a representative number of subject records, not all subject records. The Department of Health and the Medical Research Council in the United Kingdom have stated, "verifiable...does not imply that every item of data recorded must be supported by a source document or checked."7 The number of subjects, the experience of the clinical site, the clinical endpoints, and the nature of ancillary data are several of the factors that should be considered when developing a strategy and protocol for a project-specific SDV plan.

Effective targeted SDV

Under several conditions, targeted SDV may be more appropriate and effective than 100 percent SDV for large, multi-center trials. Targeted SDV prioritizes critical data and uses random sampling methods to select data for SDV during an on-site monitoring visit. Trials with many sites, and a large volume of data per site, may require a combination of targeted SDV and extensive statistical monitoring of data as it accrues to ensure the quality of the data generated.

A report on diversifying monitoring methods states, "central monitoring of data using statistical techniques may help to identify departures from expected patterns which might suggest incorrect procedures, or even data fraud, thereby identifying sites that require further investigation."8 The internal data review processes that routinely lead to individual data queries can employ rigorous statistical methods to investigate overall quality and integrity of the entire trial data set and subsets of trial data.
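As a minimal sketch of this kind of central statistical monitoring, one simple technique is to flag sites whose summary metrics deviate markedly from the cross-site norm. The per-site query rates and the two-standard-deviation threshold below are illustrative assumptions, not values from the article:

```python
import statistics

def flag_outlier_sites(site_values, z_threshold=2.0):
    """Flag sites whose summary value deviates from the cross-site mean
    by more than z_threshold standard deviations."""
    sites = list(site_values)
    values = [site_values[s] for s in sites]
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []
    return [s for s in sites if abs(site_values[s] - mean) / sd > z_threshold]

# Hypothetical per-site query rates (queries per 100 CRF pages)
query_rates = {"site_01": 4.1, "site_02": 3.8, "site_03": 4.4,
               "site_04": 12.9, "site_05": 3.9, "site_06": 4.2}
print(flag_outlier_sites(query_rates))  # site_04 stands out
```

A flagged site is not proof of a problem, only a signal that the site warrants further investigation, as the report quoted above suggests.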

Trials employing highly experienced clinical sites (sites that have previously demonstrated the ability to recruit and retain subjects, to train on-site trial staff effectively, and to report complete and accurate data) are also appropriate candidates for targeted SDV strategies. Finally, when a targeted SDV strategy is in use, there should be no statistical difference between verified and unverified data,9 and the error rate on CRFs should remain low.

The fixed-fields approach to targeted SDV reviews all entries in specific CRF data fields that pertain to primary and secondary clinical endpoints, as well as all unexpected adverse events (AEs). This approach includes 100 percent SDV of all fields for the first one or two subjects enrolled at each site to assess data quality and potential staff training deficiencies. When employing fixed-field SDV, discrepancy tracking can be used to signal the need for remediation at a particular site. For example, discrepancy tracking may reveal specific errors requiring an on-site visit by the monitor, a temporary suspension of targeted SDV, and a deployment of 100 percent SDV for all CRF fields. Retraining of the site staff, combined with closer extended monitoring, is often the remedy, followed by resumption of the targeted SDV protocol.
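The escalation logic described above can be sketched as a simple decision rule. The 5 percent error-rate threshold and the field counts are hypothetical example values, not figures from the article:

```python
def sdv_mode(discrepancies, fields_verified, first_subjects_done,
             error_rate_limit=0.05):
    """Decide the SDV mode for a site based on tracked discrepancies.

    Returns 'full' (100 percent SDV) until the first one or two subjects
    are complete, or while the observed error rate exceeds the limit;
    otherwise 'targeted'. The threshold is an illustrative assumption.
    """
    if not first_subjects_done:
        return "full"
    if fields_verified and discrepancies / fields_verified > error_rate_limit:
        return "full"          # escalate: suspend targeted SDV, retrain site
    return "targeted"

print(sdv_mode(3, 200, True))   # 1.5% error rate -> 'targeted'
print(sdv_mode(18, 200, True))  # 9% error rate -> 'full'
```

In practice the trigger would be defined in the monitoring plan, but the shape of the rule is the same: targeted SDV continues only while tracked discrepancies stay within an agreed tolerance.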

Random field selection, another approach to targeted SDV, utilizes random statistical sampling to select CRF fields for SDV during a site visit. Inclusion/exclusion criteria, informed consent forms, and all serious AEs are subject to 100 percent SDV, as are all CRF data fields for the first one or two patients enrolled. As with fixed-field SDV, discrepancy tracking can lead to remediation at a site, including the temporary or permanent deployment of 100 percent SDV for all CRF fields.
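A hedged sketch of random field selection follows: mandatory fields are always verified, and a random fraction of the remaining fields is drawn for the visit. The field names and the 40 percent sampling fraction are illustrative assumptions:

```python
import random

def select_fields_for_sdv(crf_fields, mandatory, fraction=0.25, seed=None):
    """Select CRF fields for SDV at a monitoring visit.

    Mandatory fields (e.g., inclusion/exclusion criteria, informed
    consent, serious AEs) are always verified; a random fraction of the
    remaining fields is sampled. Field names here are hypothetical."""
    rng = random.Random(seed)
    optional = [f for f in crf_fields if f not in mandatory]
    k = max(1, round(fraction * len(optional)))
    return sorted(set(mandatory) | set(rng.sample(optional, k)))

fields = ["incl_excl", "consent", "serious_ae", "vitals", "labs",
          "con_meds", "ecg", "demographics"]
always = {"incl_excl", "consent", "serious_ae"}
print(select_fields_for_sdv(fields, always, fraction=0.4, seed=7))
```

Fixing the seed makes a given visit's selection reproducible and auditable, which matters when the monitoring plan must document how fields were chosen.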

The following are some examples of the techniques commonly used to produce a random sample. They can range from a simple pre-specified or fixed approach to a more complex strategy depending on the goals of the study.

An example of pre-specified/fixed sampling would be selecting one subject from the first five enrolled, then another from the next five enrolled, and so on. This can be considered the easiest of sampling techniques, though there are pros and cons to consider. For example, selecting subject numbers 1, 6, and 11 is preferable to selecting subjects 5, 10, and 15: the latter does not give the reviewer an opportunity to identify data issues early on, and not all sites will reach the upper limit of each grouping. This approach can be applied at the site or whole-study level. When applied at the whole-study level, it risks overlooking low-enrolling sites and focusing more on a high-enrolling site than intended.
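The fixed scheme above reduces to taking the first subject of each enrollment group, as this small sketch shows (the group size of five follows the example in the text):

```python
def fixed_interval_sample(n_enrolled, group_size=5):
    """Select the first subject of each enrollment group of `group_size`
    (subjects 1, 6, 11, ...), so low-enrolling sites still contribute
    and data issues can be spotted early."""
    return list(range(1, n_enrolled + 1, group_size))

print(fixed_interval_sample(13))  # [1, 6, 11]

# With only 4 subjects enrolled, picking the LAST of each group
# (5, 10, 15, ...) selects nobody, while first-of-group still works:
print(list(range(5, 4 + 1, 5)))   # []
print(fixed_interval_sample(4))   # [1]
```

The contrast in the last two lines illustrates the point made above about why subjects 1, 6, 11 are preferable to 5, 10, 15.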

More complex strategies can mitigate the shortcomings of the simpler sampling techniques. A complex strategy takes more effort to administer, but the result is a more evenly distributed sample. For example, in the context of site-level sampling, one could select the first subject at each site, then one from the second through fifth, one from the sixth through tenth, and so on, until all subjects (except the first from each site) have had an equal chance of being selected. Subjects not selected in any round can be pooled for further sampling until the SDV target percentage is reached.
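The multi-round scheme described above can be sketched for a single site as follows; the 20-subject roster and 40 percent SDV target are illustrative assumptions:

```python
import random

def site_level_sample(subjects, target_fraction, seed=None):
    """Multi-round site-level sampling sketch (illustrative).

    Always take the first subject; then take one subject at random from
    each successive group (2-5, 6-10, 11-15, ...); if the target SDV
    fraction is still not met, keep drawing from the unselected pool."""
    rng = random.Random(seed)
    selected = {subjects[0]}
    groups = [subjects[1:5]] + [subjects[i:i + 5]
                                for i in range(5, len(subjects), 5)]
    for group in groups:
        if group:
            selected.add(rng.choice(group))
    pool = [s for s in subjects if s not in selected]
    target = round(target_fraction * len(subjects))
    while len(selected) < target and pool:
        pick = rng.choice(pool)
        selected.add(pick)
        pool.remove(pick)
    return sorted(selected)

subjects = list(range(1, 21))  # 20 subjects at one site
print(site_level_sample(subjects, target_fraction=0.4, seed=1))
```

The extra bookkeeping (groups plus a leftover pool) is exactly the administrative overhead the text mentions, traded for a more evenly distributed sample.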

Implementing targeted SDV

Both approaches to targeted SDV require several common conditions for successful implementation. The trial must have clear and focused objectives, along with valid endpoints, for targeted SDV to work. This focus will help project managers establish criteria for identifying and prioritizing critical data. A well-designed CRF (paper or electronic) and sufficient site staff training will increase procedural standardization and reduce error rates. The monitoring plan should include a detailed SDV protocol with a site remediation plan. Finally, centralized statistical monitoring and rigorous internal data review can be used to monitor many important parameters, including statistical differences between verified and unverified data and unusual patterns in data at individual study sites.
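One way to check for a statistical difference between verified and unverified data, as the last condition above requires, is a two-proportion z-test on observed error rates. The error counts below are hypothetical, and this is a sketch of one possible check, not a complete quality-assurance plan:

```python
import math

def two_proportion_z(err1, n1, err2, n2):
    """Two-sided two-proportion z-test comparing error rates in the
    verified vs. unverified data subsets (illustrative check only)."""
    p1, p2 = err1 / n1, err2 / n2
    p = (err1 + err2) / (n1 + n2)              # pooled error rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical counts: 12 errors in 1,000 verified fields vs.
# 15 errors in 1,000 unverified fields
z, p = two_proportion_z(12, 1000, 15, 1000)
print(round(z, 2), round(p, 3))
```

A non-significant result is consistent with the premise that targeted SDV is not degrading data quality; a significant one would argue for revisiting the SDV plan.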

Implementation of an effective targeted SDV strategy is not without challenges. The essential element is leadership's acceptance of a cultural shift in how this operational task is completed. The most senior leadership must be willing to support the strategy strongly by taking a visible role in communicating the new expectations. Stakeholders should be involved in developing project-specific targeted SDV plans and in rolling them out to staff.

Performance indicators for the project-specific SDV plans should be tied to management objectives. The multidisciplinary team (statisticians, data managers, clinical project leaders, and medical monitors) must be in agreement with the strategy to make it successful. The change will have to be broad, and it will have to be managed, especially in an organization that has always performed 100 percent monitoring of every field. Implementation of project-specific targeted SDV strategies may require revising standard operating procedures (SOPs) that imply 100 percent monitoring expectations.

A clear training strategy will help overcome staff inexperience with, and reluctance to work in, a new paradigm that runs counter to their detail-oriented habits. Training will be better received and taken more seriously if it involves representative champions from senior leadership. Education of the internal team should cover the protocol specifics; the justification for a reduced SDV plan; the review of edit checks; and how to address errors attributed to the reduced SDV strategy, as opposed to errors that would also occur in a fully monitored study.

Resistance needs to be managed at every level on an ongoing basis. We suggest maintaining a frequently asked questions list developed at the critical-stakeholder level. It is essential that leadership be able to explain the rationale for targeted SDV and demonstrate that its benefits outweigh the perceived loss of data oversight.

Improved trial monitoring with targeted SDV

Targeted SDV strategies may lead to improved performance in clinical studies on many counts. Trial monitoring is both important and multi-faceted. As outlined by the ICH in its E6 Guidance on GCP, monitoring is employed to verify:

  • Adherence to trial protocol, GCP, regulatory requirements

  • Accuracy, quality, and integrity of trial data

  • Subject well-being and to protect subject rights10

The time demands of 100 percent SDV may compromise other important monitoring functions that require on-site visits. In an informal 2006 survey, monitors working in the field reported that, on average, they spent 46% of their on-site time performing SDV, 13% performing regulatory review, 11% on drug accountability, and 5% on communications with the principal investigators.11 With nearly half the on-site time spent on SDV, other important monitoring functions such as adverse event follow-up, protocol adherence, GCP assessment, and drug accountability may not receive the attention they need. The validity of clinical studies is as dependent on GCP and strict adherence to protocol as it is on the identification of individual errors and inconsistencies in reported trial data.

The ICH E6 guidance on GCP during the conduct of a clinical trial does not mandate 100 percent SDV, and the FDA guideline on GCP explicitly calls for the review of a representative number of subject records. The validity of trial data need not be compromised by employing targeted SDV; in fact, some researchers persuasively argue that for large data sets from large trials, 100 percent SDV is not only costly but less effective than more statistical approaches.8 In the future, the increasing availability of electronic source documents (see the recent draft FDA guidance on electronic source documentation in clinical investigations) will allow much of the SDV activity to be automated and performed remotely.12 While access to and review of source documents is critical to ensuring the existence and informed consent of each trial subject, targeted SDV is an effective, if not superior, approach to data verification and validation in trials with large data sets.

Sandra Hines (DiGiambattista) is Director of Clinical Operations at ePharmaSolutions, 625 Ridge Pike, Building E, Suite 402, Conshohocken, PA 19428, e-mail: [email protected]

References

1. Food and Drug Administration, ICH E6 Good Clinical Practice Consolidated Guidance, section 4.9.2 (FDA Rockville, MD, 1996).

2. Food and Drug Administration, ICH E6 Good Clinical Practice Consolidated Guidance, section 5.15.2 (FDA Rockville, MD, 1996).

3. Food and Drug Administration, ICH E6 Good Clinical Practice Consolidated Guidance, section 6.10 (FDA Rockville, MD, 1996).

4. Food and Drug Administration, ICH E6 Good Clinical Practice Consolidated Guidance, section 5.18.4 (k) (FDA Rockville, MD, 1996).

5. Food and Drug Administration, ICH E6 Good Clinical Practice Consolidated Guidance, section 8.3.13 (FDA Rockville, MD, 1996).

6. Food and Drug Administration, Guidance for Industry; Guidelines for the Monitoring of Clinical Investigations (FDA, Rockville, MD, 1998).

7. Medical Research Council and the Department of Health, Joint Project to Codify Good Practices in Publicly Funded UK Clinical Trials with Medicines—Draft Workstream 4: Trial Management and Monitoring: Monitoring Procedures (2004).

8. C. Baigent, F. E. Harrell, M. Buyse, R. J. Emberson, and D.G. Altman, "Ensuring Trial Validity by Data Quality Assurance and Diversification of Monitoring Methods," Clinical Trials, 5 (1) 49-55 (2008).

9. B. Maruszewski, F. Lacour-Gayet, J. L. Monro, B. E. Keogh, Z. Tobota, and A. Kansy, "An Attempt at Data Verification in the EACTS Congenital Database," European Journal of Cardio-Thoracic Surgery, 2 (5) 400-406 (2005).

10. Food and Drug Administration, ICH E6 Good Clinical Practice Consolidated Guidance, section 5.18.1 (FDA Rockville, MD, 1996).

11. C. Breslauer, "Could Source Document Verification Become a Risk in a Fixed-Unit Price Environment?" Monitor, December, 2006, 43-47.

12. Food and Drug Administration, Guidance for Industry: Electronic Source Documentation in Clinical Investigations (FDA Rockville, MD, 2010), http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM239052.pdf