Benchmarking Sites


Applied Clinical Trials

July 1, 2008

How one research project in South Africa developed its own benchmark assessment tool to gauge performance and compliance across sites.


Project Phidisa is a clinical research project focused on the management and treatment of HIV infection in the uniformed members of the South African National Defence Force (SANDF) and their dependents. This is a joint research collaboration between the SANDF, the United States Department of Defense, the United States Department of Health and Human Services, the National Institutes of Health, and the National Institute of Allergy and Infectious Diseases. The project currently encompasses an epidemiology study for HIV positive or negative individuals and a treatment protocol for individuals living with HIV/AIDS.


In accordance with the Guidelines for Good Practice in the Conduct of Clinical Trials in Human Participants in South Africa,1 clinical research monitors are appointed by the trial sponsor to perform quality assurance activities related to the conduct of the trial. As such, any noncompliance with the protocol, standard operating procedures (SOPs) or applicable regulations should be communicated to the study team and remedied immediately. The sponsor is obligated to terminate the participation of an investigator and/or site if persistent noncompliance is noted.

To evaluate site compliance, a matrix of performance measures (benchmarks) was established for Project Phidisa in July 2005. These benchmarks are a tool for members of the Phidisa Project to prioritize and review the outcomes of the current compliance assessments and to evaluate any interventions that were implemented to decrease noncompliance. Evaluating clinical site performance based on the benchmark tool has had many benefits, though limitations of the tool have also been realized.

Table 1. Parameters are measured against a target standard.

Tools of the trade

Benchmarking has become a popular mechanism in the business community to increase performance by evaluating internal practices and measuring them against standards set as performance metrics. Business executives have realized the benefits of a metrics-based approach to evaluate the performance of operations in order to maximize productivity. Performance metrics provide a numerical standard against which critical values can be compared. Over the past several years, this tool has been adopted by the clinical research industry with the goal of maximizing site performance.

Although only a few articles have been published on this topic, much of the literature that exists focuses on laboratory selection,2 site selection, and site maintenance. Several clinical research organizations (CROs) have used this tool to allocate site payments, as they are determined by evaluation of performance metrics.

One research study outlined the use of benchmarking clinical trials in Central and Eastern Europe.3 This research evaluated site benchmarking analyses as related to speed (number of days to obtain review approvals to the first subject visit), quantity (number of subjects a site can enroll), and quality (number of queries per case report form [CRF]). The study concluded that benchmarking provided an effective tool for improving data quality, reducing time to market, and realizing a decline in development costs.

Evolution of benchmarks

The Phidisa Project is a government-sponsored trial that does not seek to market a particular drug. The primary aims of benchmarking in this setting are to improve data quality and increase subject safety. The benchmarking strategy in this project is designed to establish site goals, identify areas for improvement, highlight achievements, and utilize information to drive process enhancements.

In accordance with Phidisa protocols and regulatory requirements, Phidisa active sites are monitored at regular intervals. During routine monitoring visits, recorded data is reviewed and validated, processes implemented at sites are assessed, and protocol compliance and adherence to regulatory requirements are evaluated. Findings are documented in a monitoring report and submitted to the sponsor through the regulatory department.

Monitoring findings, which could potentially impact the conduct of the study and outcomes, were noted during the first two years of the Phidisa Project. These findings related to:

  • Clinic processes and administrative issues: management of patient files and source documents

  • Study conduct: eligibility assessment, informed consent, and reporting of reportable events

  • Subject management: management and follow-up of underlying conditions, co-infections, and psychosocial support

  • Subject follow-up: management of missed follow-up visits

  • Staff issues: sufficient resources and distribution of workload.

Monitoring reports were detailed but lacked a quantitative assessment of site performance. In addition, as the study progressed and sites were added, quantitative assessments were needed to ensure that key protocol activities were being performed according to comparable standards across all sites. Therefore, based on monitoring findings, regulatory requirements, and GCP fundamentals, the benchmark assessment tool was developed. It consists of 12 parameters that can be evaluated and reported quantitatively, and each parameter is measured against a target standard as shown in Table 1.
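Table 1 itself is not reproduced here, but the structure of the tool is simple: each parameter is paired with a target standard against which the observed value is judged. The sketch below shows one way such a matrix might be represented in code; the parameter names echo the list in the next section, while the target values and pass/fail directions are hypothetical placeholders rather than the actual Table 1 standards.

```python
# A minimal sketch of a benchmark matrix: each parameter is paired with a
# target standard. "min" means the observed percentage must meet or exceed
# the target; "max" means it must not exceed it. All target values are
# hypothetical placeholders, not the actual Table 1 standards.
BENCHMARK_TARGETS = {
    "informed_consent_process":       ("min", 100.0),
    "crf_completion_and_submission":  ("min", 95.0),
    "drug_accountability":            ("min", 100.0),
    "monitoring_queries_resolved":    ("min", 90.0),
    "participants_lost_to_follow_up": ("max", 5.0),
}

def meets_target(parameter: str, observed_pct: float) -> bool:
    """Check an observed percentage against the parameter's target standard."""
    direction, target = BENCHMARK_TARGETS[parameter]
    return observed_pct >= target if direction == "min" else observed_pct <= target
```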

Current evaluation process

Benchmark assessments are completed during each monitoring visit, tabulated, reported to the clinical site team during the exit meeting, and commented on in the final monitoring visit report. Benchmarks are assessed as follows (a computational sketch follows the list):

  • Informed consent process. The number of new informed consent forms (ICFs) completed correctly (according to protocol, Phidisa SOPs, and GCP requirements) is compared to the total number of new ICFs obtained since the last monitoring visit.

  • Unreported protocol violations. The number of protocol violations properly reported by the site is compared to those found while reviewing the CRFs and source data during the monitoring visit.

  • Reportable events. The number of properly reported adverse events (AEs) and serious adverse events (SAEs) is compared to those found while reviewing the CRFs and source data during the monitoring visit.

  • Study endpoints. The number of properly reported study endpoints (reported on the same CRF page) is compared to those found while reviewing the CRFs and source data during the monitoring visit.

  • CRF completion and submission. The number of CRFs received by data management (DM) as compared to the number of CRFs expected (according to visit schedules). This is determined by DM.

  • Drug accountability. Pharmacy drug accountability records are compared to actual drug counts for 23 antiretrovirals dispensed to participants. Details of discrepancies are given to pharmacy staff and clarifications taken into consideration before the benchmark is finalized.

  • Action on critical alert laboratory findings. The number of critical laboratory alerts with documented, appropriate subject management after the site investigator is informed is compared to the number of alerts not acted on appropriately.

  • Monitoring queries. Queries remaining on site following a monitoring visit are evaluated during the subsequent visit for completion. The number of queries that have been completed is compared to the number of queries generated.

  • DM queries generated. The number of errors or queries generated by DM on receipt of CRF pages is compared to the number of CRF pages received for a given period.

  • Data management queries addressed. The number of queries generated is compared to the number of queries answered during a given period, which usually precedes the period covered by the DM query assessment.

  • Participant visit attendance. Identifies the number of participants who did not have data collected within the defined visit window period. Monitors check to see if data were collected but not submitted versus data not collected.

  • Follow-up of participants who miss scheduled visits. If a scheduled study visit was missed, the source documentation should show that site staff implemented adequate follow-up measures to determine the cause of nonattendance and to arrange a subsequent visit date.

  • Participants lost to follow-up. The cumulative number of participants lost to follow-up is compared to the number of participants enrolled.
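As the definitions above suggest, each benchmark reduces to a ratio of compliant items to items assessed during the visit (or over a defined reporting period). The following sketch shows how a monitor's worksheet might turn those counts into the percentages that are tabulated and compared with the target standards; the counts shown are hypothetical and purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    parameter: str
    compliant: int  # items meeting the requirement (numerator)
    assessed: int   # items reviewed during the visit or period (denominator)

    @property
    def percentage(self) -> float:
        """Compliance expressed as a percentage of items assessed."""
        return 100.0 * self.compliant / self.assessed if self.assessed else 100.0

# Hypothetical counts from a single monitoring visit, for illustration only.
visit_results = [
    BenchmarkResult("Informed consent process", compliant=48, assessed=50),
    BenchmarkResult("CRF completion and submission", compliant=230, assessed=240),
    BenchmarkResult("Monitoring queries resolved", compliant=85, assessed=102),
]

for result in visit_results:
    print(f"{result.parameter}: {result.percentage:.1f}% "
          f"({result.compliant}/{result.assessed})")
```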

Benefits of objective benchmarks

Project Phidisa is a large, multicenter, multisponsor, multicountry clinical research project. There has been much debate as to whether cumbersome, lengthy monitoring reports documenting monitors' activities truly assist in assuring subject safety, human subject protection, and data integrity. Given that capacity building is one of the priorities for Phidisa, identifying areas for improvement is considered critical to the success of the program.

The benchmark assessment tool was developed to record quantitative evaluations in crucial areas. Each of these areas reflects critical components of GCP compliance, participant safety, and data integrity. Primarily, the tool was used to evaluate performance and target areas of improvement at the largest recruitment sites (greater than 1000 participants in two years). The outcome of the assessment provided tangible results that were used to target corrective action in areas needing improvement not only at the site level but also within the various sections or departments that supported the trial. The assessments were repeated at regular intervals to evaluate the success of corrective action and improvement at the site. Depending on the outcome, further changes were made to ensure the site could attain the standards set by the benchmarks.

Benchmark case study one. Benchmark assessments initiated in July 2005 quantified the backlog in the resolution of monitoring queries. Underlying causes were identified and corrective action implemented. There was a drop in monitoring query resolution at the site in August and September 2005 while corrective actions were being implemented. A directed, concerted effort was required by all site staff, augmented by a team of experts to resolve the issues. One of the most important lessons learned was that early warning signs handled proactively can prevent recurrences of backlogs.

This exercise was not only used to address the query backlog but also to train staff and included active involvement of the DM staff. Additionally, the exercise was successful in identifying further obstacles the site and DM staff were facing. Figure 1 shows query resolution improvement at the site. After the successful implementation at the site, the value of the benchmark assessment tool was realized and was subsequently implemented at all active sites. This allowed for evaluation of each site's performance and comparison between sites.

Figure 1. Sequential measurement of site performance related to monitoring queries shows improvement over time.

Benchmark case study two. In the Phidisa protocols, subjects are scheduled to return for follow-up visits at intervals ranging from one month to 12 months depending on the protocol and stage of the protocol. After the initial months of rapid recruitment, the site's focus moved from recruitment to retention of subjects and assessment of nonattendance and reasons for it.

Each protocol visit is assigned a protocol-defined visit window period. DM provided each site with the protocol visit schedules relative to the enrollment or randomization date of each subject. Figure 2 shows improvement of follow-up on missed visits at three different sites (benchmark assessment for Site 1 was not conducted in October 2006).

Figure 2. This benchmark assessment provides a valuable tool to determine whether site procedures to follow up with subjects on missed visits are being implemented effectively. If site management is adequate, then other underlying problems such as protocol design could be the cause for poor visit compliance.
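The protocol-specific visit intervals and window widths are not detailed in this article, so the sketch below only illustrates the mechanics: given a subject's enrollment or randomization date, expected visit dates and their window periods can be generated, and a visit can be flagged as missed if no data fall inside its window. The monthly intervals and the seven-day window are assumptions made for illustration, not protocol values.

```python
from datetime import date, timedelta

def visit_schedule(randomization: date, visit_months: list[int],
                   window_days: int = 7) -> list[dict]:
    """Expected visit dates with window periods, relative to randomization.

    The month intervals and window width are illustrative assumptions;
    actual values are defined by each protocol.
    """
    schedule = []
    for month in visit_months:
        target = randomization + timedelta(days=30 * month)  # approximate month
        schedule.append({
            "visit_month": month,
            "window_open": target - timedelta(days=window_days),
            "window_close": target + timedelta(days=window_days),
        })
    return schedule

def visit_missed(window_open: date, window_close: date, data_dates: list[date]) -> bool:
    """A visit counts as missed if no data were collected inside its window."""
    return not any(window_open <= d <= window_close for d in data_dates)

# Example: quarterly visits in the first year for a subject randomized 1 February 2006.
for visit in visit_schedule(date(2006, 2, 1), [3, 6, 9, 12]):
    print(visit["visit_month"], visit["window_open"], "to", visit["window_close"])
```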

Initial assessments conducted at the sites revealed a range of causes for poor follow-up compliance, from nonsubmission of completed CRFs to inefficiencies in basic site procedures, including:

  • Scheduling appointments outside visit window periods

  • Appointments not recorded in appointment books

  • Contact details for subjects not recorded

  • No process in place to determine if visit had been missed

  • No process in place for contacting subjects who missed visits.

These inefficiencies were addressed with a compilation of procedures and guidelines that could be applied across sites. Training was provided in January 2006, and clinical trial site staff were assigned the responsibility of follow-up of missed visits.

Once these corrective measures had been implemented, persistent missed visits were once again analyzed. Sites were able to track and follow up on missed visits, but participants were still not attending scheduled visits. Analysis revealed the following underlying causes:

  • Access to clinics posed a problem for participants especially in the rural sites with traveling distances in excess of 200 km. This was, however, not unique to the rural sites but was found in large central hub sites that serviced a large geographical area. Participants experienced problems with access to transport, cost of transport, travel time to and from sites, and time away from work.

  • Participants in advanced stage of the disease or experiencing opportunistic infections were too weak to travel even short distances and then wait to be seen in a clinic.

  • Clinics were extremely busy and extended waiting periods caused participants to default on appointments.

  • Side effects of study drugs/poor compliance resulted in participants not wanting to attend visits.

Active involvement of support structures within the military became essential in addressing these problems. Community health workers, social services, and chaplains now play an integral role in participant support at each of the sites. Additional staff were appointed and education has been emphasized at each protocol visit.

Advances and limits

Each section of the Phidisa research team has found value in the benchmark assessment tool, which is now routinely used to evaluate, monitor, and report on various aspects of clinical trials. But objective benchmarks have limitations.

Resource requirements. Study monitors, site staff, the DM team, and study leadership must all dedicate time to the assessment of benchmarks. Initially, resources are required to develop and assess the implementation of the benchmarks. Beyond this initial implementation phase, resources are then spent on preparation for each monitoring visit, assessment of benchmarks during the monitoring visit, and the subsequent collation of results and formulation of the tables and reports. Average additional time required to implement benchmarks at a given site for this study (a worked example follows the list):

  • Monitoring visit preparation: 10%

  • Monitoring time on site: 25%

  • Collation of results and formulation of the report: 35% to 40%.
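To put these percentages in perspective with a purely hypothetical baseline: for a monitoring visit that would otherwise require 8 hours of preparation, 16 hours on site, and 10 hours of report writing, the benchmark assessment would add roughly 0.8 hours of preparation, 4 hours on site, and 3.5 to 4 hours of reporting.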

At times this may seem to divert resources from achieving the standard in order to measure performance. However, many of the activities included in the benchmark reports are ones that would require site follow-up in any case. The reporting mechanism simply provides an added incentive to focus resources on this review and quality control process.

Limitations in the interpretation of the benchmarks. Benchmark figures cannot be interpreted in isolation; for a complete and accurate understanding, the benchmark report must be evaluated in conjunction with the monitoring visit report.

The analysis of figures (the benchmark report) must be read alongside the record of actions taken (the monitoring visit report). The benchmark reports serve as a tool to measure performance, and once standards have been set, sites are encouraged to maintain or improve on them. This requires that all site personnel understand the objectives of the benchmark reports. It is expected, however, that some fluctuation in the benchmarks obtained will occur.

Conclusion

The Phidisa Project research team has determined that benchmark assessments are extremely constructive and pragmatic. This is particularly evident in a collaborative, resource-poor, multisite setting in which capacity building is a primary component for conducting a successful clinical trial.

The implementation of such tools aids in the evaluation of compliance and provides a basis for the prioritization of areas requiring further assessment and intervention. Benchmarks allow for objective comparisons at and between sites, as well as from one monitoring visit to the next. Most importantly, benchmark assessments can serve as a launching pad for cross-site and leadership discussions regarding opportunities for site improvement and development of processes. Ultimately, benchmark assessments contribute to the enhancement of clinical research compliance and strengthen site and research capacity, particularly in resource-poor settings.

Acknowledgements

This project has been funded in whole or in part with federal funds from the National Cancer Institute and National Institutes of Health (NIH) under contract N01-CO-12400. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products or organizations imply endorsement by the U.S. Government. This research was supported in part by the National Institute of Allergy and Infectious Diseases.

Shelly Simpson* is clinical trials director, Clinical Monitoring Research Program, SAIC-Frederick, Inc., NCI-Frederick, PO Box B, Frederick, Maryland 21702, email: shsimpson@mail.nih.gov. Lorraine Africa, MS, is a member of regulatory services and Lotty Ledwaba, MD, is head of regulatory services, Phidisa Project South Africa. Anita Lessing is project manager and Siza Mphele and Jenny Thomas, RN, RM, are clinical research associates at Clindev Pty Ltd. Laura McNay is director and Judith Zuckerman, RN, BSN, CCRC, is clinical research oversight manager of the Office of Strategic Planning and Assessment, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health.

*To whom all correspondence should be addressed.

References

1. Department of Health, Republic of South Africa, Guidelines for Good Practice in the Conduct of Clinical Trials in Human Participants in South Africa, available at www.doh.gov.za/search/index.html.

2. M.M. Englehart, A.J. Santicerma, J.E. Zinni, "Performance Metrics: Optimizing Outcomes," Applied Clinical Trials Supplement, October 2005, 6-9.

3. D. Babic and I. Kucerova, "Benchmarking Clinical Trials Practices in Central and Eastern Europe," Applied Clinical Trials, May 2003, 56-58.
