A snapshot of one novel approach in RBM implementation and the data and site performance lessons learned.
Since the August 2011 publication of a European Medicines Agency (EMA) reflection paper and draft FDA guidance on risk-based quality management and monitoring of clinical trials, pharma, medical device companies, and contract research organizations (CROs) have adopted a variety of methodologies, products, and services, launched pilot projects, and encountered challenges both foreseen and unforeseen.1,2 This article provides a snapshot of one risk-based monitoring (RBM) approach and lessons learned from its application in a global Phase III trial involving more than 60 sites and almost 3,400 subjects. A comprehensive account of this monitoring approach is beyond the scope of this article.
This study assessed the safety and effectiveness of a novel contraceptive gel that may not only provide women with a wider range of acceptable contraceptive choices, but also potentially contribute to global health, particularly in regions at high risk for HIV. According to the World Health Organization (WHO) and FDA-mandated labeling requirements in effect since December 2007, the currently marketed nonoxynol-9 vaginal gel may actually increase the risk of acquisition of HIV in some circumstances.3 Therefore, the availability of an alternative woman-controlled contraceptive that does not increase HIV risk could potentially make a great difference in the lives of women who prefer this mode of contraception and are at risk of HIV infection. (The trial assessed contraceptive efficacy and safety and did not address effect of the new product on transmission of sexually transmitted infections. Future studies will assess the effects of the gel on likelihood of contracting HIV and other sexually transmitted infections.)
RBM approach implemented
Health Decisions utilized our Agile Risk-Based Monitoring+ (Agile RBM+) approach on this study. This approach utilizes streaming data, comprehensive performance metrics, and a multivariable model that identifies and adjusts the best predictors of data quality based on actual conditions observed during each trial. Agile RBM+ is consistent with regulatory guidance on risk-based monitoring. In addition to ensuring data quality and patient safety, we have found that Agile RBM+ greatly increases monitoring efficiency, often simultaneously allowing substantial reductions in source data verification (SDV) and increases in data quality as measured by reduced error rates.
Technology plays a critical role in effective RBM. However, the monitoring team is equally important. Health Decisions assigned a multidisciplinary monitoring team to this study. This team included not only the project manager and clinical research associates (CRAs) but also a biostatistician and quality assurance (QA) professional. This article elaborates on the observations of the project monitoring team.
Importance of study individualization
The monitoring team followed standard Health Decisions practice of individualizing the monitoring approach for each study. One central goal is to protect the statistical analysis of the study, planning the monitoring approach to ensure that the study produces high-quality data that allows reliable assessment of endpoints. For example, for this study, we identified acceptable quality levels (AQLs) for primary endpoint data and adjusted monitoring type, frequency, and intensity throughout the study to ensure that AQLs were met, thus protecting the statistical analysis.
Note that depending on circumstances during each study, ensuring AQLs for endpoint data might, in principle, require increasing the number of site visits and the percentage of SDV. Our RBM approach is not about arbitrary reductions in SDV for cost reasons. Quality goals govern monitoring decisions. In practice, we have been able to deliver cost savings while also ensuring data quality.
Site performance index (SPI)
A Health Decisions study team manages sites based on a site performance index (SPI) made up of key performance indicators (KPIs) or key risk indicators (KRIs). The component KPIs in the SPI are selected and weighted based on the requirements of the individual study and adjusted as necessary to reflect the correlation of component metrics with data quality. In other words, we define the SPI initially in a manner that we think reflects the importance of each component to data quality during the trial. The table below includes examples of SPI categories by content area, which provide insight into the basis of our calculations. The table shows elements that may factor into calculations of site performance, triggering on-site monitoring visits.
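To make the idea of a weighted composite index concrete, the sketch below combines per-KPI scores into a single SPI. The KPI names, scores, and most of the weights are hypothetical illustrations, not the actual components of the Health Decisions model; only the 25% weight on CRF entry delay mirrors a figure reported later in this article.

```python
# Illustrative sketch of a weighted site performance index (SPI).
# KPI names, scores, and weights are hypothetical examples, not the
# actual components used in the study described in this article.

def site_performance_index(kpi_scores, weights):
    """Combine per-KPI scores (0-100, higher is better) into one SPI."""
    if set(kpi_scores) != set(weights):
        raise ValueError("KPI scores and weights must cover the same KPIs")
    total_weight = sum(weights.values())
    return sum(kpi_scores[k] * weights[k] for k in weights) / total_weight

# Hypothetical weighting; CRF entry delay carries 25% of the index.
weights = {"crf_delay": 0.25, "data_quality": 0.30,
           "compliance": 0.25, "enrollment": 0.20}

# A site performing poorly on data entry and data quality scores low overall.
site_a = {"crf_delay": 40, "data_quality": 55, "compliance": 90, "enrollment": 85}
print(round(site_performance_index(site_a, weights), 1))  # → 66.0
```

In practice the weights would be re-tuned during the trial as the correlation of each KPI with observed data quality becomes clear, which is the role the multivariable model plays in the approach described here.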
However, the initial definition of the SPI is only a starting point. Individualization of the monitoring approach continues based on what we learn during the trial. The role of the multivariable model at the heart of our monitoring approach is to ensure that the SPI is a useful guide for the current study based on actual conditions. The figure below shows the SPI for a site that is performing quite poorly on data entry and poorly on data quality. The SPI shown would prompt the CRA for the site to drill down for details and intervene immediately.
For the global contraceptive study under discussion, the multidisciplinary project team assigned case report form (CRF) delay a weight of 25% of the SPI. Delayed data entry is often a red flag, and the study team anticipated that it would be for this study. They further determined that a site's scoring below a specified SPI level in consecutive months would trigger a site visit and increased SDV. In addition, if a site failed to improve from an unacceptable SPI within two months, the study team halted enrollment at that site. Thus, a site that failed to enter data in a timely manner would have a poor SPI as shown below, and the study team would intervene to improve site performance. The site would either improve performance, experience an early site visit and increased SDV or, in the worst case, be informed that it could no longer enroll subjects for the study.
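The escalation rules described above can be sketched as a simple check over a site's recent monthly SPI history. The threshold value and the month-by-month scores below are invented for illustration; the two-consecutive-month trigger and the two-month grace period before halting enrollment follow the rules the study team set.

```python
# Sketch of the escalation logic described above: two consecutive months
# below an acceptable SPI triggers a site visit with increased SDV, and
# failure to recover within two further months halts enrollment.
# The threshold (70) and the example scores are hypothetical.

SPI_THRESHOLD = 70

def escalation_action(monthly_spi):
    """Return the most severe action warranted by a site's recent SPI history."""
    below = [score < SPI_THRESHOLD for score in monthly_spi]
    if len(below) >= 4 and all(below[-4:]):
        return "halt enrollment"             # no improvement two months after trigger
    if len(below) >= 2 and all(below[-2:]):
        return "site visit + increased SDV"  # two consecutive sub-threshold months
    return "routine remote monitoring"

print(escalation_action([82, 75, 68, 64]))  # → site visit + increased SDV
print(escalation_action([68, 64, 62, 60]))  # → halt enrollment
print(escalation_action([68, 88, 91]))      # → routine remote monitoring
```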
Tracking data quality
As noted earlier, Health Decisions’ approach to RBM correlates KPIs with data quality. The figure below shows metrics of data quality at one site. If, for the study as a whole, the component KPIs and their weighting prove predictive of data quality in the expected proportions, the SPI is doing its job. In this study, timeliness of data entry was highly predictive of data quality and its weighting of 25% of the SPI remained an appropriate guide for site management.
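One way to check whether a KPI's weighting remains appropriate is to correlate it with observed error rates across sites. The sketch below computes a Pearson correlation over invented per-site figures; a strong positive correlation, as the article reports for data-entry timeliness in this study, would support keeping the KPI's weight in the SPI.

```python
# Sketch: checking whether a KPI (mean CRF entry delay, in days) tracks
# observed data quality (error/query rate) across sites. The per-site
# figures are invented for illustration only.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

crf_delay_days = [1.2, 2.5, 6.0, 9.5, 3.1, 7.8]   # mean entry delay per site
error_rate_pct = [0.4, 0.6, 1.8, 2.9, 0.7, 2.2]   # error rate per site

r = pearson(crf_delay_days, error_rate_pct)
print(round(r, 2))  # a value near 1.0 indicates delay is predictive of errors
```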
Other important metrics for this study included the protocol deviation rate, as reflected in the figures shown below and as rolled-up contributors to the "compliance" KPI of the SPI illustrated earlier.
Justifying reductions in visits
We are indebted to Nadia Bracken of ClinOps Toolkit for an astute, experience-based observation on use of metrics and risk indicators to justify reductions in site visits. Bracken observes:
… RBM metrics and “risk-indicators” should not be used to justify canceling monitoring visits or limiting this important oversight activity. Regular on-site monitoring visits according to plan remain imperative to confirm quality data and ensure subject safety.4
In our experience with RBM in this study and in general, we have found that a sound, data-driven RBM implementation can and should replace regular on-site monitoring visits with dynamic scheduling of monitoring visits according to site performance and the number of unmonitored fields. We have confidence in our ability to track site performance with an SPI and KPIs and to respond as necessary to ensure high data quality.
As noted, triggering reductions in site visits is not the sole function of KPIs or KRIs; they can also trigger increases in site visits and SDV. Bracken's observation reflects the perception by many in the industry that RBM is all about cutting costs by reducing SDV and interim monitoring visits. One way to alleviate this concern is to think of RBM as a process for dialing up whatever level of monitoring attention is required to achieve quality goals. The level and type of monitoring attention required will depend on, among other things, the robustness of available remote monitoring capabilities.
Cultural issues at sites
Bracken's observation reflects her experience with a site issue that can lead to all sorts of problems: failure to enter data in a timely manner. She writes as follows:
... I think that having the monitor on-site is like a carrot-stick approach to get the study staff to actually put the data properly and in a timely fashion into EDC. Only then can we talk about centralized statistical monitoring and meaningful remote analysis. From their analysis, I see this review of data input cycle times as a glaring omission.
Bracken’s observation primarily reflects habitual behavior of personnel at some investigative sites. During our large contraceptive study, we also noted a handful of sites that experienced behavioral or cultural problems. A clinical trial lead for the study summed it up as follows:
Some sites have become quick and careless. They count on CRO monitors to catch their mistakes. Training for RBM studies must emphasize that the PI is responsible for accuracy. Also, we ran into one site that was not letting us know about patient visits, and that delayed our recognition that CRFs were not being entered. We must address these issues in training and then hold problem sites accountable.
To be clear, Health Decisions has witnessed this disappointing type of site behavior, but it is atypical. Another CRO, Target Health, reports experience similar to that of Health Decisions, stating that 95% of data is entered within 48 hours for many of their RBM studies. Nevertheless, in RBM studies, it is important to detect delays in data entry quickly and intervene rapidly, as delays may mask bigger problems in site performance. Sites that neglect their duty to enter accurate, timely data in a trial involving human subjects are the exception rather than the rule. We have primarily observed excellent performance on RBM studies by sites with clinical trial experience and adequate dedicated research staffing.
Cultural issues with the study team
Our study team for the contraceptive study under discussion did experience one cultural issue, as reflected in the comment from the clinical trial lead: exasperation with some sites' dislike for RBM, because any reduction in site visits increases the burden on the site to do its work properly the first time. Our cultural adjustment is recognizing this trend as an unfortunate reality and adjusting site training and site management accordingly. Beyond that, our study teams have long been accustomed to a data-driven management style and are comfortable with the cultural requirements of RBM.
Other companies have reported cultural issues in the transition to RBM. A post by Moe Alsumidaie on the Applied Clinical Trials blog mentioned staff resistance at Bristol-Myers Squibb to changes in roles and responsibilities in the early days of their RBM pilot project.5 In our experience, RBM makes the work of CRAs more interesting, because it enables them to focus on data and processes most important to project success and because RBM enables CRAs to function more as site managers and less as data checkers. A key determinant of cultural adjustment to RBM may be whether appropriate process improvements and technology are in place. The study team that implemented RBM for the large contraceptive study found the experience interesting and rewarding.
Cost and quality results
While the emphasis in RBM should be on adjusting as necessary to ensure meeting quality goals, we did realize substantial cost savings on this contraceptive study while also maintaining high data quality. Over a 27-week period, we decreased SDV and increased data quality (decreased error rate). See figure below.
We realized 76% cost savings in the same study. Savings came overwhelmingly from reductions in on-site SDV of nonessential fields (see figure at right).
However, such cost savings should not be considered a rule of thumb for RBM implementations. Costs will vary based on the capabilities of available RBM and trial-management systems, the experience of the study team, the health condition, the study population, and circumstances encountered in the field.
Given a robust implementation of RBM, sponsors should consider it a reliable way to ensure data quality in a manner that focuses on the quality of endpoint data and protects the statistical analysis defined by the protocol. If there is a risk in adoption of this novel approach to trial monitoring, it is that risk indicators based on data collected during the study will dictate aggressive monitoring to ensure meeting quality goals. Such a scenario would limit cost savings. However, based on our experience with RBM, we believe that in most cases, a rigorous implementation that includes proactive, metrics-driven site management will both ensure high data quality and deliver substantial cost savings.
Lisa James is Director of Corporate Operations at Health Decisions
1. EMA. Reflection Paper on Risk Based Quality Management in Clinical Trials. August 4, 2011.
2. FDA. Guidance for Industry: Oversight of Clinical Investigations-A Risk-Based Approach to Monitoring. Draft August 2011. August 2013 version available from: http://www.fda.gov/downloads/Drugs/.../Guidances/UCM269919.pdf
3. Food and Drug Administration. FDA Mandates New Warning for Nonoxynol 9 OTC Contraceptive Products. Press Release. December 18, 2007. Available from: http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/2007/ucm109043.htm
4. Bracken N. Back on My RBM Soapbox. ClinOps Toolkit blog. July 19, 2014. Available from: http://clinopstoolkit.com/2014/07/back-rbm-soapbox.html
5. Alsumidaie M. How Bristol-Myers Squibb is doing Risk-Based Monitoring. Applied Clinical Trials blog. May 13, 2014. Available from: http://www.appliedclinicaltrialsonline.com/how-bristol-myers-squibb-doing-risk-based-monitoring