Risk-Based Monitoring: Barriers to Adoption

December 1, 2018
Penelope Manasco, MD

Applied Clinical Trials, Volume 27, Issue 12

Examining the barriers, challenges, and outcomes to determine the effectiveness of different RBM implementation approaches.

In April and May 2018, I developed and conducted a survey of pharma, biotech, and CRO staff to better understand the barriers to adopting risk-based monitoring (RBM). I performed these tasks as part of my role as an expert on an advisory panel to the FDA concerning RBM issues.

Fifty-one people responded to the survey. Participant organizations included CROs (21), sponsors (23), and other vendors (7). One respondent answered the survey twice; only one of those responses was counted. One sponsor had two respondents; that sponsor was counted once for items such as whether RBM was adopted, but all of its responses on barriers to adoption were collected. The survey included representatives from large, mid-sized, and small pharma/biotech, vaccine, and medical device companies. The CRO category included large, mid-sized, and small/niche CROs as well as contract monitors. The “other vendor” category included consultants (i.e., clinical trial managers and data managers), an institutional review board (IRB) representative, a representative from a site organization, and technology vendors.

CROs adopted RBM at a slightly higher rate (71%) than sponsors (64%). Non-adopters were almost entirely small organizations: small/niche CROs and small biotech companies. The only exceptions were one larger CRO and one mid-sized pharma.

Different RBM definitions

Nearly every respondent defined RBM differently and proposed a different RBM implementation approach. The lack of a common RBM definition and understanding of what RBM does represented critical barriers to adopting the method. 

Many of the definitions and implementation approaches included reduced source data verification (SDV), SDV only of critical endpoints, and terms such as central monitoring of case report forms (CRFs), but most still depended on on-site monitoring. Other RBM definitions included using key risk indicators (KRIs) and other statistical methods to determine the need for and frequency of on-site visits. 

Some companies incorporated approaches that did not focus on SDV. Instead, they centered on identifying risks, developing study-specific reporting for remote review, and developing risk mitigation approaches.

RBM implementation approaches could include more than one method. More companies adopted a hybrid approach than applied strict RBM principles; however, use of the different monitoring approaches was similar. Forty-one of the responses included SDV or remote eCRF review. Other methods included KRIs, statistical outliers, and protocol-specific reports of high-risk data. Notably, more companies reported using KRIs than protocol-specific reports of high-risk data.

Barriers to RBM adoption

The two most frequently reported barriers to RBM adoption were the concern that eliminating SDV was too risky and the confusing definitions of RBM. Several barriers focused on implementation challenges:

  • Too complicated

  • Don’t have the tools

  • Approaches, processes, and training implications of changing processes

Finally, the concern about audits after the trial was also reported by multiple respondents.

Another barrier mentioned throughout the survey involved the role of senior management. More specifically, these comments addressed the need to have senior management understand the RBM process, the RBM focus on quality, and the support needed to conduct a major change initiative.

Challenges to implementing RBM

In many cases, adopting RBM poses a large organizational challenge. For companies that have adopted it, the most common implementation challenges include (with number of respondents):

  • Having the technology tools work together is difficult (9)

  • Training must be designed for all team members (8)

  • Monitor resistance to RBM (7)

  • Difficult to understand the different RBM approaches (7)

  • Skills for monitors are different (6)

  • Skills for project managers are different (6)

  • SOPs are not written for new approach (6)

  • Senior management does not understand the process differences (6)

  • Difficult to choose which approach is best for our organization (6)

  • Technology tools are confusing, difficult to know which are needed (6)

  • Operational metrics are different (5)


Table 1 illustrates the outcomes reported from implementing RBM. This table also includes a determination of whether the outcome was positive or negative. “Higher costs” was an option listed, but no respondent reported it as a positive or negative outcome. 

There were more positive outcomes than negative outcomes. The highest reported positive outcomes were for “cleaner data” and “issues identified and corrected faster.” “Key risk indicators helped to focus monitors on specific areas” was also reported frequently, although one respondent reported that KRIs were a disappointment.

The most common negative outcomes related to monitoring issues. Site adoption was split between negative and positive outcomes, likely reflecting the different approaches to implementing RBM. In our experience, site responses to implementing RBM have been overwhelmingly positive.

What could regulators do to help implement RBM?

We asked respondents what regulators could do to support RBM implementation. Their responses focused on the following areas:

  • More examples of how to implement RBM successfully, including lessons learned, best practices, training, Q&As, and white papers.

  • Audit advice: what will be audited, with uniform interpretation and guidance.

Some respondents wanted a set of approved RBM tools and processes that could be adopted across CROs; however, this does not align with regulators’ remit or standard practices.

What could industry do to help implement RBM?

From an industry adoption perspective, several respondents highlighted the importance of change management and executive involvement in adopting RBM. While not mentioned in the survey, several executives at small companies privately commented that investors had requested their organization keep their oversight methods “as they have always done it” and not adopt RBM; arguably to reduce costs. Other executives privately noted that some CROs have discouraged them from adopting RBM, because it means less monitoring income for the CRO.

Publishing examples of best practices in implementing RBM and developing a dialog with regulators to discuss real-world examples were two ways the industry could advance RBM adoption, moving it from theory to practice. In addition, respondents thought groups such as CITI and TransCelerate BioPharma could facilitate this dialog with regulators. TransCelerate has generously provided its thoughts on adopting RBM, but much of its input focuses on large organizations and may be less applicable to small companies.

Additionally, sharing examples of systems and approaches that have worked was considered an important tool to support RBM adoption.


The variability in RBM definitions and RBM implementation approaches was surprising and listed as one of the two most common barriers to adopting RBM. Since each RBM approach will have varying ability to detect “errors that matter,” this complicates the discussion of implementation and outcomes from RBM.

For instance, if a company merely decreases the amount of SDV to “critical data fields,” it will have a much lower ability to detect trends that can affect study outcomes and subject safety than an implementation approach that includes trend analysis and specific analytic tools to detect “errors that matter.”

There was overwhelming interest in publications that provided lessons learned and best practices of RBM implementation. Defining specific RBM methods and testing their effectiveness is critical to determine which method identifies which errors, and which method provides the best outcomes. This clear, unmet medical need affects more than 100,000 study participants in pivotal clinical trials per year; approximately 30% were from the U.S. in 2015.1

The second-most common barrier to adoption was: “Too risky to eliminate SDV; the standard processes worked well.” Yet the pharma, biotech, device, and vaccine industries have used SDV for nearly 30 years without testing its effectiveness. The RBM and ICH E6(R2) changes in GCP guidance issued by regulators2,3,4,5 indicate that they believe the current methods have not worked well. In addition, the more than 33 complete response letters issued from January 2017 to May 2018 further suggest that past success with unproven monitoring methods does not guarantee continued success.6

Applied Clinical Trials recently published the first head-to-head comparison between traditional SDV and one method of RBM (i.e., the MANA Method), showing superiority of the RBM approach.7 We hope this first article to address the subject will be one of many publications that use data to evaluate the effectiveness of different oversight methods.

Two interesting findings that affect the effectiveness of RBM implementation were identified through the survey and personal communications.

  • There is a disconnect between RBM adoption and adoption of the technologies that enable the rapid, remote oversight envisioned in the RBM and eSource guidance. The technology adoption findings will be presented in a separate, future publication, but many companies have not adopted tools that allow remote review of data and documents.

  • RBM has been implemented only within the monitoring groups of many organizations, without including the rest of the review team. Monitors conduct oversight using “RBM” methods while data managers use traditional data cleaning methods. This can create significant organizational and data flow challenges, not to mention potentially higher costs if data managers query every data point rather than focusing on the errors that matter.

There is a clear need for open and regular dialog concerning these findings between regulators (including the auditing portion of the organization), sponsor companies, and CROs. If we look to other complex implementations of new science and technology for guidance, the Pharmacogenetics Working Group, developed in the late 1990s by regulators worldwide together with pharma and biotech companies, could be an effective model. A second example is the dialog between regulators and sponsors around the early development of HIV therapies. In both instances, the organizations worked on policies, interpretation of new methods, and how best to submit and interpret data.

Another resource that would advance the development of optimally effective risk-based oversight methods would be the creation of anonymized reference study datasets provided by the FDA or NIH. These would allow everyone to evaluate how effectively their oversight methods detect errors that matter against the reference datasets. With common reference “studies,” the industry could develop and compare different oversight approaches’ ability to detect errors that matter, making all studies safer for study participants.

Finally, supporting the research and publication of data-driven comparisons of different oversight methods will help the industry adopt proven oversight methods.


The findings of this 2018 survey showed that approximately two-thirds of sponsor respondents and nearly three-quarters of CROs have adopted some type of RBM. Small pharma/biotech companies and small CROs were the main organizations that had not adopted RBM in the five years since the FDA and European Medicines Agency (EMA) released their 2013 recommendations for a risk-based approach to monitoring.2,3

A wide variety of implementation approaches have been adopted, with varying ability to detect “errors that matter.” A significant, unmet medical need exists to test and publish data to determine the effectiveness of the different RBM implementation approaches. This would help the industry make informed decisions about how to best protect study participants’ safety and obtain scientifically valid data to support the delivery of new medicines for patients.


  1. Global Participation in Clinical Trial Report 2015-2016. FDA/DHHS, July 2017. https://www.fda.gov/downloads/Drugs/InformationOnDrugs/UCM570195.pdf
  2. Guidance for Industry: Oversight of Clinical Investigations-Risk-Based Approach to Monitoring. FDA/HHS August 2013. https://www.fda.gov/downloads/Drugs/Guidances/UCM269919.pdf
  3. Reflection paper on risk-based quality management in clinical trials. European Medicines Agency, November 2013. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/11/WC500155491.pdf
  4. ICHE6(R2) Revision 2-adopted guidance EMA/CHMP/ICH/135/1995. June 14, 2017.  http://www.ema.europa.eu/ema/index.jsp?curl=pages/regulation/general/general_content_001251.jsp&mid=
  5. E6(R2) Good Clinical Practice: Integrated Addendum to ICHE6(R2): Guidance for Industry. FDA/HHS March 2018. https://www.fda.gov/downloads/Drugs/Guidances/UCM464506.pdf
  6. Complete Response Letters (CRLs): Big Trouble for Small Pharma. June 2018. https://camargopharma.com/2018/06/complete-response-letters-crls/
  7. Manasco et al. A Comparison of Risk-Based Monitoring and Remote Trial Management versus SDV in a Phase IV Pediatric Vaccine Trial. Applied Clinical Trials online, Sept. 28, 2018. http://www.appliedclinicaltrialsonline.com/comparing-risk-based-monitoring-and-remote-trial-management-vs-sdv?pageID=1


Penelope K. Manasco, MD, is CEO, MANA RBM
