Risking it All? Going All in on RBM Adoption


Applied Clinical Trials, Feb. 1, 2015
Volume 24, Issue 2

"Only those who risk going too far can possibly find out how far they can go." – T.S. Eliot

Cancer Research UK (CRUK) followed such advice when it decided to have its Center for Drug Development (CDD) adopt a risk-based monitoring (RBM) approach across its entire portfolio of clinical trials. This decision has revealed that risk-adjusted approaches can deliver efficiency savings of more than 20% in the monitoring of early phase oncology trials, which were previously believed to be unsuitable for RBM.

CRUK funds half of all cancer research in the UK, supporting research into 200 types of cancer across all age groups.1 Within CRUK, the CDD runs a Phase I portfolio and has completed more than 140 clinical trials, leading to five new medicines on the market.2 The new CRUK research strategy is to accelerate progress and see three-quarters of cancer patients surviving the disease within the next 20 years.3 It is, therefore, vital that the CDD remains innovative, pioneering new treatments to beat cancer sooner.

Embracing RBM

Though the U.S. Food and Drug Administration (FDA) released guidance on the expanded use of RBM in 2011,4 there was still reluctance on the part of sponsors to take on a full-scale risk-based approach. Many were concerned that failing to conduct 100% source data verification (SDV) could lead to something being missed and the validity of trial data being compromised. In general, sponsors had misconceptions about the potential benefits of identifying, targeting and reducing risk.

This attitude, however, is starting to subside, and more sponsors are beginning to explore and adopt different methods of RBM, including the CDD at CRUK. All of the trials the CDD conducts are early phase oncology studies, which are inherently high risk, but the decision was made to embrace RBM to gain real benefit from the process. On-site monitoring can typically account for 25% to 30% of the overall cost of a clinical trial, so by aiming to reduce the frequency of our monitoring visits by 20%, we calculated a 6% saving on the cost of running a trial. From a business perspective, we calculated those savings could potentially open up another six to 10 sites, which would optimize our recruitment strategy. In practice, sites classed as high risk would keep a high monitoring visit frequency, for example, every four to six weeks, while sites classed as low risk would see a significant reduction in visit frequency. By adopting RBM, we felt resources would be used more efficiently, allocated to where they are needed most.
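The arithmetic behind that 6% figure is straightforward; a minimal sketch in Python, taking the upper 30% monitoring share quoted above:

```python
# Back-of-the-envelope estimate: cutting monitoring visit frequency by 20%
# when on-site monitoring accounts for ~30% of total trial cost.
monitoring_share = 0.30  # monitoring as a fraction of trial cost (25% to 30%)
visit_reduction = 0.20   # targeted cut in monitoring visit frequency

saving = monitoring_share * visit_reduction
print(f"Estimated saving on total trial cost: {saving:.0%}")  # prints "6%"
```

At the lower 25% share, the same calculation gives a 5% saving, so the quoted 6% sits at the optimistic end of the range.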

Moving to RBM

For many years, like most in the industry, CRUK adopted the same approach to monitoring: a one-size-fits-all approach driven by standard operating procedures (SOPs), with 100% SDV and monitoring visits every four to six weeks. This placed a huge burden on the clinical research associate (CRA), who had to verify all source data against the case report form (CRF) and attend regular monitoring visits even at sites with very little or no activity.

However, based on the guidance provided by FDA, the Medicines and Healthcare Products Regulatory Agency (MHRA)5, the European Medicines Agency (EMA)6 and TransCelerate BioPharma7, we came up with our own approach to RBM that was piloted and successfully rolled out across the CDD in January 2014. The main aim was to reduce the number of monitoring visits by 20% across the portfolio as a whole, which was successfully achieved. This article discusses the steps put in place to make RBM "business as usual" within the CDD.

Our RBM process (Figure 1) is separated into three key themes: risk assessment, data surveillance, and dynamic monitoring.

Risk assessment

All projects and sites are now assessed for their specific requirements (for example, the visit schedule for patients during the study) and individual level of risk. The monitoring visit frequency and targeted SDV (tSDV) plan is then set based on the risk level identified within the study-specific monitoring guidelines. We no longer consider it appropriate to apply the same frequency of monitoring visits for each study.

A risk assessment (and corresponding score) is performed at both the project level and at each individual site level prior to the initiation of a study, and is then reviewed and updated on an ongoing basis (at least every six months) throughout the lifecycle of the study. We created a risk assessment tool in Microsoft Excel that captures various risk criteria, such as protocol deviations, data quality, and AE/SAE reporting, each defined as objectively as possible. A score of 1 (low risk), 2 (medium risk), or 4 (high risk) is assigned for each criterion and a total score is established. Table 1 shows the criteria for two examples: data entry/query resolution and protocol deviations.

The inflated score of 4 for high-risk criteria is something we amended after the pilot to ensure that a high-risk criterion significantly influences the overall risk score and level assigned to the site. Each site may be classed as low, medium, or high risk depending on the overall score received (a total of ≤ 23 is a low risk site; a total of 24-30 is a medium risk site; a total of ≥ 31 is a high risk site). If any one criterion is assessed as high risk, then the overall risk cannot be low. It is important that the clinical study team revisit the risk assessment regularly throughout the trial to ensure that the risk score and resulting monitoring approach are adapted to the changing quality of site performance.
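The scoring logic described above can be expressed compactly. The following Python sketch only mirrors the rules; the actual tool is an Excel workbook, and the number of criteria and example scores below are hypothetical:

```python
# Sketch of the CDD risk-scoring logic: each criterion scores 1 (low),
# 2 (medium), or 4 (high). The inflated high score means a single
# high-risk criterion pulls the total up sharply.

def site_risk_level(criterion_scores):
    """Map per-criterion scores (1, 2, or 4) to a total and overall risk level."""
    total = sum(criterion_scores)
    if total <= 23:
        level = "low"
    elif total <= 30:
        level = "medium"
    else:
        level = "high"
    # Rule: if any single criterion is high risk, overall risk cannot be low.
    if level == "low" and 4 in criterion_scores:
        level = "medium"
    return total, level

# Hypothetical example: ten criteria, one of them scored high risk.
scores = [1, 1, 2, 1, 1, 4, 1, 1, 2, 1]
print(site_risk_level(scores))  # → (15, 'medium')
```

Note how the override rule fires here: a total of 15 would otherwise be low risk, but the single score of 4 lifts the site to medium.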


All decisions, justifications, and mitigation steps surrounding the score are documented on a risk timeline. We take the stance that sponsors should consider the regulatory authorities the "client" of their RBM approach. The risk timeline document is, therefore, vital: it acts as the audit trail, allowing inspectors to look back at any historical scores and piece together the decisions taken and the justifications provided by the clinical study teams throughout the trial. The risk timeline document has been completed for a number of studies where the scores have been adjusted (or not) based on various criteria and study-specific justifications. Table 2 shows the risk score for a particular study (single center) assessed in July 2013, January 2014, July 2014, and September 2014.

From the example provided in Table 2, it can be seen that the score was assessed at six-month intervals and then more frequently when new information triggered another risk assessment in September. The study team documented its concerns, justifications, and mitigation plans to support the score assigned, resulting in the study becoming high risk. This is a good example of allowing an auditor to see the logical steps and decisions made at each assessment. We want to avoid being "ruled" by the metrics, so the risk score is used only as a guide; ultimately, it is at the discretion of the study team what risk is associated with a study or site (as per the July 2014 entry in Table 2).

The risk score is also reviewed by quality assurance (QA) to determine the audit program for the year ahead. One unexpected benefit of implementing RBM at CRUK is greater collaboration between our clinical operations and QA teams.

Data surveillance

Ultimately, we have increased the interaction between our clinical data managers (CDMs) and site staff, having found that the CDMs were an underutilized resource when it came to RBM and that their skill sets are perfect for the central monitoring role required. The CDMs contact sites on a regular basis (and vice versa), whether to notify them of outstanding data and queries, to help with entering data in study-specific forms, or to provide technical advice.

Delayed data entry is a common problem across the industry. Despite electronic data capture (EDC) offering the ability to access data in real time, timely entry is rarely the reality, and a data entry backlog can build up. CRAs are then unable to make the best use of their time at the site if data has not been entered when they attend a monitoring visit. We, therefore, came up with a tool to help reduce this additional burden on the CRA: the data entry schedule (DES). This facilitates prompt entry of key study data, allowing the CDM to review and clean the data in a timely manner. The DES does not supersede any contractual obligation for data entry; instead, it complements it. The tool is created and overseen by the CDM, with input from the study team to agree on suitable and realistic timeframes for data entry. It is then agreed with the site, ideally at the site initiation visit (SIV), so site staff are aware of the data entry expectations and have an opportunity to discuss any concerns. The CDM monitors this throughout the study and contacts the site when the timelines have been missed.
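As an illustration of the kind of check the DES enables, here is a minimal Python sketch; the visit names, dates, and five-day entry window are hypothetical, not CRUK's actual schedule:

```python
# Hypothetical sketch of a DES check: flag visits whose data has not been
# entered within the timeframe agreed with the site.
from datetime import date, timedelta

AGREED_ENTRY_WINDOW = timedelta(days=5)  # hypothetical window agreed at the SIV

def overdue_visits(visits, today):
    """Return names of visits whose data entry deadline has passed without entry."""
    return [v["visit"] for v in visits
            if v["entered_on"] is None
            and v["visit_date"] + AGREED_ENTRY_WINDOW < today]

visits = [
    {"visit": "Cycle 1 Day 1", "visit_date": date(2015, 1, 5),  "entered_on": date(2015, 1, 7)},
    {"visit": "Cycle 1 Day 8", "visit_date": date(2015, 1, 12), "entered_on": None},
]
print(overdue_visits(visits, today=date(2015, 1, 20)))  # → ['Cycle 1 Day 8']
```

A flagged visit is simply a prompt for the CDM to contact the site, in keeping with the article's point that the DES complements, rather than replaces, contractual data entry obligations.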

Since introducing the data surveillance procedures, CRUK has received some positive feedback from site staff via a survey (e.g., 75% of site staff responded that they found the DES useful). Many of them now contact the CDM directly when having issues with data entry or query resolution; some inform the CDM when they are going to be out of the office, thus impacting the DES; some even request calendar invites to be sent so they act as reminders for data entry based on the DES. This is all very encouraging and reiterates the fact that the CDM can play a more central role in the RBM process.

We now put greater emphasis on CRAs remotely monitoring trial data, taking advantage of the EDC system and the data readily available within it. CRAs perform remote monitoring in line with the study-specific monitoring guidelines and may raise queries or contact site staff regarding any issues. This allows the CRA and site staff to focus their time on other activities and plan future goals.


Dynamic monitoring

The frequency of monitoring visits and level of tSDV are determined for an individual site based on the associated risk score and category (low, medium, and high) assigned using our risk assessment tool. This is documented in the study monitoring guidelines, which also defines critical and non-critical data. The level of SDV performed on critical data is 100% for all patients, whereas the level of SDV performed on non-critical data is variable depending on the risk score assigned. The CRAs also utilize the freeze function on the EDC database as a means of tracking the status of SDV, which has been found to be very useful.
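Conceptually, the monitoring guidelines amount to a lookup from risk category to visit frequency and tSDV level. In the sketch below, the four-to-six-week interval for high-risk sites and the 100% SDV of critical data come from the text; the medium- and low-risk figures are hypothetical placeholders:

```python
# Hypothetical monitoring-plan lookup by site risk category. Critical data
# always receives 100% SDV; only the non-critical tSDV level varies.
MONITORING_PLAN = {
    #                visit interval (weeks), non-critical tSDV fraction
    "high":   {"visit_weeks": (4, 6),   "noncritical_sdv": 1.00},
    "medium": {"visit_weeks": (8, 10),  "noncritical_sdv": 0.50},  # illustrative
    "low":    {"visit_weeks": (12, 16), "noncritical_sdv": 0.25},  # illustrative
}

def monitoring_plan(risk_level):
    """Return the monitoring plan for a site's risk category."""
    plan = dict(MONITORING_PLAN[risk_level])
    plan["critical_sdv"] = 1.00  # critical data is always 100% SDV'd
    return plan

print(monitoring_plan("low"))
```

Keeping the plan as a single lookup means the study-specific monitoring guidelines stay the one place where frequencies and tSDV levels are defined, which matches the article's emphasis on documenting the approach for inspectors.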

During the pilot phase, we sought input and feedback on our processes and tools from the MHRA and the EMA. Both provided valuable feedback that allowed us to adjust the existing process. A common discussion among delegates at RBM conferences is how little guidance and support the regulatory agencies provide on this evolving field. This was not our experience, and we were surprised when the EMA told us we were the first sponsor to send them an RBM methodology to review.

To support the use of RBM, we also had a number of study and system audits conducted on trials where RBM was piloted. There were no critical or major findings implying a reduction in data quality, patient safety, or trial integrity, which provided evidence that we had implemented the RBM process correctly. As our first foray into RBM, we acknowledge this is a modest step. In due course, a larger leap, such as reduced SDV of critical as well as non-critical data, could be taken. We decided against too big an initial jump because of the risk to patient safety, for example, only detecting an ineligible patient via SDV after they have completed treatment and their data has incorrectly been used as evidence for dose escalation.

We have also established clear communication pathways with sites to reiterate that their key roles and responsibilities remain as per International Conference on Harmonization (ICH) good clinical practice (GCP): site staff are accountable for the accuracy and completeness of the data entered into the eCRF. This is especially important in relation to targeted SDV, as the CRA will not necessarily double-check that all data has been entered correctly. We have emphasized the importance of prompt data entry to allow our medical advisors and pharmacovigilance department to review "live data" in the eCRF throughout the study, and reinforced that clear communication is needed for any potential issues arising between monitoring visits.

How is the RBM process working?

We have identified several performance-related measures of success in order to help establish whether our RBM approach is working. Some of these are obtained from our EDC database, others are feedback in the form of questionnaires and general adoption of the process. At the beginning of the pilot, we established some baseline measurements and were then able to re-measure them post-pilot. Overall, we found:

  • The average time taken for sites to resolve data queries (based on eight studies) was nine days prior to the pilot; post-pilot, this reduced to seven days. This reduction supports the view that having CDMs contact sites directly about queries helps reduce response time.

  • A significant increase in the number of occurrences where site staff contacted CDMs directly in relation to queries, data entry, and general database issues post-pilot, with a baseline measurement of zero prior to the pilot.

  • CRA productivity during monitoring visits increased by 55%. Prior to the pilot (100% SDV and a monitoring frequency of every four to six weeks), the average number of eforms SDV'd was 61 per day. Post-pilot (tSDV and a monitoring frequency adapted to the risk score), the average was 94 eforms per day.

  • From questionnaire feedback, 75% of site staff found the data entry schedule useful, 80% found the direct contact with data management beneficial, and experienced site staff noticed that CRAs had more time at site to support them with other tasks.

The measures of success will continue to be monitored and reassessed in the months ahead to evaluate the benefits of RBM on an ongoing basis. RBM takes many people out of their comfort zone, as it differs from what we were historically accustomed to. Therefore, to promote the work and benefits of using RBM, our pilot, processes, and results were continuously presented to the rest of the CDD. Naturally, there were late adopters within study teams who were skeptical of using risk to determine monitoring visit frequency and of conducting tSDV, and this was a challenge in itself.

We decided to pilot RBM on studies where a CSM was involved in the RBM working group to demonstrate how to conduct the risk assessment, tSDV, the use of the DES, etc. We also included CRAs who embraced change, which helped to provide confidence in the new process. During the pilot and afterward, a clear communication path was maintained with everyone in the CDD, which helped manage any disruption to normal practices. Now that RBM is business as usual, an associated policy document has been created, along with a corresponding guidance document. The policy helps cement RBM working practices at the CDD.

We have also conducted a number of external presentations and case studies to various organizations highlighting the fact that RBM can be applied to early phase studies and that expensive software is not a prerequisite in order to conduct RBM. So far we have received positive feedback on the work we have carried out and are happy to continue to share our processes and outcomes.

To those that have not considered adopting RBM or are still uncertain, give it a try. After all, "Only those who risk going too far can possibly find out how far they can go."

Sherraine Hurd is Senior Clinical Data Manager at the Center for Drug Development, Cancer Research UK; Stephen Nabarro is Head of Clinical Operations and Data Management, Drug Development Office, Strategy and Research Funding, Cancer Research UK.


1. Cancer Research UK, http://www.cancerresearchuk.org

2. Cancer Research UK, Center for Drug Development Impact report, http://www.cancerresearchuk.org/funding-for-researchers/drug-discovery-and-development/how-we-develop-new-treatments

3. Cancer Research UK,


4. Food and Drug Administration, Oversight of Clinical Investigations -A Risk-Based Approach to Monitoring (August 2011 and August 2013), http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm269919.pdf

5. Medicines and Healthcare Products Regulatory Agency, Risk-adapted Approaches to the Management of Clinical Trials of Investigational Medicinal Products (Oct. 10, 2011), http://www.mhra.gov.uk/home/groups/l-ctu/documents/websiteresources/con111784.pdf

6. European Medicines Agency, Reflection paper on risk based quality management in clinical trials (Aug. 4, 2011), http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/11/WC500155491.pdf

7. TransCelerate, Position Paper: Risk-Based Monitoring Methodology (2013), http://www.transceleratebiopharmainc.com/wp-content/uploads/2013/09/Risk-Based-Monitoring-Methodology-Position-Paper.pdf
