Risk-Based Monitoring Key to Protecting Data Integrity and Improving Patient Safety

How does one determine and define the “risk” in a risk-based monitoring approach for clinical trials?  Can it be as simple as knowing the likelihood of something bad happening?  For a clinical trial, “bad” could mean anything from corrupt or missing data to protocol deviations or serious adverse events.  Historically, the monitoring of these elements has been time-consuming, resource-intensive, costly and not necessarily effective in protecting the two most important elements of the trial: data integrity and patient safety. 

Although “risk-based monitoring” is called out in guidance from the FDA (August 2013), the EMA (August 2011) and other regulatory authorities, it is the industry – sponsors, contract research organizations (CROs) and organizations like TransCelerate – that is working fervently to determine how to take a more risk-based approach to monitoring clinical trials. 

In a recent survey of sponsors and CROs by Applied Clinical Trials and SAS, 75% of respondents indicated they would be reducing static on-site monitoring and increasing the amount of centralized monitoring in order to move to a more risk-based approach.  The survey also indicated some trepidation around the regulations and around having a clear path or methodology for adopting a more centralized, analytical approach.  After all, there are significant business process changes, regulations to follow, different needs for accessing data and potentially new analytical methods to employ. 

So where does one begin? 

Let’s start with the data.  A crucial component of making risk-based monitoring successful is bringing together essential data elements, including clinical data (e.g., patient response) and operational data (e.g., site performance), from both past studies and the incoming study data.  Leveraging historical data alongside current trial data enables the broadest perspective on overall risk.  

This information management process includes automating the collection and refresh frequency of the data, reconciling data from various sources such as labs and electronic case report forms (eCRFs), and preparing the data for the analytics that will drive risk assessment.  Profiles can also be created for easy viewing and exploration of risk factors by study, site, clinical research associate (CRA), investigator, country and more.  The data preparation component is not to be taken lightly; this activity is critical to having the quality data needed to effectively implement remote and central monitoring.
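
To make this concrete, here is a minimal sketch in Python (using the pandas library) of the kind of reconciliation described above; the column names, values and data layout are hypothetical, not drawn from any particular system.  It merges eCRF and lab extracts on subject and visit identifiers, flags unmatched and missing records, and rolls the result up into a simple site-level profile ready for risk scoring.

    import pandas as pd

    # Hypothetical extracts from two source systems; column names are illustrative.
    ecrf = pd.DataFrame({
        "subject_id": ["S01", "S02", "S03"],
        "site_id":    ["A", "A", "B"],
        "visit":      [1, 1, 1],
        "ae_count":   [0, 2, 1],           # adverse events recorded on the CRF
    })
    labs = pd.DataFrame({
        "subject_id": ["S01", "S02", "S03"],
        "visit":      [1, 1, 1],
        "alt_value":  [31.0, 88.0, None],  # a missing lab value to be flagged
    })

    # Reconcile the two sources on subject and visit identifiers.
    merged = ecrf.merge(labs, on=["subject_id", "visit"], how="outer", indicator=True)
    unmatched = merged[merged["_merge"] != "both"]   # records in only one system
    missing_labs = merged["alt_value"].isna().sum()  # simple completeness check
    print(f"{len(unmatched)} unmatched records, {missing_labs} missing lab values")

    # Site-level profile: one row per site, ready for risk scoring.
    profile = merged.groupby("site_id").agg(
        subjects=("subject_id", "nunique"),
        total_aes=("ae_count", "sum"),
        missing_labs=("alt_value", lambda s: s.isna().sum()),
    )
    print(profile)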

It can be a challenge to aggregate various data sources from systems that aren’t designed to work together, but as one attendee at the recent CBI Risk-Based Monitoring conference stated, “We’ve got plenty of smart people who know how to get the data together for us.”  Thank goodness for the competent data managers who can facilitate this important part of the process.

The data is ready – let’s analyze it.  Applying analytics to historical data provides insight into the expected level of risk for similar new studies.  Before the study begins, efforts can be put in place to address past site performance issues or to provide additional protocol training in order to reduce the potential for adverse events of the kind reported with similar compounds.  Those analytical insights from past studies can also be leveraged to improve the protocol itself, as known risks are considered during the new protocol’s development and review.  Once the study begins, incoming study data is collected and aggregated across sites for central analysis and comparison. 
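
As a small illustration of baselining, the sketch below (again Python with pandas; the metrics and numbers are invented) computes the historical mean and standard deviation of two site metrics, then expresses an incoming site’s values as z-scores against that baseline.

    import pandas as pd

    # Hypothetical site metrics pooled from past, similar studies.
    hist = pd.DataFrame({
        "site_id":    ["H1", "H2", "H3", "H4", "H5", "H6"],
        "query_rate": [0.05, 0.08, 0.06, 0.20, 0.07, 0.05],  # queries per data point
        "ae_rate":    [0.10, 0.12, 0.09, 0.11, 0.30, 0.10],  # AEs per subject-visit
    })

    # Baseline = historical mean and standard deviation for each metric.
    baseline = hist[["query_rate", "ae_rate"]].agg(["mean", "std"])
    print(baseline)

    # Express an incoming site's metrics as z-scores against the baseline.
    new_site = {"query_rate": 0.22, "ae_rate": 0.11}
    z = {m: (v - baseline.loc["mean", m]) / baseline.loc["std", m]
         for m, v in new_site.items()}
    print(z)  # a query_rate z-score above 2 marks the site as an outlier to review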

With baselines of expected performance and quality in place, risk factors are identified during the trial using a two-pronged approach.  First, predefined “risk rules” capture outliers in the data or highlight events that require review because an acceptable threshold has been breached.  The analysis is based on metrics derived from the protocol, the risk management plan, knowledge of past studies or industry resources such as TransCelerate’s methodology position paper (May 2013). 
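
A predefined risk rule can be as simple as a metric name, a threshold and a direction.  The sketch below shows one hypothetical way to encode and evaluate such rules; the thresholds are placeholders, not values from TransCelerate’s paper or any regulatory guidance.

    from dataclasses import dataclass

    @dataclass
    class RiskRule:
        metric: str       # name of a field in the site profile
        threshold: float  # breach level (placeholder values, not published guidance)
        above: bool       # True: flag values above the threshold; False: below

    RULES = [
        RiskRule("query_rate", 0.15, above=True),   # too many data queries
        RiskRule("enroll_rate", 0.5, above=False),  # enrollment falling behind plan
    ]

    def evaluate(site: dict) -> list[str]:
        """Return the names of all rules this site breaches."""
        flags = []
        for rule in RULES:
            value = site.get(rule.metric)
            if value is None:
                continue  # metric not yet available for this site
            breached = value > rule.threshold if rule.above else value < rule.threshold
            if breached:
                flags.append(rule.metric)
        return flags

    print(evaluate({"site_id": "A", "query_rate": 0.22, "enroll_rate": 0.9}))
    # -> ['query_rate']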

Second, predictive analytical modeling goes beyond the known factors captured in the risk rules and provides the opportunity to uncover additional risks, such as patient safety issues, sooner.  Even more importantly, by spotting trends (not just threshold breaches), those issues can be addressed earlier in the trial. 
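
The distinction matters: a trend check reacts to a trajectory before any limit is crossed.  A minimal sketch, assuming monthly metric snapshots per site, fits a least-squares slope to the history and flags a persistent drift:

    import numpy as np

    def trending_up(values: list[float], min_slope: float = 0.01) -> bool:
        """Fit a least-squares line through a metric's history and flag a
        persistent upward drift, even if no single value breaches a limit."""
        t = np.arange(len(values))
        slope = np.polyfit(t, values, deg=1)[0]
        return slope > min_slope

    # Hypothetical monthly AE rates for one site: every value is within limits,
    # but the trajectory points to a problem a static rule would catch later.
    history = [0.08, 0.09, 0.11, 0.12, 0.14]
    print(trending_up(history))  # True -> surface an early alert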

The combined rules and predictive models provide risk scores (often High, Medium, Low) at designated levels such as site, subject, CRA or country.  Individual risks can be rolled up into overall risk assessments within a study, or even across multiple studies, to give trial operations staff more insight into the execution and performance of trials as a whole.  Based on the type of study, thresholds can be fixed or statistically generated, with alerts and triggers surfaced through reports and dashboards for all critical areas.  Fraudulent activity can also be uncovered through this assessment and scoring process, adding to the quality and integrity of the trial. 
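
There are many ways to blend the two signals into a single grade.  One deliberately simple, hypothetical convention is sketched below: rule breaches and a model-estimated probability are combined into a numeric score, bucketed into High/Medium/Low, and averaged upward to a study-level grade.  The weights are illustrative only.

    def site_score(rule_breaches: int, model_prob: float) -> float:
        """Blend rule hits (each worth 0.2, capped at 0.6) with a model-estimated
        probability of an issue. The weights are illustrative only."""
        return min(rule_breaches * 0.2, 0.6) + 0.4 * model_prob

    def grade(score: float) -> str:
        if score >= 0.6:
            return "High"
        if score >= 0.3:
            return "Medium"
        return "Low"

    sites = {"A": site_score(3, 0.9), "B": site_score(0, 0.2), "C": site_score(1, 0.5)}
    print({s: grade(v) for s, v in sites.items()})  # per-site grades
    print(grade(sum(sites.values()) / len(sites)))  # rolled-up study-level grade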

The entire analytical process is cyclical.  In a learning environment where the results of triggers and actions are captured for later analysis, suggested actions (a phone call, additional training or an actual on-site visit) can be incorporated into the reporting.  Over time, the models will improve their ability to warn of potential impending risk, not just surface alerts after the fact.  The real value of this “rules plus models” approach is the ability to improve the predictive understanding of risk across all trials as they start and as they progress.
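
The feedback loop itself needs little more than a record of what was triggered, what action was taken and whether it worked.  A sketch of such a log, with hypothetical field and file names, is below; each closed trigger becomes a labeled example for the next model refit.

    import csv
    import os

    # Hypothetical action log: each closed trigger becomes a labeled training
    # example for the next model refit (field and file names are illustrative).
    LOG = "action_log.csv"
    FIELDS = ["date", "site_id", "trigger", "action", "resolved"]

    def record_action(row: dict) -> None:
        new_file = not os.path.exists(LOG)
        with open(LOG, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow(row)

    record_action({
        "date": "2014-06-01",      # illustrative date
        "site_id": "A",
        "trigger": "query_rate",
        "action": "phone call",    # phone call, training or on-site visit
        "resolved": True,          # the label used when models are refit
    })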

An analytical, risk-based approach to executing clinical studies promises great benefits in increasing awareness of patient safety and identifying issues that can impact the progress of a trial.  There should also be efficiencies and cost savings in how resources, particularly CRAs, are scheduled and assigned for on-site monitoring visits.  On-site visits aren’t likely to go away, but their purpose is shifting to those activities that cannot be accomplished remotely, such as training, calibrating instruments or validating data where there may be gaps in the electronic transmissions. 

Skill levels and areas of expertise will vary across a pool of CRAs.  With an improved understanding of site risk, it makes sense to take ‘CRA risk’ into account when assigning and scheduling on-site visits.  Geography is an important factor, along with other scheduling challenges, but fortunately, simulation and optimization technology can take all the necessary factors into account and produce more efficient plans for these high-cost resources. 

Using inputs such as known site risks, the complexity of the trial, past CRA performance and CRA skills (including remote monitoring skills and expertise relating to the trial characteristics), CRAs can be assigned and scheduled in a way that optimizes all factors.  Those factors can be weighted, and additional constraints, such as costs, can be incorporated into simulations that surface the most effective scenarios.  Throughout the trial, risk levels will change, and some conditions will signal that the monitoring schedule needs to be modified based on the changed or trending (per the predictive models) risk. 
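
As an illustration of the optimization step, the sketch below (Python with SciPy; the weights and inputs are invented) builds a CRA-by-site cost matrix from skill/risk mismatch and travel cost, then solves the assignment with the Hungarian algorithm.  A production schedule would layer on constraints such as workload caps and calendars.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical inputs: a riskier site calls for a more skilled CRA.
    site_risk = np.array([0.9, 0.4, 0.2])   # one entry per site
    cra_skill = np.array([0.8, 0.5, 0.3])   # one entry per CRA
    travel = np.array([[1.0, 4.0, 2.0],     # travel cost, CRA x site
                       [3.0, 1.0, 5.0],
                       [2.0, 3.0, 1.0]])

    # Cost = skill/risk mismatch (weighted heavily) plus travel (weighted lightly).
    mismatch = np.abs(cra_skill[:, None] - site_risk[None, :])
    cost = 10.0 * mismatch + 1.0 * travel

    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    for cra, site in zip(rows, cols):
        print(f"CRA {cra} -> site {site} (cost {cost[cra, site]:.1f})")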

Pulling these processes together, this approach makes it possible to recognize and act upon risks to improve trial quality and performance: 

  • Information Management – gain a single view of a trial; improve the quality of data; provide views across sites, investigators, countries, monitors and patients; combine incoming study data with historic data; prepare data for risk assessment and scoring.

  • Risk Identification and Reporting – set expectations for the trial using risk scores developed on historical data; scores are regularly updated based on incoming study data (intervals may vary by study); risk assessments span both operational areas and patient safety; triggers are defined by business rules and analytical models (with adjustable or statistically generated thresholds); alerts are surfaced through reports and dashboards, and risk mitigation actions are captured and reported back through the workflow.

  • Resource Management – leverage historical data to determine risk of sites and the performance of monitors; develop initial site visit plan based on both site and monitor performance inputs; capture changing risk levels throughout the trial and use as inputs to revise monitoring (calls or visits) as needed to reduce correction time for at-risk sites. 

Risk-based monitoring represents just one opportunity for leveraging the combined data store of clinical and operational data and using analytics to improve the operations of clinical trials.  With historical clinical data and site performance data in place, predictive modeling and simulations can lead to significant improvements in other areas of trial operations such as protocol development, site selection and an improved understanding of risk in the broader compound portfolio.

The bottom line is that analytics are empowering when it comes to gaining efficiencies and improving patient safety in clinical trials.

Where to start?  Get some of those “smart people who know how to get the data together” and use every data source available that can provide insight into the conduct and performance of the trial.  Centralize, analyze, assess risk, inform others, and take action – swiftly.  Use that information and the results and make the process better with each and every clinical trial. 
