FDA RBM Guidance is Still Misinterpreted

October 11, 2016
Moe Alsumidaie

As biopharma companies continue to explore ways of implementing risk-based monitoring, the process can be misconstrued. Peter Schiemann elaborates on some current issues in RBM interpretation and implementation.

The topic of risk-based monitoring (RBM) continues to evolve as biopharmaceutical enterprises explore ways of implementing it. However, the thinking behind RBM implementation is often misconstrued by clinical operations personnel. Peter Schiemann, PhD, Managing Partner of Widler & Schiemann Ltd., elaborates on some of the current issues in RBM interpretation and implementation.

Moe Alsumidaie: Can you elaborate on how clinical operations personnel are missing the point behind FDA's RBM guidance?

Peter Schiemann: If you look at FDA's guidance on RBM and read between the lines, you'll discover that there is a bigger picture on how to manage study risk. I've found that clinical operations personnel tend to jump to the risk assessment and monitoring aspects without taking a step back, looking at the clinical trial as a whole, and asking how they can incorporate quality into study design and setup.

I've asked many personnel about the rationale that influenced them to champion RBM initiatives internally; some have said they did it because regulators want it, whereas others indicated it would save the company capital. The idea behind the risk-based approach is to run clinical trials in a more focused manner, which results in faster study completion timelines, and this starts with well-designed protocols and study setup as the primary quality drivers, not monitoring activities.

MA: How can clinical operations teams better plan and mitigate risks in clinical trials?

PS: There are four steps toward establishing a proper risk management infrastructure in clinical trials. The first step involves developing a well-designed protocol, which means collecting only the data required to answer the study's endpoints (and not collecting arbitrary data). The second step comprises study setup, which includes proper vendor selection and feasibility. For example, there was an instance when I oversaw quality for a Phase III study, and the team wanted to use an academic research institution to examine biomarkers (it was the only institution that held a patent on this particular assessment). This raised a lot of red flags in terms of risk, as we didn't know whether the patented assessment was validated, or whether the institution was operationally capable of processing a large number of samples for our Phase III study. Our risk assessment revealed that the institution had neither the capacity to process the biomarkers nor the technological infrastructure to convey the results efficiently. It is this kind of analysis that enables study teams to better understand the risks involved in study setup.
The third step includes leveraging real-world data to generate an operating strategy: for example, selecting sites in areas where the targeted patient population exists so that the site will enroll, conducting thorough on-site feasibility, and ensuring that the site has set up workflows to execute study operations. The last step consists of defining metrics (i.e., key risk and performance indicators), defining risks (i.e., using risk assessment categorization tools), and developing risk-based monitoring plans.

Clinical operations personnel are currently focusing on the last step of the risk management process, rather than taking a holistic approach to risk mitigation through proper planning.

MA: Do RBM technologies address the issues of quality management?

PS: RBM technologies are very good at visualizing key risk and performance indicators and at operationalizing monitoring activities. However, the metrics offered by these tools and systems are not well thought through; some technologies measure risk only by standard deviation, but what does standard deviation really tell you about your study, other than the variability in the analytic you're measuring? Personnel are using these tools to measure certain study metrics without thinking about what the data actually conveys and how much of it they can actually interpret to make well-informed decisions.

While RBM service providers offer great technology, many do not necessarily have the expertise (nor are they responsible) for conducting RBM and defining study risks; that is the sponsor's accountability. It would be helpful if RBM technologies offered the analytical guidance needed to build the system around an individual trial's needs.

MA: Do existing key risk and performance indicators properly measure study risk?
PS: I think we are currently focusing on the wrong indicators. For example, many study teams focus on first patient in (FPI) as a risk indicator, whereas in reality FPI does not mean anything; a more meaningful risk indicator would be last patient out (LPO). As another example, while many organizations measure time to data entry in the EDC as a key risk indicator, that metric may incorrectly categorize site risk, as time to data entry does not give the full picture of whether a site is engaged and needs more interaction from monitors. A more accurate metric of site engagement might instead include the frequency of EDC logins by the site, or activity in the EDC (i.e., changing data).

In addition, while the scalability of key risk and performance indicators is an important factor in measuring risk in aggregate (i.e., across a molecular program with numerous studies), each study is unique and requires specific analytics that must be customized to mitigate study-specific risks.
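Schiemann's suggestion of EDC login frequency and data-change activity as engagement indicators could be combined into a simple composite score. The sketch below is purely illustrative: the field names, weights, normalization ceilings, and cutoff are invented assumptions, not values from any RBM product or from the interview.

```python
from dataclasses import dataclass

@dataclass
class SiteActivity:
    # Hypothetical weekly EDC activity counts for one site (illustrative fields).
    logins: int          # distinct EDC sessions by site staff
    data_changes: int    # records entered or corrected

def engagement_score(activity: SiteActivity) -> float:
    """Toy composite KRI in [0, 1]; the weights and ceilings are assumed, not validated."""
    login_part = min(activity.logins / 10, 1.0)        # cap at 10 logins/week
    change_part = min(activity.data_changes / 25, 1.0)  # cap at 25 changes/week
    return 0.4 * login_part + 0.6 * change_part

def needs_monitor_contact(activity: SiteActivity, cutoff: float = 0.5) -> bool:
    # Low combined activity suggests a disengaged site worth a monitor's attention.
    return engagement_score(activity) < cutoff

engaged = SiteActivity(logins=12, data_changes=30)
quiet = SiteActivity(logins=2, data_changes=3)
print(needs_monitor_contact(engaged))  # False
print(needs_monitor_contact(quiet))    # True
```

The design choice here mirrors the interview's point: rather than a single lagging metric such as time to data entry, the score blends several activity signals, and the weights would need to be tuned per study, consistent with Schiemann's remark that each study requires customized analytics.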