© 2023 MJH Life Sciences™ and Applied Clinical Trials Online. All rights reserved.
Applied Clinical Trials
Penelope K. Manasco, CEO of MANA RBM, explores the different approaches to determine which 'easy button' to push to achieve effective clinical trial conduct and oversight.
I recently met with a company that told me “We want an easy approach to clinical trial conduct and oversight.”
That got me thinking: what exactly is the right easy button to push to achieve effective clinical trial conduct and oversight? And shouldn't the definition of "easy" account for the complexity of the process, and require that the approach be not only relatively easy but, most importantly, effective?
To fully answer the question, we must first determine what we are trying to accomplish with our trial conduct/RBM approach. Is it to identify any and all errors that can affect trial integrity, the ability to achieve the primary endpoints, and subject safety, and to determine whether they are systematic errors (i.e., errors that matter)? If so, then evaluating which approach is easiest should also weigh the resources (time and cost) required to actually identify the errors that matter, to identify and correct systematic errors, and the cost incurred if those errors were not identified.
There are many RBM approaches. Those that depend only on SDV (i.e., SDVing only critical fields, or reviewing eCRFs only for "critical" fields) are not included in this discussion because numerous papers, including MANA RBM's seminal article (http://www.appliedclinicaltrialsonline.com/comparing-risk-based-monitoring-and-remote-trial-management-vs-sdv), have shown that SDV is not an adequate oversight method. That leaves three other approaches in popular use: Key Risk Indicators (KRIs), statistical modeling, and protocol-specific analytics.
First, though, let's look at what the different systems are designed to do. KRIs and statistical modeling are both non-specific surrogate markers for quality. They look for outlier sites based on criteria selected by the CRO and the sponsor. Many of the complex aspects of the trial may not be evaluated (e.g., investigational product management) because the data reside in different data systems. It is extremely important to know what is being evaluated and what isn't in order to determine whether a system is both easy and effective.
With both surrogate markers of quality (KRIs and statistical modeling), you do not get an exact answer that there is a specific problem, only an indication that something is different at those research sites. These approaches require additional resources to evaluate the data, understand why a research site is "an outlier," and understand what goes into that identification, including the site's status in the trial process. For instance, a site can be an outlier simply because it started the study at a different time, so its data may not look the same as another site's.
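To make the "outlier, but why?" problem concrete, here is a minimal sketch of a KRI-style check. All site names, metric values, and the threshold are invented for illustration; real KRI systems use many metrics and more sophisticated statistics.

```python
# Hypothetical sketch of a KRI-style outlier check on one per-site metric.
# Sites, values, and the z-score threshold are all invented for illustration.
from statistics import mean, stdev

# Example KRI: data queries opened per enrolled subject, by site
query_rate = {
    "Site-01": 1.2,
    "Site-02": 1.4,
    "Site-03": 1.1,
    "Site-04": 3.8,  # could be a real problem -- or just a late-starting site
    "Site-05": 1.3,
}

def flag_outliers(metric: dict, z_threshold: float = 1.5) -> list:
    """Return sites whose metric lies more than z_threshold sample SDs from the mean."""
    values = list(metric.values())
    mu, sigma = mean(values), stdev(values)
    return [site for site, v in metric.items() if abs(v - mu) / sigma > z_threshold]

outliers = flag_outliers(query_rate)  # ["Site-04"]
# The output says only that Site-04 is *different* -- not what, if anything,
# is wrong there. Someone must still investigate to find the actual error.
```

The point of the sketch is the last comment: the flag is a surrogate signal, and every flag spawns a follow-up investigation.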
In contrast, protocol-specific analytics identify the errors that matter directly (as identified during the risk assessment). This approach allows Sponsors, CROs, and research sites to know exactly when a critical error occurs and there is no additional effort to find out what the error is. Analytics can also identify systematic errors in near real time, allowing much faster root cause analysis and remediation with many fewer resources. Another strength of this approach is that it allows review across databases, which is not available using other RBM methods.
To evaluate the ease of use of each system, we must include the set-up time and the resources needed to identify the actual "error that matters," recognize whether it is a systematic error, and correct it.
Initial set up:
KRIs (Non-specific RBM) are perhaps the easiest to set up. There are many programs and systems available.
Statistical RBM approaches (Non-specific RBM) are slightly more difficult. The Sponsor or CRO must send the data to a vendor, which performs the outlier analysis and provides the outcome to the Sponsor or CRO for follow up.
Protocol-specific analytic approaches take more time to set up because the data systems must be built before the analytics can be programmed. Protocol-specific analytics analyze data across different data systems (e.g., comparing clinical data, audit trail data, training data, and delegation of authority to confirm the right person has completed the primary endpoint assessments) and are designed to identify errors in critical aspects of trial conduct (primary efficacy data and processes, safety data and processes, human subject data and processes, investigational management data and processes, and trial integrity) based on a risk assessment of the protocol.
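The cross-database check described above can be sketched in a few lines. This is a simplified illustration, not an actual MANA RBM implementation; the record layouts, names, and dates are all invented.

```python
# Minimal sketch of one protocol-specific check across three data sources:
# was each primary-endpoint assessment performed by a person who was both
# delegated that task and trained before the assessment date?
# All records, names, and dates below are invented for illustration.
from datetime import date

# Primary-endpoint assessments from the clinical database
assessments = [
    {"subject": "1001", "assessor": "Dr. A", "date": date(2023, 3, 10)},
    {"subject": "1002", "assessor": "CRC B", "date": date(2023, 3, 12)},
]

# Delegation-of-authority log: staff delegated to perform the assessment
delegated = {"Dr. A"}

# Training log: assessor -> date protocol training was completed
trained_on = {"Dr. A": date(2023, 1, 5), "CRC B": date(2023, 4, 1)}

def check_assessment(rec: dict) -> list:
    """Return specific, named errors for one assessment record."""
    errors = []
    if rec["assessor"] not in delegated:
        errors.append(f"{rec['subject']}: {rec['assessor']} not on delegation log")
    done = trained_on.get(rec["assessor"])
    if done is None or done > rec["date"]:
        errors.append(f"{rec['subject']}: {rec['assessor']} not trained before assessment")
    return errors

findings = [e for rec in assessments for e in check_assessment(rec)]
# Each finding names the exact subject, person, and failed requirement,
# so no further investigation is needed to learn what the error is.
```

Unlike an outlier flag, each finding here is already the answer: it states which subject, which person, and which requirement failed, the moment the data arrive.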
Identifying the errors that matter:
The non-specific RBM methods (KRIs and statistical RBM approaches) require a multi-step process to identify the errors that matter. First, a signal has to be generated, which means a significant amount of data (i.e., subjects) is needed before a signal can be identified. After the signal occurs, the non-specific approaches require several steps. The Sponsor or CRO must spend time and resources to understand what the signal means and doesn't mean; yes, false positives take a lot of time, effort, and documentation. Depending on the knowledge and skills of the team evaluating signals, they may or may not be able to identify the error and its root cause. Once an error that matters has been identified, additional resources are needed to determine whether it is a systematic error, and to establish its scope and root cause(s), before any remediation can occur.
Another challenge with non-specific RBM methods is that they may miss an error completely. This can occur when the error is a low-incidence event (e.g., a deviation in collecting a primary endpoint assessment) that is combined with many other events (e.g., all major deviations), so the signal is never recognized.
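A bit of invented arithmetic shows how that dilution happens. The counts below are illustrative only:

```python
# Illustrative arithmetic (invented counts): a low-incidence critical
# deviation can vanish inside an aggregate "all major deviations" KRI.
site_a = {"endpoint_deviations": 3, "other_deviations": 40}  # the error that matters
site_b = {"endpoint_deviations": 0, "other_deviations": 45}

def total_rate(site: dict, subjects: int = 50) -> float:
    """Aggregate deviation rate per subject, as a lumped KRI would compute it."""
    return (site["endpoint_deviations"] + site["other_deviations"]) / subjects

rate_a, rate_b = total_rate(site_a), total_rate(site_b)
# Aggregated, the sites look nearly identical (0.86 vs 0.90 deviations per
# subject), so no outlier signal fires -- yet only Site A has endpoint
# deviations (3 vs 0), the low-incidence error that actually matters.
```

A check scoped to endpoint deviations alone would separate the two sites immediately; the lumped rate never will.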
Non-specific RBM methods are also not sensitive to data or process corrections. Once a site is flagged as an outlier, follow-up after remediation is difficult because the volume of data that caused the trigger will still be present, so a site may still show as an outlier even after a successful remediation.
Finally, non-specific RBM methods are only as good as the monitors who manually identify the errors. Expecting a manual process to identify errors that matter is fraught with challenges. If the monitors don't complete their review, if they are unable to compare data across databases, or if they simply don't have time to check critical areas such as IP, they will probably miss the error. And if errors are missed by manual checking, then the KRIs or outlier analysis may not trigger at all.
In contrast, protocol-specific analytic approaches identify critical errors as they occur without waiting for someone to “find” them, and identifying systematic errors is fast, easy, and effective. They are also sensitive to oversight of remediation.
In summary, when all aspects of clinical trial oversight are considered, protocol-specific analytic approaches are the clear winner for relative ease of use and effectiveness. This approach identifies the errors that matter, determines whether they are systematic, supports root cause analysis, and drives remediation using fewer resources than surrogate quality approaches such as Key Risk Indicators and statistical modeling.
Penelope K. Manasco, MD, is the CEO of MANA RBM.