Penelope Manasco, CEO of MANA RBM, disputes recommendations made by the original authors that she finds problematic and worthy of further discussion.
I read with interest the above-referenced paper, published in Applied Clinical Trials on April 8, 2019.
The authors made several recommendations in the paper. I found some problematic and worthy of further discussion.
The goal of Risk Based Monitoring (RBM) is to identify and rapidly correct the “errors that matter”. The Risk Assessment process identifies the errors that, if they occurred, would affect the ability to interpret the study results (trial integrity) or would affect subject safety. These important findings are called the “errors that matter” (ETM).
The goal of the Risk Assessment is to minimize risk wherever possible and to determine how to identify or monitor whether the ETM occurred and whether they are systematic errors.
Multiple Sponsors who have completed FDA and EMA audits within the past year have told us that auditors expect the Sponsor to have the capability to identify whether a critical error was systematic. Sponsors now have an important new duty: to recognize an error, to quickly ascertain whether the error is systematic, and to correct any systematic errors as soon as possible. The authors rightly identified the importance of the Risk Identification and Categorization process.
Their recommendation to move directly to “Defining Key Risk Indicators” as the clinical trial RBM oversight method of choice poses problems. The authors provided no data to support their selection of KRIs as the only RBM method to implement.
Key Risk Indicators represent only one of many RBM methods used to identify whether the ETM occurred and whether the ETM were systematic errors. Table 1 illustrates the variety of RBM methods reported in a recent RBM survey.1
The challenge for clinical researchers is the paucity of direct comparisons of the effectiveness of different oversight methods. Several studies have compared aspects of source data verification (SDV) to determine its effectiveness as an oversight method.2,3 To my knowledge, no comparator studies of the effectiveness of KRIs in identifying ETM or systematic errors have been published. Manasco et al. published the first head-to-head comparison of the MANA Method of RBM to SDV, finding the MANA Method superior in identifying ETM and systematic errors that had been missed by SDV.4
In that evaluation, the MANA RBM team also evaluated whether Key Risk Indicators would have identified the ETM within the same timeframe as the MANA Method.
The following KRIs were evaluated: deviation rate, major deviation rate, screen failure rate, and early termination rate. Rates were used instead of raw counts to correct for differences in enrollment across sites. A Z score of 2 was defined as the trigger, identifying sites that were 2 standard deviations above the mean across sites. Normalizing the data in this way is critical to defining the Key Risk Indicators.
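As an illustration only, the sketch below shows one way such site-level KRI rates and a Z-score trigger could be computed. The data, column names, and site values are hypothetical and are not taken from the original evaluation; only the general approach (normalizing counts to rates and flagging sites at or above a Z score of 2) reflects the description above.

```python
import pandas as pd

# Hypothetical per-site counts; names and values are illustrative only.
sites = pd.DataFrame({
    "site": ["S01", "S02", "S03", "S04", "S05"],
    "enrolled": [40, 25, 60, 15, 50],
    "screened": [45, 28, 67, 21, 54],
    "deviations": [8, 2, 30, 1, 6],
    "major_deviations": [2, 0, 9, 0, 1],
    "screen_failures": [5, 3, 7, 6, 4],
    "early_terminations": [3, 1, 12, 0, 2],
})

# Normalize counts to rates so sites with different enrollment are comparable.
sites["deviation_rate"] = sites["deviations"] / sites["enrolled"]
sites["major_deviation_rate"] = sites["major_deviations"] / sites["enrolled"]
sites["screen_failure_rate"] = sites["screen_failures"] / sites["screened"]
sites["early_termination_rate"] = sites["early_terminations"] / sites["enrolled"]

rate_cols = ["deviation_rate", "major_deviation_rate",
             "screen_failure_rate", "early_termination_rate"]

# Z-score each rate across sites; a Z of 2 or more (2 SD above the mean) triggers review.
for col in rate_cols:
    mean, sd = sites[col].mean(), sites[col].std(ddof=0)
    sites[f"{col}_z"] = (sites[col] - mean) / sd

z_cols = [f"{c}_z" for c in rate_cols]
triggered = sites[(sites[z_cols] >= 2).any(axis=1)]
print(triggered[["site"] + z_cols])
```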
We identified the following weaknesses with the KRIs:
While not evident in our comparison study, further weaknesses in KRIs were identified in other evaluations conducted by MANA RBM, as follows:
A Plea for Systematic Evaluation of RBM Method Effectiveness
I have spent my entire career in clinical research and watched my colleagues in clinical medicine embrace the importance of evidence-based medicine. In fact, most of the trials we conduct are to provide evidence of the optimal therapies for patients.
As clinical researchers, we, too, have an obligation to the patients who serve as our research participants. They give of themselves and undergo risk to participate in trials to advance medical understanding of their disease and treatments. It is our responsibility to conduct trials using the best methods to assure that, if errors occur, we identify and correct them immediately. No research participant's data should be thrown out because of a preventable error on the part of the clinical research community.
Just as treating physicians embraced an evidence-based approach to therapy, we, too, must embrace an evidence-based approach to trial conduct and oversight. As clinical researchers, we must systematically evaluate different oversight methods to determine the strengths and weaknesses of each approach, not just accept opinions or methods that we have used for years. It is only when we know the effectiveness and limitations of different trial oversight methods that we can make informed decisions about the optimal way to evaluate clinical trial quality and integrity.
Until we understand what we need to measure and how, Sponsors, CROs, and Research Sites will waste enormous time and effort on extensive change management and on RBM technology solutions that will not produce the desired outcome: better trial oversight and quality.
I encourage Sponsors, TransCelerate, and the NIH to provide test datasets anonymized to treatment so that effective trial oversight and quality methods can be tested, compared, published, and implemented.
Penelope Manasco is the CEO of MANA RBM.
References: