The Risks of KRI in RBM


Applied Clinical Trials

Penelope Manasco, CEO of MANA RBM, disputes recommendations made by the original authors that she finds problematic and worthy of further discussion.

I read with interest the above-referenced paper, published in Applied Clinical Trials on April 8, 2019.

The authors made several recommendations in the paper. I found some problematic and worthy of further discussion.

The goal of Risk-Based Monitoring (RBM) is to identify and rapidly correct the “errors that matter.” The Risk Assessment process identifies the errors that, if they occurred, would affect the ability to interpret the study results (trial integrity) or affect subject safety. These important findings are called the “errors that matter” (ETM).

The goal of the Risk Assessment is to minimize risk wherever possible and to determine how to identify or monitor whether the ETM occurred and whether they are systematic errors.

Multiple Sponsors who have completed FDA and EMA audits within the past year have told us that auditors expect the Sponsor to have the capability to identify whether a critical error was systematic. Sponsors now have an important new duty: to recognize an error, to quickly ascertain whether the error is systematic, and to correct any systematic error(s) as soon as possible. The authors rightly identified the importance of the Risk Identification and Categorization process.

Their recommendation to move directly to “Defining Key Risk Indicators” as the default RBM oversight method for a clinical trial poses problems. The authors provided no data to support their selection of KRIs as the only RBM method implemented.

Key Risk Indicators represent only one of many RBM methods used to identify whether the ETM occurred and whether the ETM were systematic errors. Table 1 illustrates the variety of RBM methods reported in a recent RBM survey.1

Table 1. Different RBM methods available.

The challenge to clinical researchers is the paucity of direct comparisons of the effectiveness of different oversight methods. Several studies have compared aspects of source document verification (SDV) to determine its effectiveness as an oversight method.2,3 To my knowledge, no comparator studies of the effectiveness of KRIs in identifying ETM or systematic errors have been published. Manasco et al. published the first head-to-head comparison of the MANA Method of RBM to SDV, finding the MANA Method superior in identifying ETM and systematic errors that had been missed by SDV.4

In that evaluation, the MANA RBM team also assessed whether Key Risk Indicators would have identified the ETM within the same timeframe as the MANA Method.

The following KRIs were evaluated: deviation rate, major deviation rate, screen failure rate, and early termination rate. Rates were used instead of raw counts to correct for the enrollment numbers at each site. A Z score of 2 was defined as the trigger, indicating that the site was two standard deviations above the mean. Normalizing the data is critical to defining the Key Risk Indicators.
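
To make the mechanics concrete, here is a minimal sketch of this kind of site-level KRI calculation. The site names, deviation counts, and enrollment figures are hypothetical assumptions for illustration only; the z ≥ 2 trigger mirrors the threshold described above.

```python
# Minimal sketch of a site-level KRI calculation (hypothetical data only).
from statistics import mean, stdev

# Deviation counts and enrolled subjects per site (illustrative assumptions)
sites = {
    "Site 01": {"deviations": 8,  "subjects": 10},
    "Site 02": {"deviations": 6,  "subjects": 10},
    "Site 03": {"deviations": 12, "subjects": 12},
    "Site 04": {"deviations": 7,  "subjects": 10},
    "Site 05": {"deviations": 9,  "subjects": 10},
    "Site 06": {"deviations": 4,  "subjects": 8},
    "Site 07": {"deviations": 9,  "subjects": 12},
    "Site 08": {"deviations": 20, "subjects": 5},
}

# Normalize raw counts to per-subject rates so sites of different sizes are comparable
rates = {site: d["deviations"] / d["subjects"] for site, d in sites.items()}

mu = mean(rates.values())
sigma = stdev(rates.values())

for site, rate in sorted(rates.items()):
    z = (rate - mu) / sigma
    flag = "  <-- trigger (z >= 2)" if z >= 2 else ""
    print(f"{site}: rate = {rate:.2f} deviations/subject, z = {z:+.2f}{flag}")
```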

We identified the following weaknesses with the KRIs:

  • KRIs were not sensitive and resulted in false negatives.

    One of the ETM identified by the MANA Method of RBM in our study comparing RBM to SDV4 was a systematic error in scheduling subjects at a research site, which resulted in all primary efficacy and safety assessments being out of window. Using a direct approach, we identified the error immediately. Using the indirect method of the KRIs, no signal was identified in either the deviation rate or the major deviation rate.

    Most KRIs require significant data to accrue before the KRI is elevated to a higher risk level. This means that the event of concern may be repeated numerous times before the signal (a risk-level increase) is sufficient to identify the increased risk.

    In addition, low-incidence events (e.g., a specific deviation that affects primary efficacy assessments) can be completely missed when they are “lumped” into a larger deviation KRI, resulting in a critical false negative.
     

  • KRIs resulted in false positives.

    For instance, if a site has a small number of subjects and one subject with a large number of deviations, the site will be incorrectly flagged as high risk. Differentiating a false-positive signal from a real signal takes significant time and resources.
     

  • KRIs do not evaluate data across databases, where errors often occur.

    In this study, assessing whether the primary efficacy lab sample was collected per protocol and received by the lab was a critical ETM. KRIs, which evaluate only the clinical database, would not be able to identify this error (see the sketch after this list).

    In addition, many studies (e.g., CNS studies, dermatology studies) require the use of assessment tools administered by a trained investigator, with the same investigator completing all assessments for a subject. KRIs are not designed to specifically evaluate whether the primary assessment was conducted by the correct person in the correct role, whether the same person performed all assessments for a subject, and whether that person was appropriately trained. The data to conduct this assessment are usually in separate databases and may also include the audit trail. Standard KRIs will miss this critical protocol-specific process error. Missing this finding may result in censoring multiple subjects whose endpoints were not correctly completed and documented, potentially resulting in a failed trial.
     

  • Lack of ability to identify systematic errors.

    Even after a signal is identified, additional time and resources are needed to understand the actual error and to determine whether the error is systematic at one site or at multiple sites. In addition, it is not a given that the staff assigned to figure out why a KRI converted to high risk have the skills, training, or data to identify the error efficiently and to determine whether it was systematic within or across research sites.
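
For contrast with an indicator-based approach, the sketch below shows one way a direct, cross-database reconciliation check might be expressed, flagging protocol-required primary efficacy samples that the central lab never received. The table layouts, column names, and records are hypothetical assumptions, and this sketch is not the MANA Method itself.

```python
# Hypothetical sketch of a cross-database check that a KRI computed from the
# clinical database alone cannot perform.
import pandas as pd

# Visits at which the protocol requires the primary efficacy sample
# (exported from the clinical database) -- illustrative records only
clinical = pd.DataFrame({
    "subject": ["001", "001", "002", "002", "003"],
    "visit":   ["Week 4", "Week 12", "Week 4", "Week 12", "Week 4"],
})

# Samples the central laboratory logged as received
# (exported from the lab database) -- illustrative records only
lab = pd.DataFrame({
    "subject": ["001", "001", "002", "003"],
    "visit":   ["Week 4", "Week 12", "Week 4", "Week 4"],
})

# Left-join required visits against lab receipts; unmatched rows are
# protocol-required samples with no lab receipt.
merged = clinical.merge(lab, on=["subject", "visit"], how="left", indicator=True)
missing = merged.loc[merged["_merge"] == "left_only", ["subject", "visit"]]

print(missing)  # subject 002 has no Week 12 sample receipt
```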

While not evident in our comparison study, other weaknesses in KRIs identified in further evaluations conducted by MANA RBM are as follows:

  • Lack of sensitivity to interventions.

    For instance, once a KRI has become “high risk,” it is nearly impossible to identify whether an intervention was successful or to recognize new errors that may arise within that KRI (e.g., deviation counts or rates).
     

  • KRIs vary depending on the underlying analytics and data structure, and normalization of the data is critical.

    Measuring raw counts of an event does not take into account the number of subjects at that research site, so the data are not comparable across research sites. If you review “normalized” data, you need to know whether the counts have been corrected for the number of subjects at the correct phase of the study. For instance, the subject correction factor for screen failures (subjects screened) differs from that for early terminations (subjects enrolled).

    Standardizing the data based on the number of subjects may artificially inflate the rate at sites with small numbers of subjects. Another approach, using Z scores (corrected for the distribution of the data across sites), may provide a better understanding of true outliers, but the caveats above (and below) still hold.

    Finally, KRIs often assume that the data have an underlying normal distribution, but without direct examination of this assumption (e.g., via scatter plots or distribution plots), overall conclusions may be incorrect and important discrepancies may be missed.
     

  • KRIs are dependent on the stage of each subject in the study. Adverse event rates are often used as a quality KRI, but comparisons across sites depend on the phase of the study for each subject and site.

    If subjects at one site have already received study drug treatment, they should not be compared to a site whose subjects have not yet received treatment. A corollary would be comparing a site whose subjects have been treated for nearly a year with a site whose subjects are newly treated. Rates of adverse events will likely differ based on timing alone, as the sketch below illustrates.
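
One common way to account for timing differences is to compare adverse events per subject-year of exposure rather than raw counts. The minimal sketch below uses hypothetical site names, AE counts, and exposure figures to show how two sites with identical counts can have very different exposure-adjusted rates.

```python
# Hypothetical illustration: identical raw AE counts, very different
# exposure-adjusted rates.

sites = {
    "Site A": {"ae_count": 20, "exposure_years": 10.0},  # subjects treated ~1 year
    "Site B": {"ae_count": 20, "exposure_years": 2.0},   # subjects newly treated
}

for site, d in sites.items():
    rate = d["ae_count"] / d["exposure_years"]  # AEs per subject-year of exposure
    print(f"{site}: {d['ae_count']} AEs, {rate:.1f} AEs per subject-year")

# Both sites report 20 AEs, but Site B's exposure-adjusted rate is five times
# higher; a KRI built on raw counts alone would treat the sites as equivalent.
```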

A Plea for Systematic Evaluation of RBM Method Effectiveness

I have spent my entire career in clinical research and watched my colleagues in clinical medicine embrace the importance of evidence-based medicine. In fact, most of the trials we conduct are to provide evidence of the optimal therapies for patients.

As clinical researchers, we, too, have an obligation to the patients who serve as our research participants. They give of themselves and undergo risk to participate in trials to advance medical understanding of their disease and its treatments. It is our responsibility to conduct trials using the best methods to assure that, if errors occur, we identify and correct them immediately. No research participant's data should be thrown out because of a preventable error on the part of the clinical research community.

Just as treating physicians embraced an evidence-based approach to therapy, we, too, must embrace an evidence-based approach to trial conduct and oversight. As clinical researchers, we must systematically evaluate different oversight methods to determine the strengths and weaknesses of each approach, rather than simply accepting opinions or methods that we have used for years. It is only when we know the effectiveness and limitations of different trial oversight methods that we can make informed decisions about the optimal way to evaluate clinical trial quality and integrity.

Until we understand what we need to measure and how, Sponsors, CROs, and Research Sites will waste enormous time and effort on extensive change management and on implementing RBM technology solutions that will not produce the desired outcome: better trial oversight and quality.

I encourage Sponsors, TransCelerate, and the NIH to provide test datasets anonymized to treatment so that effective trial oversight and quality methods can be tested, compared, published, and implemented.

 

Penelope Manasco is the CEO of MANA RBM.

References:

  1. Manasco PK. RBM: Barriers to Adoption. Applied Clinical Trials, October 11, 2018.
  2. Sheetz N, Wilson B, Benedict J, Huffman E, Lawton A, Travers M, Nadolny P, Young S, Given K, Florin L. Evaluating Source Data Verification as a Quality Control Measure in Clinical Trials. Therapeutic Innovation and Regulatory Science. 2014;48(6):671-680.
  3. Smith CT, Stocken DD, Dunn J, Cox T, Ghaneh P, Cunningham D, Neoptolemos JP. The Value of Source Data Verification in a Cancer Clinical Trial. PLoS ONE. 2012;7(12):e51623. doi:10.1371/journal.pone.0051623.
  4. Manasco PK, Herbel E, Bennett S, Pallas M, Bedell L, Thompson D, Fielman K, Manasco G, Kimmel C, Lambeth E, Danzig L. Comparing Risk-Based Monitoring and Remote Trial Management Versus Source Document Verification. Applied Clinical Trials, September 28, 2018.