MCC Metric of the Month Blog: A Second Risk-Based Monitoring Metric

Jul 21, 2014

MCC has just published an executive summary of its Risk Based Monitoring Usage Survey. We’ve also just released our Metrics Search Engine that allows you to find the right metrics for your situation and level of expertise. Check out our web site for details. In April, we looked at eCRF data entry cycle time as a useful RBM metric. This month, let’s look at another metric that’s very important in a Risk-Based Monitoring (RBM) environment: Audit Findings per Site.

Why this metric is important:  Audit findings are a good way to identify sites that are experiencing quality problems – or have in the past – and thus should be monitored more closely than other sites. Hence its value as an RBM metric. Audit findings can also be used to identify problems in the protocol, eCRF, or site training that should be remedied – either for this protocol or for future protocols.

Definition:  The metric can be calculated in two ways:

  • To identify problems across a study protocol, we can simply look at the average number of major and critical findings across all sites audited.
  • To identify problems at a particular site, we can compare the number of major and critical findings for a given site to the average number of major and critical findings for all the other sites that have been audited.

How to calculate this metric: 

  • First, calculate an Audit Score for each audit:
    • If an audit yields 0 major/critical findings, its score is 0.
    • If an audit yields 1-2 major/critical findings, its score is 1.
    • If an audit yields >2 major/critical findings, its score is 4.
  • To identify problems across a study protocol, simply sum all of the audit scores and divide by the number of sites audited. This yields the Average Audit Score for the study protocol.
  • To identify problems at a particular site, simply divide the Audit Score for a given site by the Average Audit Score. This is the Relative Site Score.
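The scoring and averaging steps above can be sketched in a few lines of Python. This is an illustrative sketch of the rules as described, not an official MCC implementation; the function names are our own.

```python
def audit_score(findings: int) -> int:
    """Convert a count of major/critical findings into an Audit Score.

    0 findings -> 0; 1-2 findings -> 1; more than 2 findings -> 4.
    """
    if findings == 0:
        return 0
    if findings <= 2:
        return 1
    return 4


def average_audit_score(findings_per_site: list[int]) -> float:
    """Average Audit Score across all audited sites (study-level view)."""
    scores = [audit_score(f) for f in findings_per_site]
    return sum(scores) / len(scores)


def relative_site_score(site_findings: int, findings_per_site: list[int]) -> float:
    """A site's Audit Score divided by the study's Average Audit Score."""
    return audit_score(site_findings) / average_audit_score(findings_per_site)
```

A Relative Site Score above 1.0 indicates a site with more serious audit findings than the study average.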

An average of fewer than 2 major/critical findings per site is a reasonable target for this metric.

Example:  In the table below, we’ve listed five sites that were audited.  Sites 1 & 2 each had no major/critical audit findings, so each received a Site Audit Score of 0. Site 3 had 2 major/critical audit findings, so received a Site Audit Score of 1. Sites 4 & 5 had 3 and 6 major/critical audit findings respectively, so each received a Site Audit Score of 4.

  Site   Major/critical findings   Site Audit Score
  1      0                         0
  2      0                         0
  3      2                         1
  4      3                         4
  5      6                         4

The Average Audit Score for the 5 sites was calculated as 1.8 [(0+0+1+4+4)/5].

Finally, the Relative Site Audit Scores were calculated by dividing each Site Audit Score by the Average Audit Score.  Sites 4 & 5 have Relative Site Audit Scores above 1.0, so should be looked at more closely.
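The worked example can be reproduced with a short script. This is a minimal sketch assuming the findings listed above; the site numbering and variable names are illustrative.

```python
# Major/critical findings per site, as in the example table.
findings = {1: 0, 2: 0, 3: 2, 4: 3, 5: 6}

def score(f: int) -> int:
    # 0 findings -> 0; 1-2 -> 1; >2 -> 4
    return 0 if f == 0 else (1 if f <= 2 else 4)

scores = {site: score(f) for site, f in findings.items()}
average = sum(scores.values()) / len(scores)        # (0+0+1+4+4)/5 = 1.8
relative = {site: s / average for site, s in scores.items()}

# Sites with a Relative Site Audit Score above 1.0 warrant a closer look.
flagged = [site for site, r in relative.items() if r > 1.0]
print(flagged)  # sites 4 and 5
```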

What you need in order to measure this:  All you need to calculate this metric is the number of major/critical findings for each site audited.

What makes performance on this metric hard to achieve:  Performance can be hard to achieve because the monitors must determine the root causes of the audit findings and remedy them.  If you don’t remedy the root causes, then the problems that led to the audit findings will likely recur.

Things that you can do to improve performance:  Once you are tracking this metric, if problems are occurring at individual sites:

  • Monitors should discuss with the sites the potential causes of the audit findings. 
  • Root cause analysis tools (e.g., 5 Whys or Failure Mode & Effects Analysis) should be used to identify root causes.
  • Corrective actions should be put in place and monitored for implementation to prevent recurrence of the problems.

If problems are occurring across the study protocol (i.e., the Average Audit Score is high), then the same steps as above should be applied at the study level (rather than the site level).

Companion metrics:  Other metrics that you should consider in tandem with this metric include: (1) the MCC Site Quality metric and its related tracking tool, (2) site query response times, and (3) other RBM metrics such as those developed by TransCelerate BioPharma.

Dave Zuckerman, CEO, Metrics Champion Consortium, [email protected]

Linda Sullivan, COO, Metrics Champion Consortium, [email protected]