How We Fall into Metrics Malpractice



Effective clinical trial management depends on accurate and unbiased performance measurement. Performance measures are the lens through which managers monitor clinical trials and assess trial quality. Using the wrong metric* or relying on biased measures distorts managers’ view and hinders effective trial oversight. It is important, then, that clinical trials managers make sure that they are using the right performance measures.

In previous articles (1-3), I have described and illustrated how performance measures should be developed. Now I want to focus on common ways that clinical trial performance measurement deviates from the path of good scientific practice. While I am not the first to report deficiencies in performance measurement in business (4) or healthcare (5), little attention has been devoted to the standards of performance measurement in the clinical trials industry. If performance measures are going to give an accurate view of what is happening in a clinical trial, they must be valid and reliable (i.e., scientific). This requires that the measures be developed and utilized according to good scientific practice. I describe deviations from good scientific practice as ‘malpractice.’

Clinical trials managers do not need to understand the complex statistical procedures that underlie performance measurement in order to assess whether their measures are scientific. Maintaining scientific integrity only requires that the development and use of the measures be overseen by someone who understands the scientific process. Since clinical trials managers are already deeply involved in scientific processes in the context of clinical trials, applying these same scientific principles to performance measurement is a natural extension.

Here are four common types of malpractice in clinical trial performance measurement:

Meaningless Metrics. Many of the performance measures used in clinical trials today have never been validated. Validation is the statistical process used to establish the meaning of what is being measured. Even if a metric has a widely accepted common-sense meaning, it must still be validated to confirm that this meaning is coming through in the measurement. Since an unvalidated measure has an unknown meaning, it is a meaningless metric, and relying on it is a type of malpractice.

In developing performance measures, the first step is to solicit potential items (the questions you will use to measure performance) from industry experts and experienced managers. The clinical trials industry typically does a great job on this step – managers and other experts are generous in donating their time to make suggestions and review items. But scientific practice also demands that these suggested items be statistically validated. It is through this validation process that the validity and reliability of the metrics are established. Without this validation, managers do not know what they are measuring or how accurate the measurements are.
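To make one piece of this concrete, here is a minimal sketch of a common reliability check, Cronbach’s alpha, computed over hypothetical item-level ratings. The data, item names, and threshold are illustrative assumptions, not a prescription; a full validation would also include factor analysis and tests against external criteria.

```python
import numpy as np

# Hypothetical data: 8 trials rated on 4 candidate items (1-7 scale),
# e.g., communication, issue resolution, timeliness, staff expertise.
ratings = np.array([
    [6, 5, 6, 5],
    [4, 4, 3, 4],
    [7, 6, 6, 7],
    [3, 2, 3, 3],
    [5, 5, 4, 5],
    [6, 6, 5, 6],
    [2, 3, 2, 2],
    [5, 4, 5, 4],
], dtype=float)

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability of a set of items (columns)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")  # >= ~0.7 is a common rule of thumb
```

A high alpha only establishes that the items hang together as a single scale; establishing what that scale means (construct validity) still requires linking it to external criteria.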

One common mistake related to this validation issue is to confuse operational metrics with performance metrics. Operational metrics are process indicators, typically used to assess progress on a discrete procedure. When properly used, operational metrics have an intrinsic validity: if you are counting the number of days it takes to recruit patients, the meaning of the measure is clear. But you cannot take the validity of this measure in its specific operational context and generalize it to measure trial performance. The operational metric may or may not be related to performance – you don’t know until you go through the statistical validation. Because of this, it is metrics malpractice to use operational metrics to assess trial performance without validation.
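As a minimal sketch of that first check, here is a simple correlation between an operational metric and a validated measure of trial quality; both columns of numbers are invented for illustration.

```python
import numpy as np

# Hypothetical per-trial data (all values invented for illustration):
recruit_days = np.array([45, 60, 38, 90, 52, 75, 41, 66, 58, 83], dtype=float)
quality_score = np.array([82, 74, 88, 70, 80, 77, 85, 72, 79, 69], dtype=float)

# Does the operational metric relate to trial performance at all?
r = np.corrcoef(recruit_days, quality_score)[0, 1]
print(f"correlation(recruitment days, quality) = {r:.2f}")
# With real data you would also test significance and check the
# relationship within a full model before treating recruitment
# days as a performance metric.
```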

Incomplete Measurement. When assessing clinical trial performance, it is important that you measure all of the performance activities that lead to a high-quality clinical trial. It is common to see clinical trials performance measures that focus on only one component of a clinical trial. The argument goes something like this: “Patient recruitment (for example) is an important part of clinical trial performance, so good patient recruiting means good clinical trial performance.” This argument is false and is an example of reductionism. In reality, clinical trial performance requires a complex integration of many different factors. Patient recruitment, for example, depends on other factors such as protocol design, the marketing materials used to solicit potential subjects, and the investigator site. If you just look at isolated metrics, without incorporating all of the performance drivers, you have two statistical problems. First, the estimated weight you give that driver – not all performance drivers have an equal impact on performance – will be wrong unless you have included all of the drivers in the model (the classic omitted-variable bias). Second, your isolated metric may not even be a significant driver of performance (3). For these reasons, looking only at isolated measures is a type of metrics malpractice.
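The simulation below makes the first problem visible. All the numbers, variable names, and effect sizes are invented: when protocol design drives both recruitment and quality, a model that looks at recruitment alone misattributes protocol design’s effect to recruitment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical drivers: protocol design quality influences recruitment too.
protocol = rng.normal(size=n)
recruitment = 0.7 * protocol + rng.normal(scale=0.5, size=n)
quality = 0.3 * recruitment + 0.8 * protocol + rng.normal(scale=0.5, size=n)

def ols(y, *columns):
    """Least-squares fit with an intercept; returns slope coefficients."""
    X = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

print("recruitment alone:", ols(quality, recruitment))           # badly inflated (~1.1)
print("with protocol:    ", ols(quality, recruitment, protocol))  # ~0.3 and ~0.8, as simulated
```

The isolated model triples recruitment’s apparent weight; only the full model recovers the weights that were actually built into the data.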

Measurement Without Predictive Models. It is common practice within the clinical trials industry to benchmark a measure to an average. This is a type of malpractice. In order to assign the proper weight to a performance driver, you must link all of the trial activities to outcomes in a predictive model. What outcome should we predict? There seems to be a consensus that the quality of the clinical trial is the key construct (6,7). Only when you link the measures to an outcome in a predictive model will you understand the relationship between the performance activities and the outcome of a trial.

Benchmarking to averages can also lead you into the metrics trap. An average only tells you where the middle of the sample sits; it says nothing about whether the sample itself is performing well. What happens when everyone in the sample is underperforming? It then becomes possible to be above average but still low-performing – the metrics trap. The way to avoid the metrics trap is to use predictive models. For these reasons, benchmarking performance measures to averages is metrics malpractice.
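A small numeric sketch of the trap, with invented enrollment figures and an illustrative target standing in for an outcome-linked standard:

```python
import numpy as np

# Hypothetical: each site's actual enrollment vs. a shared target.
enrolled = np.array([18, 22, 15, 25, 20, 17])
target = 40  # every site was expected to enroll 40 patients

site = enrolled[3]                 # our "star" site
print(site > enrolled.mean())      # True: above the benchmark average
print(site >= target)              # False: still underperforming
# Benchmarking to the average says the site is fine; a standard anchored
# to the outcome says otherwise.
```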

Data Hoarding. OK, this is not an absolute deviation from good scientific practice, but it is an increasingly common trend that leads to malpractice, so I include it here. For some reason, analytically inclined people, myself included, tend to collect large datasets when we feel insecure about our analyses. “In a play-it-safe environment, there is a tendency to measure everything.” (8) This big data provides a sense of comfort, and perhaps vicarious pleasure, to assuage our insecurities. But having a large dataset does not lead us closer to understanding clinical trial performance – it obscures our understanding. When metrics proliferate, we end up with “...measurement systems that deliver a blizzard of meaningless data that quantifies practically everything no matter how unimportant.” (9)

In scientific performance measurement, less is more. Our models should be parsimonious and our data collection unobtrusive. Over-measurement leads to confusion about what you should measure, inefficiency, and increased errors in handling a massive dataset. Parsimonious measurement leads to an efficient and clear management system that allows managers to focus on the key drivers of clinical trials performance.
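One way to see the cost of over-measurement is the simulation below (all data invented): a model of trial quality padded with twenty meaningless metrics fits its training trials beautifully but predicts new trials worse than a parsimonious model limited to the three real drivers.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test = 40, 200

def make_data(n):
    drivers = rng.normal(size=(n, 3))         # 3 real performance drivers
    noise_metrics = rng.normal(size=(n, 20))  # 20 metrics that mean nothing
    quality = drivers @ np.array([0.8, 0.5, 0.3]) + rng.normal(scale=0.5, size=n)
    return drivers, noise_metrics, quality

def fit_predict(X_tr, y_tr, X_te):
    """Fit least squares with an intercept on training data, predict test data."""
    X_tr = np.column_stack([np.ones(len(X_tr)), X_tr])
    X_te = np.column_stack([np.ones(len(X_te)), X_te])
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ beta

def r2(y, y_hat):
    return 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

d_tr, nz_tr, y_tr = make_data(n_train)
d_te, nz_te, y_te = make_data(n_test)

lean = fit_predict(d_tr, y_tr, d_te)
bloated = fit_predict(np.hstack([d_tr, nz_tr]), y_tr, np.hstack([d_te, nz_te]))
print(f"parsimonious model test R^2: {r2(y_te, lean):.2f}")
print(f"kitchen-sink model test R^2: {r2(y_te, bloated):.2f}")  # typically lower
```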

Conclusion. Effective clinical trials management depends on scientific performance measures. Many of the performance measures used in clinical trials today, however, have not been developed using scientific principles. In this article, I have identified four of the most common ways that I see clinical trials performance measures deviate from good scientific practice. If we are going to manage clinical trials effectively and be able to demonstrate adequate clinical trial oversight, then it is critical that our measures be developed using the appropriate scientific practices.

 

Michael J. Howley, PA-C, PhD, is an Associate Clinical Professor at the LeBow College of Business, Drexel University ([email protected]), and the Chief Science Officer at CRO Analytics ([email protected]).

References

1. Howley MJ. The Next Generation of Clinical Trial Performance Measurement. Applied Clinical Trials. 2014; forthcoming. Epub February 2014.

2. Howley MJ. Clinical Trials Performance Measures You Can Use (and Believe). Outsourced Pharma [Internet]. April 2014. Available from: http://www.outsourcedpharma.com/doc/clinical-trial-performance-measures-you-can-use-and-believe-0001.

3. Howley MJ, Malamis P. High Performing Study Startups. Applied Clinical Trials. 2014; forthcoming.

4. Spitzer DR. Transforming Performance Measurement. New York: American Management Association; 2007.

5. Porter ME, Lee TH. The Strategy That Will Fix Health Care. Harvard Business Review. 2013;91:50-74.

6. Bhatt A. Quality of Clinical Trials: A Moving Target. Perspectives in Clinical Research. 2011;2(4):124-8.

7. Toth-Allen JP. Building Quality Into Clinical Trials – An FDA Perspective. May 14, 2012.

8. Jaques E, Clement S. Executive Leadership. Blackwell; 1994.

9. Hammer M. The Agenda. Crown Business; 2001.

 

*I use the terms ‘measures’ and ‘metrics’ interchangeably.
