Properly Assessing Quality in Clinical Trials

Article

Applied Clinical Trials

February 1, 2015
Volume 24
Issue 2


After a five-year review of clinical trial quality measurement research and practices, we have concluded that clinical trial quality measurement does not meet current scientific standards. This creates an uncomfortable situation in a $71 billion industry: trial sponsors and managers must depend on indirect indicators to judge the quality of their clinical trials. This lack of scientific quality measurement contributes to the industry's struggles with problems like cost overruns and adherence to timelines. Effective clinical trial management and improvement can only happen if there are valid and reliable (i.e., scientific) quality metrics. More information on the research and findings can be found at: http://bit.ly/1CeggPz.

The challenge of measuring quality

One of the major obstacles to achieving scientific quality management is the recognition that clinical trials are services, and that measuring service quality is fundamentally different from measuring product quality. So while the quality of a manufactured pill can be assessed with the usual operational metrics, the quality of conducting a clinical trial requires a different approach to measurement. The differences between products and services that have the greatest impact on measuring service quality are intangibility and heterogeneity. First, services are intangible: they lack objective attributes that can be directly observed. In measuring clinical trials, then, we need to depend on evaluations from qualified expert witnesses, which means moving to purposive rather than random sampling. Second, services are heterogeneous: each clinical trial is different from previous trials.

This creates a couple of problems: if every trial is different from all other trials, how do you create standards for performance, and who decides whether those standards have been met? The customer (in the case of clinical trials, the sponsor or patients) has the prerogative of evaluating the trial, because the customer is the one in a position to judge the value created and to adjust expectations to the context. While there is an intuitive appeal to adopting manufacturing approaches to measuring quality and performance, clinical trials are services, not products. Because services and products are fundamentally different, applying manufacturing measurement to a clinical trial service is not appropriate.

Research focus

The purpose of the larger paper was to assess overall quality and performance in clinical trials using scientific measurement methods. The research results from a collaboration between CRO Analytics, Drexel University, and Applied Clinical Trials, and was conducted in two phases. Phase 1 was a qualitative study that drew on input from industry experts to identify the performance and quality drivers of trials. Our subjects broke performance activities down into four distinct stages: sales & contracting, study startup, conduct, and closeout. Phase 2 was a quantitative study in which we purposively sampled experienced industry executives through an online survey, solicited through our contacts, industry association appeals, and outreach to Applied Clinical Trials subscribers. There were 300 respondents, each of whom evaluated the overall performance of an individual stage of a trial in which they had recently participated.
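
To make the survey design concrete, the minimal sketch below shows one way stage-level evaluations like these could be aggregated into summary quality scores. It is illustrative only, not the study's actual analysis; the column names, the 1-7 rating scale, and the sample values are assumptions.

import pandas as pd

# Hypothetical response records: one row per respondent, each evaluating a
# single stage of a recent trial (stage names follow the four stages above).
responses = pd.DataFrame({
    "stage": ["sales & contracting", "study startup", "conduct",
              "closeout", "study startup", "conduct"],
    "performance_score": [5.2, 3.8, 6.1, 5.9, 4.1, 5.7],  # assumed 1-7 ratings
})

# Stage-level summary: mean score and variation across respondents.
summary = (responses
           .groupby("stage")["performance_score"]
           .agg(["mean", "std", "count"]))
print(summary)

Grouping by stage in this way is what allows comparisons such as "performance was lowest in study startup" to be made across trials rather than within a single one.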

In presenting our data approach at conferences, we encounter people who prefer operational data because they are "more objective." As we have shown, attempts to use operational data as a service quality measure are fundamentally and logically flawed because they fail to account for the differences between manufactured goods and services. Second, objective data, especially operational data, typically lack validity as quality indicators. Consider, for example, the number of days it takes to recruit patients: "days" is a measure of time, not of quality. Whether 76 days to recruit patients is high performing depends on the individual trial, so the metric lacks validity as a quality measure. Third, the assessments we collect are evaluations, not opinions. Finally, the measures we describe here all meet the statistical standards for validity and reliability. We are not aware of any operational metrics that can meet these basic scientific standards.
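
One common reliability standard for multi-item survey measures is internal consistency, often summarized with Cronbach's alpha. The sketch below is a generic illustration of that statistic, not a reproduction of the study's own analysis; the number of items, the rating scale, and the sample data are assumptions.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: 5 respondents rating 4 quality items on a 1-7 scale.
ratings = np.array([
    [6, 5, 6, 5],
    [4, 4, 5, 4],
    [7, 6, 6, 7],
    [3, 4, 3, 4],
    [5, 5, 6, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")  # 0.70+ is a common threshold

A metric such as "days to recruit patients" has no analogous test: it can be recorded precisely, but there is no statistical basis for claiming it consistently reflects quality across different trials.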

Results

We received 300 responses assessing trials in the U.S., Europe, Russia, India, China, Japan, elsewhere in Asia, and South/Central America. The average number of subjects was 1,068 per trial and the average number of sites was 97. Study results included:

  • The average quality and performance scores were lower than we expected, and scores varied considerably.

  • Quality varied by the phase of the trial, with performance being highest in Phase II and lowest in Phase IV.

  • Quality varied by the number of subjects and sites.

  • Performance was lowest in study startup, while conduct and closeout had the highest performance scores.

  • Performance also varied by the number of subjects and sites in the trial, but in a different pattern compared to quality.

The study results raise concern not only about the average quality of clinical trials but also about the variation in quality. To begin addressing these concerns, it is critical that the industry adopt the scientific measurement approaches that are already standard across most other service industries.
