Quality Drivers in Clinical Trial Conduct

Applied Clinical Trials

Adopting scientific quality measurement that recognizes that clinical trials are a service will reap benefits.

The clinical trials industry has demonstrated a commitment to improving the quality of clinical trials. Over the past decade, a variety of quality initiatives have emerged and billions of dollars have been invested to improve the quality and performance of clinical trials. New quality management techniques like risk-based monitoring and Quality by Design (QbD) have also become popular. Despite these efforts and investments, however, there has been little impact on the quality of clinical trials.

We believe that these quality efforts and investments have had limited impact because we have been using the wrong quality measures. It is common practice, for example, to use manufacturing quality indicators like the number of days to recruit or the number of queries during the trial. But these don’t measure quality – one is a measure of time and the other is a defect rate. These indicators are performance drivers that may be related to quality. Quality is a different construct that must be measured separately. And since a clinical trial is a type of professional service, it is important to measure the service quality of the clinical trial. Once the service quality has been established, the other performance metrics can then be related to clinical trial service quality.

While measurement may seem like a mundane detail, it is a critically important issue. First, regulators "…require sponsors to monitor the conduct and progress of their trials," and that monitoring should continue "on a continuous basis throughout the design, conduct, evaluation, and reporting of the trial" [4,5]. If you are not using the right measures in your oversight, you cannot properly and efficiently monitor your trial. Perhaps more importantly, failure to detect impending quality failures delays trials and is costly. Unless the industry adopts valid and reliable measures of quality and critical performance drivers, it will not be able to improve the quality of clinical trials.

The purpose of this research is to assess the quality within the conduct of clinical trials and then identify the key drivers of performance within study conduct and relate them to quality. In doing so, we identify the performance metrics that have the greatest impact on clinical trial quality.

Methods

We focus on quality within the conduct of the trial (as opposed to study startup or closeout). To measure the service quality of the conduct of the trial, we used a single-item global indicator. Since it is common practice within the industry to use operational metrics as proxy indicators for quality, we had to adopt an exploratory posture to identify the performance metrics we should assess. First, we sought to create a comprehensive list of performance drivers. We interviewed a series of clinical trials managers over the course of 18 months and asked them to list all of the performance activities that impact the quality of trial conduct. Both sponsors and CROs were included in our sampling. We continued to interview managers until we were no longer identifying new quality drivers. In all, thirty-six interviews were conducted. Respondents identified seven general areas that were important to conduct quality: adaptability, adherence, enrollment, functions, monitoring, project manager, and site relations.

Second, we performed a quantitative analysis to distill these measures down to the essential indicators and to calculate the magnitude of each indicator's relationship to conduct quality. To do this, the items were loaded into an online survey tool and edited for clarity and grouping. Sponsor and CRO trial managers were recruited through purposive sampling to complete the survey, excluding respondents from the qualitative phase. Applied Clinical Trials collaborated in the data collection.

We received 82 usable surveys: 37% from Phase II trials, 46% from Phase III, and 17% from Phase IV. The average trial had 74 sites and 558 subjects.

We used the statistical program SmartPLS 3 to identify the essential indicators of conduct quality. Missing data accounted for less than 5% of values for all variables except data management, protocol amendments, and project management, each of which had less than 10% missing; these missing data points were imputed. All of the items had acceptable univariate and multivariate normality. All variables used a 1-to-10 scale, with the exception of the covariates (phase, sites, subjects, and a dummy sampling variable).
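The article does not specify how the missing values were imputed. As a minimal sketch of one simple approach (mean imputation), the Python snippet below uses invented data and illustrative column names; it is not the study's actual procedure or dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical 1-to-10 survey ratings with a few missing values.
# Column names are illustrative, not the study's actual variables.
df = pd.DataFrame({
    "data_management":     [8, 7, np.nan, 9, 6],
    "protocol_amendments": [7, np.nan, 8, 8, 7],
    "project_management":  [9, 8, 9, np.nan, 7],
})

# Check the missing-data rate per variable (the study reports <10%
# for these three variables and <5% for all others).
print(df.isna().mean())

# Mean imputation: replace each missing value with that variable's
# mean across respondents.
df_imputed = df.fillna(df.mean())
print(df_imputed)
```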

Within the seven areas identified by managers in the first, qualitative phase of our research, adaptability was a formative construct consisting of protocol amendments, change order processes, managing protocol violations, and resolving queries. Only protocol amendments (β= .31, t = 2.67, p= .009) and change order processes (β= .31, t = 2.54, p= .01) were significant, so only these two items were included in the adaptability index. Adherence was a reflective construct assessed by adherence to both the study protocol (γ= .88, t = 21.2, p< .001) and the medical management/safety plan (γ= .87, t = 15.3, p< .001). Enrollment was constructed as a formative indicator assessed by the clinical study team's performance in enrolling patients that met the criteria and keeping the sponsor up to date on the enrollment process, as well as timeliness in first site, last site, first patient, and last patient. Only adhering to the timeline for enrolling the last patient (β= .87, t = 4.43, p< .001) was significantly related to enrollment performance, so it was used as the enrollment indicator. Performance on the functions was structured as a reflective indicator and included project management (γ= .81, t = 18.9, p< .001), data management (γ= .90, t = 34.3, p< .001), regulatory (γ= .90, t = 40.8, p< .001), centralized diagnostic service (γ= .86, t = 25.1, p< .001), CRF tracking (γ= .83, t = 21.7, p< .001), and external data sources (γ= .87, t = 25.8, p< .001), all of which were scaled into the latent functions construct. Performance of the project manager, site relations, and routine monitoring visits were each measured with a single global indicator [6]. All of the latent constructs had reliabilities > .89, discriminated from each other, and established acceptable validity.
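The article reports construct reliabilities above .89 without naming the coefficient used. As a hedged illustration, the sketch below computes Cronbach's alpha, one common reliability measure (PLS software typically also reports composite reliability), for a two-item reflective construct like adherence; the ratings are invented.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) array."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Invented 1-to-10 ratings for two reflective adherence items:
# study protocol adherence and medical management/safety plan adherence.
adherence = np.array([
    [9, 8], [7, 7], [8, 9], [6, 6], [9, 9], [7, 8],
])
print(f"alpha = {cronbach_alpha(adherence):.2f}")  # ~0.89 on this toy data
```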

Once the measurement model was established, we examined the magnitude of the relationships between the performance drivers and conduct quality in a regression model estimated with SAS 9.3. All variables were mean-centered prior to estimation, so each coefficient describes the relationship of a predictor to conduct quality at the average of all the other factors.
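The published analysis was run in SAS 9.3. As a rough Python stand-in for the same idea, the sketch below mean-centers the predictors and fits an ordinary least squares model; the data are simulated and the driver names merely mirror the seven areas, so none of the coefficients reproduce the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 82  # matches the study's sample size

# Simulated stand-in ratings on a 1-to-10 scale.
drivers = ["adaptability", "adherence", "enrollment", "functions",
           "monitoring", "project_manager", "site_relations"]
df = pd.DataFrame(rng.uniform(1, 10, size=(n, len(drivers))),
                  columns=drivers)
df["conduct_quality"] = rng.uniform(1, 10, size=n)

# Mean-center the predictors so each coefficient is the effect of that
# driver with all other drivers held at their average levels.
X = sm.add_constant(df[drivers] - df[drivers].mean())
fit = sm.OLS(df["conduct_quality"], X).fit()
print(fit.summary())
```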

Results

The model performed well (F(10,71) = 29.2, p< .001) and explained a substantial amount of the variance in conduct quality (R² = .80). The means, standard deviations, and correlations of the variables used in the statistical analysis are shown in Table 1.

Table 1: Descriptive Statistics 

The relationships between the performance activities and conduct quality are illustrated in Figure 1. The magnitude of each coefficient describes the strength of the relationship between the driver and conduct quality. The coefficient for adaptability (β= .20, t = 2.12, p= .04), for example, means that a one-unit increase (on a 1-to-10 scale) in the adaptability of the study team improves conduct quality by .20 at the average levels of adherence, enrollment, functions, monitoring, project manager, and site relations. Given the scaling, you could also describe this as 'a 10% increase in study team adaptability improves quality by 2%.'
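To make the arithmetic behind that interpretation explicit, a worked version of the conversion (using the article's convention that one scale unit equals 10% of the 1-to-10 scale):

$$
\Delta\text{quality} = \beta \times \Delta\text{adaptability} = 0.20 \times 1 \text{ unit} = 0.20 \text{ points}, \qquad \tfrac{0.20}{10} = 2\%.
$$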

Figure 1: Drivers of Conduct Quality. Note: Variables illustrated with solid colors are statistically significant (p< .05); variables illustrated with cross-hatched colors are not statistically significant.

There was a negative but non-significant relationship between adherence (β= -.11, t = -1.02, p= .31) and conduct quality. Although the coefficient estimate is negative, its non-significance means we could not identify a relationship between adherence (to the medical management/safety plan and study protocol) and conduct quality.

Enrollment (β= .18, t = 2.32, p= .02) was positively and significantly related to conduct quality: improving enrollment by 10% increased conduct quality by 2% at the average levels of all the other conduct quality drivers. The various functions (β= .13, t = 1.22, p= .22) involved in a trial did not improve quality. The project manager (β= .35, t = 3.69, p< .001) had the greatest impact on conduct quality; improving project manager performance by 10% increased conduct quality by 3.5%.

Managing site relationships (β= .31, t = 3.17, p= .002) had the second-greatest impact on conduct quality. Routine monitoring visits (β= -.15, t = -1.86, p= .06), however, had a negative and marginally significant impact on conduct quality. There was no significant interaction (β= .02, t = 0.65, p= .51) between site relationships and routine monitoring visits: regardless of the status of site relationships, routine monitoring visits degraded conduct quality.

The covariates in the model, including phase (β= -.23, t = -1.25, p= .26), sites (β= .06, t = .93, p= .18), and subjects (β= .00, t = .04, p= .48), were all non-significant.

In summary, the most impactful drivers of conduct quality were the project manager and site relations. Enrollment and the adaptability of the study team had lesser but positive effects on conduct quality. Routine monitoring visits had a negative impact on quality.

Discussion

The ability of a manager to oversee a clinical trial depends on having scientific measurement instruments that provide a clear view of what is going on in the trial. We believe that all of the efforts and investments in clinical trial quality over the past decade have had limited impact on trial quality because of the industry's exclusive focus on operational metrics. We have been using, in other words, the wrong performance metrics. It is remarkable that directly measuring the quality of clinical trials is not common practice in the industry. The purpose of this paper is to illustrate how sponsors should measure the quality of clinical trials and to identify the performance drivers that most affect quality.

A major contribution of this paper is to quantify the magnitude of the relationship between each of the performance drivers and conduct quality. These results allow managers to identify the most important and substantial drivers of quality. In this way, the approach to quality measurement is efficient because you can focus on the few drivers (i.e., project manager performance, site relations, adaptability, and enrollment) that will impact conduct quality. At the same time, this approach is comprehensive: we know we have captured the major quality drivers because the R² was 80%. While these analytical methods carry off-putting names like regression modeling, predictive analytics, or business analytics, these techniques can be performed in an Excel spreadsheet. These are basic analytical techniques that should be used more often within the clinical trials industry.

Another contribution of this study is to directly measure the quality of the trial. This is unusual in the clinical trials industry; in our five years of research in this area, we have not been able to identify any previous assessment that captured the overall quality of a clinical trial. A critical step in clinical trial quality measurement is to recognize that a clinical trial is a service, not a manufacturing process. Quality measurement techniques, as a result, have a different look and feel than the operational metrics used in manufacturing. We occasionally find managers who are uncomfortable with this services approach because they apply the manufacturing analogy to clinical trials. But using manufacturing instruments to measure service performance and quality is misguided and produces invalid and biased results.

One benefit of focusing on service quality is that it cuts across all of the organization's silos. Our measures, for example, included protocol amendments, the medical management and safety plan, regulatory, and external data sources, all coming from various parts of the CRO or sponsor organization. Since service quality is generated from across the organization, it is important to draw assessments from all parts of the company. We have found that C-level executives, in particular, appreciate this feature of quality measurement.

What should we make of the non-significant drivers of quality in this analysis? In interpreting these results, it is important to remember that this research consisted of two stages: an initial qualitative phase, in which experienced executives identified the key drivers of clinical trial quality, and a quantitative phase, which examined how changes in those drivers impacted quality. A non-significant coefficient simply means that changing the level of the driver does not change quality; it does not mean that the driver is not an important contributor to a quality trial. A driver's importance was established when it was identified in the executive interviews.

We understand non-significant drivers of quality to be a type of hygiene factor, as opposed to enhancing (a.k.a. motivating) factors [7,8]. That is, hygiene drivers do not increase quality as they improve, but will degrade quality if they are absent; enhancing factors increase quality as they are improved. The results of this study should not be taken to suggest that adherence to the medical management plan, safety plan, or study protocol is unnecessary. Improving these factors, however, will not improve conduct quality.

Finally, the results of this study have three important implications for risk-based monitoring. First, the negative relationship of monitoring visits to conduct quality and the positive effect of site relations suggest a complex relationship between attempts to maintain data integrity and conduct quality. Second, we believe that risk-based monitoring efforts must be guided by scientific and valid performance metrics using leading indicators of performance (e.g., adherence and enrollment performance). Third, attempts to use operational metrics to guide monitoring will lead to a backward view of the clinical trial (they are lagging indicators) and to gaming by the sites.
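As a purely hypothetical illustration of monitoring guided by leading indicators, the sketch below flags sites whose adherence or enrollment ratings fall below a trigger value; the site names, ratings, and threshold are all invented, not drawn from the study.

```python
import pandas as pd

# Invented site-level ratings for two leading indicators.
sites = pd.DataFrame({
    "site": ["S01", "S02", "S03", "S04"],
    "adherence_rating":  [8.5, 6.1, 9.0, 5.4],
    "enrollment_rating": [7.8, 8.2, 6.9, 5.0],
})

THRESHOLD = 6.5  # hypothetical trigger for a targeted follow-up visit
leading = ["adherence_rating", "enrollment_rating"]

# Flag a site when any leading indicator drops below the trigger.
sites["flag"] = (sites[leading] < THRESHOLD).any(axis=1)
print(sites[sites["flag"]])
```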

In summary, we believe that adopting scientific quality measurement that recognizes that clinical trials are a service will allow the industry to reap the benefits of all its quality efforts and investments. At the same time, the approach described here meets regulatory requirements for oversight of clinical trials. The primary advantages of this approach are, first, comprehensiveness: our model captured 80% of the variance in conduct quality, meaning that we have identified the major drivers of quality. Second, this approach is efficient: respondents took an average of 4 minutes and 38 seconds to complete the assessments. Third, these measures are meaningful: we established the validity of the assessments in the measurement model analysis, whereas operational metrics lack validity. Fourth, these measures are objective: ideally, these assessments would be administered by an independent third party, but many of the interpersonal biases typical of assessments like this are eliminated because we are assessing organizational quality and performance. Finally, these assessments are reliable, as established in the measurement model.


AUTHORS:

Michael J. Howley, PA-C, PhD, is Associate Clinical Professor at the LeBow College of Business, Drexel University. Peter Malamis, MBA, is CEO of CRO Analytics, LLC.


References

[1] M. Howley and P. Malamis, "The Quality of Clinical Trials," Applied Clinical Trials, available at http://www.appliedclinicaltrialsonline.com/quality-clinical-trials, (accessed on February 20, 2015).

[2] M. Howley, "How We Fall Into Metrics Malpractice," Applied Clinical Trials, May 27, 2014, available at http://www.appliedclinicaltrialsonline.com/node/245666, (accessed on February 17, 2015).

[3] M. Howley, “The Next Generation of Clinical Trial Performance Measurement,” Applied Clinical Trials, Feb 27, 2014, available at http://www.appliedclinicaltrialsonline.com/next-generation-clinical-trial-performance-measurement, (accessed on February 26, 2014).

[4] United States Department of Health and Human Services Food and Drug Administration “Guidance for Industry: Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring,” August 2013 (pg.2).

[5] “Reflection Paper on Risk Management in Clinical Trials,” European Medicines Agency, November 18, 2013 (pg. 7).

[6] L. Bergkvist and J.R. Rossiter. "The Predictive Validity of Multiple-Item Versus Single-Item Measures of the Same Constructs." Journal of Marketing Research 44(2) 175-184 (2007).

[7] R. Johnston, "The Determinants of Service Quality: Satisfiers and Dissatisfiers." International Journal of Service Industry Management 6(5) 53-71 (1995).

[8] F. Herzberg, B. Mausner and B. Snyderman, The Motivation to Work, (Wiley, New York, 1959).

[9] M. Howley and P. Malamis, "Clinical Trial Performance Measures You Can Use (and Believe)," Clinical Leader, October 2013, available at http://www.outsourcedpharma.com/doc/clinical-trial-performance-measures-you-can-use-and-believe-0001.
