Performance Metrics: Optimizing Outcomes

Applied Clinical Trials Supplements, 10-02-2005

A clearly defined set of performance measures is an integral part of the central laboratory selection and management process.

A variety of economic and social pressures have fueled a widespread search for reductions in the cost of research and development (R&D) in the health care industry. Pharmaceutical companies (pharma) and clinical research organizations (CROs) have been working vigorously to improve their productivity and return on investment by modifying processes, decreasing cycle times, and implementing improvements wherever possible.1 Furthermore, experts believe that pharma companies previously averse to strategic alliances are now being required to pursue more long-term development partnerships. This trend is expected to continue, given the enormous competitive emphasis on demonstrating a positive return on R&D investment and the continuous need to bring significant new products to market.2 While ultimately beneficial, managing relationships with multiple strategic partners further complicates efforts to reduce the cost of R&D, improve operational efficiencies, and benchmark performance. Consequently, better managing the efficiency, productivity, and economics of long-term drug discovery through the use of performance metrics has become a top priority for outsourcing teams involved in clinical trials. The value of performance metrics is twofold: to measure and compare performance between service providers, and to analyze operational proficiencies that reveal elements to be adjusted during the conduct of the clinical trial.

One aspect of clinical research that offers a clear opportunity for the application of performance metrics is central laboratory services. A central laboratory is a network of people, capabilities, and facilities structured to process human specimens for diagnostic testing, generating information that is subsequently reported to participating clinicians and the organization sponsoring the research. The primary role of a central laboratory is to provide accurate, timely diagnostic testing data to support a clinical trial protocol.

Potential benefits of central lab metrics

Due to the diverse nature of clinical research, many companies have invested a substantial amount of time and resources to develop standard central laboratory performance metrics that are used to benchmark service providers. When the design and application of central laboratory performance metrics are done well, a number of benefits can be realized. These include:

  • Metrics can communicate and reinforce a clear set of standards for performance and provide measurement of performance against those standards.

  • Identification of areas for improvement and potential issues can be achieved in a timely manner, allowing for early corrective action.

  • Performance against key metrics can provide an objective source for feedback to the central laboratory and serve as a means of comparison between central laboratory service providers.

  • Measurement of overall progress of the clinical trial and forecasting of future trends can be accomplished with the use of selected metrics.

  • Financial and budgetary status measurement and forecasts can be provided through analysis of performance metrics.

  • The fact-based and objective information and feedback provided by well-designed metrics serve as a sound foundation for fair and consistent relationships between the sponsor client and central lab provider.

Considerations in applying metrics

Full realization of the benefits available through the use of central laboratory performance metrics requires thought and planning in their design, selection, and implementation. Some key considerations in successfully implementing and managing the overall process include:

  • Use of performance metrics is a powerful asset in managing the performance of central laboratory service providers, but it is not a panacea, nor a substitute for a robust process for relationship management. Having realistic expectations regarding the utility and impact of metrics on the overall relationship is essential.

  • The primary strategic goals driving the performance metrics process should be directly reflected in the design, implementation, and management of the metrics utilized. For example, if financial control is a primary goal of the process, appropriate metrics that provide insight into financial status should be included in the process.

  • A disciplined approach to the application of performance metrics is critical in keeping the process manageable and meaningful. The activities contained within the central laboratory process offer a high number of variables that can be measured and subsequently used as performance metrics. Unless discipline in selection is maintained, an overabundance of measurable parameters can easily be selected, creating a significant work burden both for the central lab (which must gather and arrange the data) and for the sponsor (which must read, analyze, and interpret the data). Selecting and managing performance metrics using the "vital few" approach greatly enhances the overall impact of a metrics strategy.

  • The performance metrics chosen should be relevant to the role of a central laboratory within the clinical trial process. Measuring study enrollment as a central laboratory metric, for example, is at best an indirect measure of lab performance; the lab has little impact on enrollment beyond providing specimen collection kits to the investigator site. Kit provision itself, however, is a relevant and measurable indicator of central lab performance and therefore a useful performance metric.

  • The performance metric should provide information and insight that is actionable by both the central lab and the sponsor. This is a key component of the "vital few" selection process noted above. Given currently available information technology, it is simple to track variables down to parts-per-million impact levels, but whether an error rate of 3 parts per million on a central lab test process is actionable in the sense of driving improvement is questionable, making such a measurement a less than ideal candidate for a performance metric.

  • Use caution in the degree to which data from performance metrics are used to forecast study progress, in both the operational and financial sense. The significant number of variables involved in the clinical trial process makes "straight line" forecasting a challenge; using metrics as indicators or confirmation of progress (rather than for absolute forecasting) is more suitable for the information most metrics provide.

  • The performance metrics should be applied consistently from design through implementation and use; the metrics used should periodically be reviewed for impact and utility—those rarely noted or used can be eliminated and replaced with more meaningful measures.

  • Maintain discipline in the use of performance metrics as part of an overall relationship management strategy and avoid using metrics as a punitive device or weapon.

Designing a scorecard for central laboratory services

The responsibility for developing a performance metrics scorecard is shared equally by the sponsor and the central laboratory participating in a clinical trial. When a critical, actionable, and measurable metric is identified and mutually agreed upon, process improvement is enabled. Recently, some companies have begun collecting actionable metrics as a means of facilitating rapid change through performance gap analysis while a study is underway. Although outsourcing groups have traditionally used metrics primarily to manage costs, recent reports indicate that their primary responsibility is now to increase supplier productivity.3 Selecting an appropriate set of performance metrics is therefore critical to both the sponsor and the central laboratory. As noted above, the use of metrics can be overdone: given the scope of an average global clinical trial in a challenging therapeutic area, decision makers can find it very tempting to track a large number of metrics, which leads to a disproportionate amount of time being spent managing the data as it accumulates. Clinical researchers should therefore adopt a "vital few" or "less is more" approach and collect only those metrics that are actionable or meaningful.

A combination of laboratory metrics, financial outcomes, and study team satisfaction ratings typically forms the key components of a comprehensive central laboratory services scorecard. These categories encompass the performance expectations of the various interested parties within the sponsor organization. Performance metrics are identified and categorized with the aim of driving process improvement; in practice, however, outcomes are analyzed more routinely than processes. A well-balanced scorecard should incorporate a combination of process-based metrics and outcome-based metrics.

While financial outcomes are bottom-line oriented, they are not particularly useful in driving rapid change, nor do they routinely impact productivity. The value of financial metrics is to provide information regarding the actual "burn rate" of the project budget in terms of service fees, pass-through costs, and other expenses.
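As an illustration of how such a burn-rate figure can be derived, the following minimal sketch (in Python) tallies spend to date against the approved budget; the field names and amounts are hypothetical, not drawn from any particular study.

```python
# Minimal sketch of a budget "burn rate" metric; all figures and
# category names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class BudgetStatus:
    approved_budget: float  # original approved project budget
    service_fees: float     # service fees invoiced to date
    pass_through: float     # pass-through costs (e.g., kit shipping)
    other_expenses: float   # any other expenses to date

    @property
    def spend_to_date(self) -> float:
        return self.service_fees + self.pass_through + self.other_expenses

    @property
    def percent_of_budget_spent(self) -> float:
        return 100.0 * self.spend_to_date / self.approved_budget


status = BudgetStatus(approved_budget=500_000,
                      service_fees=180_000,
                      pass_through=45_000,
                      other_expenses=12_000)
print(f"Burn rate: {status.percent_of_budget_spent:.1f}% of budget spent")
```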

Study team satisfaction ratings are becoming more common on scorecards because they offer easily quantifiable metrics that reveal trends in the sponsor–supplier relationship. Depending upon the methodology used to obtain customer feedback, a detailed view of functional areas can be highlighted to uncover specific opportunities to increase efficiency and improve outcomes. This information can be used both to drive short-term changes during the conduct of a study and to illuminate issues that require long-term process enhancements.
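One simple way to turn such feedback into a quantifiable metric is to aggregate ratings by functional area, as in the sketch below; the 1-to-5 rating scale and the area names are assumptions for illustration only.

```python
# Illustrative aggregation of study team satisfaction ratings by
# functional area; responses, areas, and the 1-5 scale are invented.
from collections import defaultdict
from statistics import mean

responses = [
    {"area": "Project management", "rating": 4},
    {"area": "Kit resupply",       "rating": 3},
    {"area": "Data reporting",     "rating": 5},
    {"area": "Kit resupply",       "rating": 2},
]

by_area = defaultdict(list)
for r in responses:
    by_area[r["area"]].append(r["rating"])

for area, ratings in by_area.items():
    print(f"{area}: mean rating {mean(ratings):.1f} (n={len(ratings)})")
```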

Categories of central laboratory performance metrics

Well-designed central laboratory performance metric categories essentially reflect the scope of a central laboratory's responsibilities within the clinical trial process. There are three general categories of scorecard metrics: Study Start, Study Management, and Study Close. Grouping performance metrics into these categories serves to differentiate between functional areas and to drive process improvements. The categories comprise fundamental processes and critical outcomes, many of which are project specific. In practice, grouping project-specific metrics under these categories is the first step toward identifying a set of common benchmarks to use across the central laboratory industry. Outcome-based metrics are considered by industry experts to be common to central laboratories, e.g., number of kits shipped, number of demographic holds, turnaround time (TAT) of specimen results, and time from last patient/last visit (LP/LV) to database lock. Since performance metrics must be actionable in order to drive change, they must focus on processes and standard operating procedures (SOPs), which vary from company to company.
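A scorecard built along these lines can be as simple as a grouping of named metrics under the three categories. The sketch below is a hypothetical skeleton that reuses the outcome examples mentioned above; it is not an industry-standard list, and the Study Start entries are plausible placeholders rather than prescribed metrics.

```python
# Hypothetical scorecard skeleton grouping metrics into the three
# categories discussed in the text; the lists are illustrative only.
scorecard = {
    "Study Start": [
        "number of change orders to the approved scope of work",
        "days to ship initial kit supplies to sites",
        "days to program and validate the study database",
    ],
    "Study Management": [
        "number of kits shipped",
        "number of demographic holds",
        "TAT of specimen results",
        "percent of data corrections",
    ],
    "Study Close": [
        "days from LP/LV to database lock",
        "days from database lock to final database transfer",
    ],
}

for category, metrics in scorecard.items():
    print(category)
    for metric in metrics:
        print(f"  - {metric}")
```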

Performance metrics that pinpoint operational processes can drive efficiencies and improve productivity. These process-based metrics uncover issues that can sometimes be resolved by modifying SOPs, increasing training, or allocating additional resources to meet study timelines and contractual obligations. For example, to measure improvement over time, a metric that measures how a patient specimen is handled is much more valuable than one that measures the outcome, i.e., the number of specimens handled.

Evaluating processes by measuring outcomes

Within each category there are specific processes and outcomes to be analyzed that reflect progress toward achieving study goals. The Study Start processes are arguably the most important ones supporting a clinical trial because of their direct impact on project timelines, which in turn significantly impact the overall project budget. Therefore, the activities surrounding communication, setting expectations, and establishing project timelines are critical. The number of change orders (CO) made to the original, approved scope of work (SOW) is a key indicator of whether the initial communications were well managed. Another example is adherence to the study timelines for getting the initial test supplies to the study sites, programming the study database, and validating the database. The outcomes associated with each of these processes are on-time completion of the task and the number of days required to complete it.
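As a simple illustration, the two Study Start outcomes just mentioned (on-time completion and days to complete) could be derived from planned and actual task dates as in the following sketch; the task names, dates, and assumed project start date are invented.

```python
# Sketch of the two Study Start outcomes described above; all task
# names and dates are hypothetical examples.
from datetime import date

tasks = [
    {"task": "Initial kit shipment to sites", "planned": date(2005, 3, 1),  "actual": date(2005, 2, 27)},
    {"task": "Study database programmed",     "planned": date(2005, 3, 15), "actual": date(2005, 3, 20)},
    {"task": "Study database validated",      "planned": date(2005, 4, 1),  "actual": date(2005, 4, 1)},
]

project_start = date(2005, 2, 1)  # assumed project start date
for t in tasks:
    days_to_complete = (t["actual"] - project_start).days
    on_time = t["actual"] <= t["planned"]
    print(f"{t['task']}: {days_to_complete} days, on time: {on_time}")
```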

Once a study has been initiated and sites are enrolling patients, Study Management performance metrics are tracked and analyzed for the duration of the study, which can run for months or years. Performance metrics collected during this phase have the greatest impact on driving process improvements, provided that the metrics selected are actionable and can positively impact study outcomes. For example, once sites receive their initial supplies, the processes that control resupply and the delivery of specimens to the laboratory become an ongoing priority for the project managers and coordinators assisting with materials and logistics. Shipping specimens to the central laboratory for testing in accordance with project-specific requirements is only the first step in a series of interrelated Study Management processes and outcomes. Other examples of well-known Study Management performance metrics include TAT of laboratory results, reporting of results to sites, percent of demographic holds, and percent of data corrections. The outcomes associated with these processes usually involve the time required to complete the task, although actual counts and percentages are also frequently tracked.
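A brief sketch of how three of these Study Management outcomes might be computed from specimen-level records follows; the records and field names are hypothetical and stand in for whatever the laboratory information system actually captures.

```python
# Illustrative calculation of mean TAT, percent of demographic holds,
# and percent of data corrections from hypothetical specimen records.
from datetime import datetime
from statistics import mean

specimens = [
    {"received": datetime(2005, 6, 1, 8),  "reported": datetime(2005, 6, 2, 10),
     "demographic_hold": False, "data_correction": False},
    {"received": datetime(2005, 6, 1, 9),  "reported": datetime(2005, 6, 3, 15),
     "demographic_hold": True,  "data_correction": False},
    {"received": datetime(2005, 6, 2, 11), "reported": datetime(2005, 6, 3, 9),
     "demographic_hold": False, "data_correction": True},
]

tat_hours = [(s["reported"] - s["received"]).total_seconds() / 3600 for s in specimens]
pct_holds = 100.0 * sum(s["demographic_hold"] for s in specimens) / len(specimens)
pct_corrections = 100.0 * sum(s["data_correction"] for s in specimens) / len(specimens)

print(f"Mean TAT: {mean(tat_hours):.1f} h")
print(f"Demographic holds: {pct_holds:.1f}%")
print(f"Data corrections: {pct_corrections:.1f}%")
```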

The completion of a study, in terms of central laboratory responsibilities, is usually defined as the period from LP/LV to final database transfer. Relatively few processes and outcomes are associated with Study Close performance metrics. The most common are the activities that occur between LP/LV and database lock, and between database lock and final database transfer. The outcome associated with each of these processes is either time-related or a count of the number of transfers that occurred.

Figure 1. Common central laboratory performance metrics by study phase.

Figure 1 indicates common central laboratory performance metrics by study phase. Some performance metrics such as financial measurements and customer satisfaction ratings are outcomes that transcend each of the three metric categories, as these outcomes can be linked to different processes in more than one category. Examples of financial outcome metrics include percent of total spend versus budget forecast and deviation from original project budget due to change orders. Customer satisfaction performance metrics can also be very general if the ratings measure subjective questions such as "comparison to other central labs"; however, specific feedback should also be obtained that highlights operational performance in one or more of the three study categories. An example of a customer satisfaction outcome is the number of comments made about a specific area of operations. Figure 2 indicates examples of outcomes used on central laboratory scorecards.

Figure 2. Examples of outcomes used on central laboratory scorecards.

Summary

A clearly defined set of central laboratory performance metrics is essential in selecting and managing a central laboratory service provider. An optimized set of metrics is a valuable asset in managing the sponsor–supplier relationship and can be used to drive efficiencies during the life of the trial by focusing on both outcomes and processes. Measurement of operational processes helps to pinpoint areas for improvement, which are in turn measured by study outcomes. These resulting improvements drive long-term efficiencies and have a direct positive impact on productivity and managing the cost of research and development. By following the "process versus outcome" model and designing and implementing relevant and meaningful central laboratory performance metrics, sponsor teams can develop well-balanced scorecards that accurately measure central laboratory performance.

Mark M. Engelhart is vice president, sales & marketing, Anthony J. Santicerma, MS, is director of strategic alliance and global marketing, and Jay E. Zinni, MBA, is manager, bids & contracts, all with Quest Diagnostics Clinical Trials, 1201 South Collegeville Road, Collegeville, PA 19426, (610) 454-6542, fax (610) 983-2120, email: Anthony.J.Santicerma@questdiagnostics.com

1. E. Pena, "Making Metrics Matter: The Changing Paradigm of R&D Metrics," PharmaVoice, 8–20 (March 2005).

2. L.D. Fitzsimons, "Stronger Together," R&D Directions, 11 (5) 36 (May 2005).

3. K.A. Getz, "Entering the Realm of Flexible Clinical Trials," Applied Clinical Trials, 14 (6) 44 (June 2005).
