Measures of Success

Applied Clinical Trials, April 1, 2005


The head of your company's clinical development department asks you to draft, within three weeks, a project plan for a multinational Phase III trial of 800 asthma subjects. The plan should be quite specific regarding the number of countries and study sites involved, the subject recruitment period, and the expected total cost of monitoring visits.

Table 1A. R&D macro benchmarks

You are in a considerable dilemma. You have been opposed to implementing a proper performance metrics system. What tools should you use now? Memory? Intuition? Best guesses? Will the development head be impressed with you at the conclusion of the project, should it ever reach one, if the time and money spent exceed your plan by as much as 100%?

Table 1B. Micro benchmarks: Choice of clinical project cycle times and quality project metrics

"Development cycle times," "performance metrics," and "benchmarking" are terms you remember well. Admittedly, you have read some topical articles only superficially. Being so extremely busy, and not really convinced that those publications applied strictly to your work situation, you did not pursue the idea of implementing a performance metrics system in your environment.

Figure 1. Variance in planned versus actual recruitment. Analysis based on N=9 studies initiated in 2002. The difference (days) between the plan as specified in the initial protocol outline and the actual recruitment period (first patient in to last patient out) was compared. The overall median was three days ahead of plan.

Despite, or because of, all the pressure on you, you are a solution-oriented manager. You quickly identify that the only way to address the challenge is to send urgent requests for proposals (RFPs) to five contract research organizations (CROs). Your interpretation of the proposals' operational plans will allow you to deliver the project plan on time. Your best guess at country numbers, site numbers, time needs, and resource needs depends on how many CROs respond in time, and on how robust their metrics databases and current feasibility surveys are.

Table 2. Operational capacity and productivity metrics

Suddenly, you awake. This was only a bad dream. Your team and all other operational departments did their homework a few years ago. You introduced a performance metrics system in all operational departments, at project level, and across all countries in which your company maintains development operations. You use metrics to track the efficiency of the cooperation with study sites and CROs. As a result, you can easily write a study plan in line with the above specifications.

Figure 2. Monitoring performance over a four-year period. The median number of enrolled patients for 10 countries is shown per clinical research associate (CRA) FTE and per FTE for all staff within the entire clinical operations group.

Reasons for a performance metrics system

Pharma industry associations and cooperating institutions like the Tufts Center for the Study of Drug Development and the Centre for Medicines Research International Ltd. (CMR) maintain sophisticated systems for tracking R&D timelines, costs, and productivity based on input from research-based pharma companies. They generate benchmarks for the development of new medicines, as well as for the relationship between R&D input and output in terms of new active substances (NAS) that obtain marketing authorization.

Table 3. Site-specific performance metrics

Disproportion of development time/cost and number of new product launches. Total development time from preclinical testing to marketing authorization increased from 8.8 to 13.9 years between the 1960s and the 1990s.1 Of 5000 screened compounds, only five reach Phase I of clinical development; ultimately, one NAS obtains a marketing license.2 The latest estimate puts the average R&D cost for one NAS reaching the market at $802 million, an almost six-fold increase between 1975 and 2000.3 This is also reflected in the dramatic rise in annual global pharmaceutical R&D expenditure, which more than doubled between 1990 and 2000, from about $20 billion to $43 billion. Disappointingly, the output in terms of annual NAS marketing authorizations reached an all-time low of 26 in 2003.4

Clinical development is the major cost driver. The duration of the clinical phases rose from 2.8 to 6.3 years between the 1960s and the 1990s.1 In 1997 and 1998, the mean duration of clinical development of a new chemical entity (NCE) stood at 5.4 years, down from a maximum of 6.9 years between 1990 and 1993. For biopharmaceuticals, the mean duration was 5.2 years, the highest average ever. Between 1995 and 1999, depending on the therapeutic class, an average of 3.4 years (AIDS antivirals) to 7.5 years (gastrointestinal drugs) was spent in the clinic. Despite the high attrition rate from discovery to first use in man, clinical development absorbed 37.8% of the total 1999 R&D expenditure.5

Figure 3A. Expected differences in total enrollment duration for three scenarios based on the company's actual metrics data. Anticipated enrollment duration was calculated from median enrollment rates (in patients/center/month) observed in previous trials in the respective therapeutic area. The country selection and site numbers are shown in Table 5. Left bar: scenarios with 128, 51, and 45 sites. Right bar: all scenarios with 128 sites, showing the effect on enrollment duration. Taken from Ref. 10.

Clinical trial performance tracking by all involved players is mandatory. The workload in clinical trials is ever increasing: global and more stringent clinical trial regulations, maximum demands on subject protection, documentation of processes, data quality and validity, system validation, and the pharmaceutical quality of the investigational medicinal product have all contributed heavily. Trial sponsors, their growing number of contractors, and investigational sites all bear equal shares in this joint endeavor. In view of this mounting complexity, it has become mandatory to track where resources in terms of people, money, and time are spent, and what impact various measures have on productivity and efficiency metrics over time.

Figure 3B. Estimated total monitoring visit costs for scenarios 1-3. The monitoring costs for the number of site visits were estimated for each scenario based on the country and site numbers given in Table 5, assuming a six-week monitoring visit interval, a six-month patient treatment period, and an average cost of €800 per monitoring visit. Taken from Ref. 10.

Performance metrics used in clinical development

Benchmarks for the large stages of research and development, known as R&D macro benchmarks, are illustrated in Table 1A.

Table 4. Outsourcing performance metrics

The causes of below-industry-standard performance, and the areas for improvement, can only be identified and addressed if all contributors to a new drug application (NDA) track the key performance metrics that they impact.

Operational departments such as clinical monitoring, data management, and clinical quality assurance (CQA) track capacity and productivity by selected use of metrics (Table 2). Clinical project management or clinical operations collects project metrics from every clinical trial conducted. A variety of so-called "micro benchmarks" are available for scrutiny as outlined in Table 1B.

Certain clinical project metrics are heavily influenced by country-specific regulations or conditions, e.g., regulatory authority or independent ethics committee (IEC) review times. Consequently, it is advisable to track such metrics at the country level. They may be displayed as multidimensional project metrics, underlining the interdependencies between project timeliness, cycle time, quality, and efficiency.6
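As a minimal illustration of country-level tracking, the Python sketch below computes median IEC review times per country. All input data are hypothetical and stand in for values that would normally be pulled from a clinical trial management system.

```python
from statistics import median

# Hypothetical IEC review times in days, grouped by country; in practice
# these would come from the clinical trial management system.
iec_review_days = {
    "Germany": [42, 55, 38, 61],
    "France": [70, 66, 85],
    "Poland": [35, 40, 33, 29],
}

# Track the metric at country level: a single global median would mask the
# country-specific regulatory differences described in the text.
for country, days in sorted(iec_review_days.items()):
    print(f"{country}: median IEC review time = {median(days):.0f} days")
```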

Table 5. Construction of three proposed trial scenarios based on actual past performance metrics

Study site metrics keep track of critical steps and/or elements at the source of clinical data generation. Both the sites and the study sponsor benefit from the information that can be obtained from the metrics shown in Table 3.

Multidimensional site metrics7 graphically demonstrate the correlations between single metrics, such as the number of subjects per site, the time spent on site selection per site, the number of queries per subject, and the monitoring time per enrolled subject. Using such approaches in multinational projects can be of enormous value for forecasting purposes in comparable future studies.
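A minimal sketch of the underlying calculation, assuming hypothetical per-site data: the Pearson correlation between two single site metrics can be computed directly with Python's standard library before any graphical display is attempted.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical per-site metrics; real values would come from the sponsor's
# performance metrics database.
queries_per_subject = [2.1, 4.5, 1.8, 3.9, 5.2, 2.7]
monitoring_hours_per_subject = [3.0, 5.5, 2.6, 4.8, 6.1, 3.4]

# Pearson correlation between two single site metrics; a value near 1 would
# suggest that query-heavy sites also consume more monitoring time.
r = correlation(queries_per_subject, monitoring_hours_per_subject)
print(f"Queries vs. monitoring time per subject: r = {r:.2f}")
```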

CROs, as highly important contributors to today's development output, use operational metrics that match those of their customers. Certain pharma companies make cycle time track records a prerequisite for contracting with a CRO8,9; both quality and productivity performance are of utmost relevance to the prospective customer. In addition, CROs use utilization (refer to Table 2) more consistently as a departmental metric than pharma does. The metric realization compares achieved to contracted revenue; other, more sophisticated calculations are used as well. High productivity and quality must translate into profitability if their development business is to remain viable.
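The following sketch illustrates the two ratios with hypothetical figures. The realization formula follows the definition in the text; the utilization definition (billable versus available hours) is an assumption based on common usage, as Table 2 is not reproduced here.

```python
# Hypothetical monthly figures for one CRO department. "Realization" follows
# the definition in the text (achieved versus contracted revenue);
# "utilization" is assumed here to mean billable versus available hours.
contracted_revenue = 250_000.0  # revenue agreed in the contract for the period
achieved_revenue = 232_500.0    # revenue actually recognized
available_hours = 1_600.0       # total working hours of the department
billable_hours = 1_248.0        # hours charged to client projects

realization = achieved_revenue / contracted_revenue  # 0.93
utilization = billable_hours / available_hours       # 0.78

print(f"Realization: {realization:.1%}")
print(f"Utilization: {utilization:.1%}")
```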

Outsourcing performance metrics, that is, indicators of the efficiency and productivity of the cooperation between sponsors and CROs, seem to be the logical missing link in the clinical development production chain these days. Selected time periods and other potentially valuable indicator metrics10 are displayed in Table 4.

How to launch and maintain a metrics system

Senior management must demand and drive improvement of clinical development productivity, and make it transparent at all hierarchy levels. Current processes, tools, resource utilization, and clinical trial tracking systems have to be evaluated for effectiveness. Gross performance deviations from industry standards should be addressed as the first priority.

Operational task forces, led by either an independent in-house manager or an external consultant, develop the principles of the new system. The information technology (IT) department has to be represented in the task force.

Once the direction is identified and agreed to, IT should receive the funds to set up a new performance metrics database. The data for this database usually derive from sources originally set up and maintained for other purposes (the clinical trial management system, the human resource database, the project finance tracking system), which avoids duplication of data entry efforts. Headcount is allocated to oversee the generation of management reports, and to analyze, interpret, and summarize the findings across the organization. This staff also assumes responsibility for benchmarking internal progress against best industry practices.

The various department heads receive reports and share the key conclusions with their staff. Achievements form the basis for periodic assessments of further improvement potential at the departmental level. Department heads and their staff agree on, and work toward, new performance targets. Functional departments assume ownership of "their" performance metrics and continually strive for optimization without fear of unreasonable pressure or personal punishment. The development staff receives this information in the context of the "big picture" of the company's R&D efforts, including the time spent on go/no-go decisions at the management level between cycle times, and the fundamentals of the overall business performance, in a "digestible" format.

Collected project metrics are drawn upon as new development programs appear on the horizon. As a principle, only experience from the same indication and a similar country distribution is used for reference, and it has to be substantiated by newly set-up feasibility surveys.

Internal achievements and external benchmarks

The macro-cycle times of the past decade might look discouraging if not analyzed for confounding factors. From 1994 to 1999, there appeared to be a decrease in drug development time from more than 12 to a little less than 10 years,5,11 driven by decreases from first submission to first launch and from first patient dose to first pivotal dose.5 This trend was, however, of a temporary nature: the latest analyses4 refer to a 12-year development time again.

These data do not account for the time spent on go/no-go decision-making, which is outside the scope of any operational productivity effort. Nor do these evaluations account for the extent of regulatory requirements and related documentation workload, which have increased dramatically since the early nineties. The shortening of the authority review cycle time is probably a product of both strengthened productivity at the regulators' level and higher dossier standards based on studies conforming fully to ICH GCP and other applicable regulations.

From 1996 to 1999, the cycle time from protocol approved to first patient first visit (FPFV) increased by 15%.5 This may be a result of inefficient start-up processes, a slow take-off at the study sites, or a more thorough and hence longer review of clinical trial applications at the authority and/or IEC level. The cycle times from last patient last visit (LPLV) to database lock (DB lock) and from DB lock to key statistical analysis decreased to 110 and 50 days, respectively; they can probably be improved further. The entire subject enrollment period (300 days) and the time from key statistical analysis to final study report (150 days) remained essentially unchanged.
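Such cycle times are simple date differences between tracked milestones. A minimal sketch with hypothetical milestone dates, chosen so the results match the figures quoted above:

```python
from datetime import date

# Hypothetical milestone dates for one study; in practice these come from
# the clinical trial management system.
milestones = {
    "LPLV": date(2004, 3, 1),       # last patient last visit
    "DB lock": date(2004, 6, 19),   # database lock
    "Key stats": date(2004, 8, 8),  # key statistical analysis
}

# Cycle times in days between consecutive milestones, matching the metrics
# quoted in the text (LPLV to DB lock, DB lock to key statistical analysis).
lplv_to_lock = (milestones["DB lock"] - milestones["LPLV"]).days
lock_to_stats = (milestones["Key stats"] - milestones["DB lock"]).days

print(f"LPLV to DB lock:              {lplv_to_lock} days")   # 110
print(f"DB lock to key stat analysis: {lock_to_stats} days")  # 50
```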

Implementation of a performance metrics system applying prospective metric targets at the individual company level can lead to major improvements.8 In our experience, the time targets from protocol approved to Essential Document package approved, and from package approval to FPFV, were reduced by 10% and 30%, respectively, one year after the launch of the performance metrics system, independent of the clinical trial indication.10 Subject enrollment was ahead of or within plan in six of the nine studies analyzed (Figure 1), compared to 86% of studies being behind plan industry-wide.12
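A minimal sketch of the analysis behind Figure 1, using invented planned and actual recruitment durations chosen so that the summary statistics match those reported (median three days ahead of plan, six of nine studies ahead of or within plan):

```python
from statistics import median

# Hypothetical (planned, actual) recruitment durations in days for nine
# studies, mimicking the analysis behind Figure 1.
recruitment = [(180, 174), (210, 215), (150, 144), (240, 232), (200, 203),
               (165, 158), (300, 297), (120, 126), (190, 187)]

# Negative variance = ahead of plan, positive = behind plan.
variances = [actual - planned for planned, actual in recruitment]
ahead_or_on_plan = sum(1 for v in variances if v <= 0)

print(f"Median variance: {median(variances)} days (negative = ahead of plan)")
print(f"Studies ahead of or within plan: {ahead_or_on_plan} of {len(variances)}")
```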

An internal performance metrics system applied via a three-year goal-setting program produced further achievements.10 They included halving enrollment time, greatly increasing planning accuracy with respect to time to FPFV and enrollment time, and raising the proportion of active sites from 65% to 88% of all initiated sites. Patient enrollment rates (patients/site/month) also matched or exceeded industry standards in the majority of countries.

Reaching specific metrics targets may come at the cost of other, related aspects and require corrective action. For example, a monitoring team may have managed to carry out an average of 10 site visits per month, but the median time to complete a monitoring visit report now exceeds 10 days, and internal audits provide evidence of decreasing quality. This should be the time to consider more training or system improvements, as well as more feasible metrics targets. There seems to be a maximum number of CRF pages and, hence, of subjects that can be monitored by one CRA per year, as illustrated in Figure 2. For obvious reasons, this metric depends on the clinical trial indication and its pertinent treatment duration.10

The value of metrics for creating new study plans

Metrics collected over three years in the same indication and a comparable geography allow the construction of three proposed scenarios for the hypothetical multinational trial in 800 asthma patients (Table 5), with significant variance in enrollment duration and total monitoring visit costs (Figures 3A and 3B). Note how the key drivers, enrollment rate per site and total number of subjects per site, allow either a lower total site number with comparable enrollment time but considerably lower monitoring visit needs, or a greatly decreased enrollment time with an equal number of sites. If only economics is considered, and no strategic regulatory reasons apply, the likely choices are evident.
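The arithmetic behind such scenarios is straightforward. The sketch below uses hypothetical site numbers and enrollment rates together with the assumptions stated for Figures 3A and 3B; it illustrates the method, not the actual scenarios of Table 5.

```python
# A minimal scenario-planning sketch in the spirit of Table 5 and Figures 3A
# and 3B. The site numbers and enrollment rates below are hypothetical; the
# 800-subject target, six-week monitoring interval, six-month treatment
# period, and EUR 800 cost per visit are taken from the text.
TOTAL_SUBJECTS = 800
COST_PER_VISIT_EUR = 800
VISIT_INTERVAL_WEEKS = 6
TREATMENT_MONTHS = 6

scenarios = {  # name: (number of sites, median patients/site/month)
    "Scenario 1": (128, 0.35),
    "Scenario 2": (51, 0.90),
    "Scenario 3": (45, 1.10),
}

for name, (sites, rate) in scenarios.items():
    # Enrollment duration: total subjects divided by the aggregate rate.
    enrollment_months = TOTAL_SUBJECTS / (sites * rate)
    # Simplifying assumption: every site is monitored from study start until
    # the last patient completes the six-month treatment.
    active_weeks = (enrollment_months + TREATMENT_MONTHS) * 52 / 12
    visits_per_site = active_weeks / VISIT_INTERVAL_WEEKS
    total_cost = sites * visits_per_site * COST_PER_VISIT_EUR
    print(f"{name}: {enrollment_months:5.1f} months enrollment, "
          f"~EUR {total_cost:,.0f} in monitoring visits")
```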

This example demonstrates one of the key values of such data: a huge amount of money and time can be saved if the corporate wisdom captured in properly maintained performance metrics systems is put to use for more effective future development.

Key success factors and conclusions

The data available from decades of benchmarking in clinical development provide clear evidence of the excessive absorption of time and cost before marketing authorization. Setting goals of strengthened productivity while balancing capacity is not only legitimate but a must in view of three factors: natural resource limitations, the economic pressures on the health care market at large, and the unmet therapeutic needs that require innovation which remains affordable. All players, from operational departments and investigational sites to clinical project leaders, have a role in streamlining processes and making drug development more cost-effective.

Out of the numerous performance metrics, it is advisable to select and track five to six key performance indicators at the department or project level that will change behavior and drive performance. The right time points for collection, whether monthly, quarterly, annually, or at the completion of tasks, are crucial. Past performance should set the stage for the new performance metrics targets, which must be demanding but achievable. Outperforming on one metric target might have a negative impact on others; consequently, interdependencies have to be observed and acted on. Metrics should not be changed before sufficient data supporting sensible conclusions are available. Appropriate comparisons across therapeutic areas, projects, and countries are a prerequisite for meaningful decisions on future targets or trial programs. Joining large benchmarking studies is advisable despite the associated costs and effort.

Performance metrics systems can serve as a trigger for change management. Their costs are justified if the system provides value in terms of shorter cycle times and improved efficiency and quality. The disappointments and frustrations of measuring clinical development productivity in the early days, caused by insufficient system support, duplicate records, and manual tracking, can now be avoided. With suitable software, a thorough understanding of the underlying business processes, and a commitment to continuous improvement, these challenges can be met.

Implementing a performance metrics system will be of great value now and in the future. Focusing on performance indicators that drive change and positively impact productivity, while remaining in full regulatory compliance, is an unwavering principle. The sum of all efforts at all levels across companies will eventually reduce cycle times in clinical development and bring innovation sooner, and on a larger scale, to those who need it most: the patients.

References

1. K.I. Kaitin, "Drug Development Timelines Today and Tomorrow—The Challenge of Pharmaceutical R&D," Presented at the ECPM (European Center of Pharmaceutical Medicine) Course, Session 1, 2 October 2001.

2. Pharmaceutical Research and Manufacturers of America (PhRMA), Pharmaceutical Industry Profile 2004 (Washington, DC: PhRMA, 2004).

3. J.A. DiMasi, R.W. Hansen, H.G. Grabowski, "The Price of Innovation: New Estimates of Drug Development Costs," Journal of Health Economics, 22, 151-185 (2003).

4. Centre for Medicines Research International Ltd., "Innovation on the Wane?"; news downloaded from www.cmr.org on 9 August 2004.

5. S. Walker, "Trends in Global Drug Development," Presented at the ECPM (European Center of Pharmaceutical Medicine) Course, Session 1, 2 October 2001.

6. D.S. Zuckerman and M.P. Cole, "Taking the Pulse of Pharma-CRO Relationships: The Searle-CRO Metrics System," Presented at the Pharmaceutical Outsourcing Management Association Annual Meeting, April 1999.

7. R. Davie, "Building a Successful Relationship with Your Customers—A CRO Perspective," Presented at IQPC's 5th Conference in the Clinical Trial Series, Brussels, 25 March 2004.

8. D.L. Anderson, "A Guide to Patient Recruitment. Today's Best Practices and Proven Strategies," CenterWatch, 2001.

9. T.J. Hill, "Can CROs Measure Up to Their Claims?" Scrip Magazine, 47-49 (March 2002).

10. J. Schenk and A.K. Hajos, "Better Use of Metrics within Clinical Research," Focus Session at the 11th Applied Clinical Trials European Summit, Munich, 13 May 2004.

11. Tufts Center for the Study of Drug Development, Impact Report, July 1999, 1.

12. M.J. Lamberti, ed., An Industry in Evolution, 4th ed. (Boston, MA: Thomson CenterWatch, 2003).

Acknowledgement

The authors would like to thank Steve Demas, ALTANA Pharma Inc., Canada for helpful suggestions and proofreading of the manuscript.

Dr. med. Johanna Schenk,*FFPM, is senior partner and managing director, PharmaProjekthaus GmbH & Co. KG, Altenhoeferallee 3, 60438 Frankfurt am Main, Germany, +49 69 951172-10, fax +49 69 951172-11, email: johanna.schenk@pharmaprojekthaus.com. Antal K. Hajos is director clinical development/clinical operations, ALTANA Pharma AG, Byk-Gulden-Strasse 2, 78467 Konstanz, Germany.
