Standardized Metrics for Better Risk Management: The Right Data at the Right Time


Just because information can be gathered and shared more quickly among stakeholders does not mean that the information can identify risk. This article describes the need for pharma to adopt fully vetted, standardized operational-level time, cost, and quality performance metrics as tools for tracking and predicting performance.

The proliferation of cloud-based technologies has made it easier for stakeholders to collect and share performance data that flow into data analytic tools. To access these data, sponsors and contract research organizations (CROs) have invested heavily in electronic data capture (EDC), eSource, and other solutions since data from these systems can be aggregated into a single database to run data analytic reports. Metrics found in these reports provide the foundation upon which emerging clinical trial strategies, namely risk-based management (RBM) and quality by design, can be implemented, with an eye toward better performance. In keeping with this effort, organizations such as TransCelerate1 and Metrics Champion Consortium (MCC)2 have been developing proactive methods and tools around risk assessment and management to improve clinical trial quality while increasing patient safety.

But just because information can be gathered and shared more quickly among stakeholders does not mean that the data contained in the reports, or the resulting metrics, can identify risk. And are those metrics actionable? That is, can they help stakeholders decide when to take action and when simply to monitor? Specifically, when organizations attempt to integrate data across studies, from multiple vendors and systems, they discover that some data fields are not defined consistently. Moreover, if study teams are utilizing customized metrics, it becomes even more difficult to determine which data to use, undermining the validity of the analysis.

A simple metaphor offers perspective. Suppose two families in identical cars are about to drive through the desert on a summer road trip to the same destination. Before embarking on the trip, each family makes preparations to address the key performance question: What is the likelihood of making it across the desert without mishap?

Family A researches how to prepare, and downloads a list from a highly rated automobile travel club website detailing items to consider before attempting a desert driving trip. This family follows the advice, and stocks up on items from the auto club list, such as food, water, a spare tire, a full gas tank, a map, and mobile phone chargers. They also have the car checked by a mechanic to minimize the risk of major problems. Family B creates its own checklist and addresses the obvious items such as food, water, and sufficient gasoline, but it does not occur to them to have the car checked by a mechanic or to follow other safety suggestions. Of these two approaches, which is the better way to prepare for the trip and achieve the desired outcome: completing the trip without mishap?

For clinical trials, this metaphor is all about proper planning and risk mitigation: determining what needs to be measured, and when to take action. For every measure that is defined, what is the key performance question it is trying to answer?

This article describes the need for the industry to adopt fully vetted, standardized operational-level time, cost, and quality performance metrics as critical tools for tracking and predicting performance. Standardized performance metrics have standard definitions of key terms and study milestones, along with performance targets, to ensure that metrics are measuring the right factors in the right way. Driven by competitive and regulatory pressures,3,4 and the notion that “you can’t improve what you don’t measure,” the goal of standardized metrics is to plan and predict performance by identifying known risks, followed by a proactive stance on risk management, including upfront assessment and mitigation and ongoing risk monitoring.
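To make this concrete, here is a minimal sketch, in Python, of what a standardized cycle-time metric definition might look like: the start and end milestones are named explicitly, and a performance target travels with the definition so every study measures the same interval the same way. The metric name, milestone labels, and 90-day target are illustrative assumptions, not published MCC definitions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CycleTimeMetric:
    """A standardized cycle-time metric: named milestones plus a target."""
    name: str
    start_milestone: str   # standardized key term, e.g., "final protocol approved"
    end_milestone: str     # standardized key term, e.g., "site activation date"
    target_days: int       # performance target shared across studies

    def measure(self, start: date, end: date) -> int:
        """Observed cycle time in days between the two milestones."""
        return (end - start).days

    def meets_target(self, start: date, end: date) -> bool:
        return self.measure(start, end) <= self.target_days

# Hypothetical usage: the same definition is applied to every study,
# enabling apples-to-apples comparison of current versus past performance.
site_activation = CycleTimeMetric(
    name="Time to site activation",
    start_milestone="final protocol approved",
    end_milestone="site activation date",
    target_days=90,
)
print(site_activation.meets_target(date(2016, 1, 4), date(2016, 3, 18)))  # True (74 days)
```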

 

Becoming strategic


More than 15 years ago, the Tufts Center for the Study of Drug Development (CSDD) reported on the importance of using standardized performance metrics to evaluate clinical development across the industry.5 Embracing this concept, industry leaders created the Metrics Champion Consortium (MCC) several years later to bring stakeholders together to define standardized performance metrics that organizations could use to drive process improvement. MCC members have established clear definitions of the performance metrics, including key terms and data elements, such as site activation date and database lock date (Table 1). Defining key terms is essential for apples-to-apples comparisons of current versus past performance. In addition, organizations report other benefits, such as establishing expectations and facilitating adoption of best practices (Table 2).


The clinical trials industry is a latecomer to standardized definitions for performance metrics, but the sharp focus on improving productivity while reining in costs is now driving demand for detailed analytics.6 Yet simply collecting analytics is not enough. Metrics need to be actionable, providing sufficient information for users to make decisions, such as whether to add resources or to conduct a root cause analysis to determine how to fix an issue. When possible, metrics should be leading indicators, providing results that help identify opportunities to influence the direction of the study.7 For example, if a number of sites have enrolled only one or two subjects months after initiation, this may be a leading indicator of factors that could cause future delays.
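As a rough sketch of how such a leading indicator could be operationalized, the example below flags slow-enrolling sites for follow-up; the site data, three-month window, and enrollment threshold are hypothetical choices for illustration only.

```python
from datetime import date

# Hypothetical enrollment snapshot: site ID -> (initiation date, subjects enrolled)
sites = {
    "site-101": (date(2016, 1, 15), 12),
    "site-102": (date(2016, 1, 20), 1),
    "site-103": (date(2016, 2, 1), 2),
}

def flag_slow_sites(sites, as_of, min_months=3, min_enrolled=3):
    """Leading indicator: sites initiated months ago but barely enrolling."""
    flagged = []
    for site_id, (initiated, enrolled) in sites.items():
        months_active = (as_of - initiated).days / 30.4  # approximate months
        if months_active >= min_months and enrolled < min_enrolled:
            flagged.append(site_id)
    return flagged

print(flag_slow_sites(sites, as_of=date(2016, 6, 1)))  # ['site-102', 'site-103']
```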

An insightful article by Rick Piazza notes that reports generated by analytic tools are informative in that they indicate the overall status of a project, and the metrics they generate may focus attention on outliers or trends at the study level.6 Rarely, however, do these reports provide enough actionable information to make an impact at the organizational level.6 To be actionable, operational metrics should be data-driven, standardized across studies, indications, and therapeutic areas, and timely.

Closing the gap between collecting information from disparate systems and making that information actionable is the purpose of MCC and research conducted by industry consultant Margaret Fay. She explains, “Well-defined metrics form the foundation for a continuous feedback loop known as ‘Plan, Do, Check, Act,’ an established business framework dating back to the 1950s.8 It can be applied to the formation of risk management and mitigation plans to limit risk upfront in clinical trials, instead of reacting after the fact.” (Figure 1)

This effort represents a paradigm shift toward statistical modeling that uses built-in risk indicators to trigger action.9 This starts with identifying known risks upfront based on past performance, knowing which key performance questions need to be answered, not using a metric in isolation,10 and making certain that enough usable data are collected to support a proactive analysis.
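The following sketch illustrates the trigger idea in miniature: each risk indicator carries a threshold set during planning (the “Plan” step), observed values are compared against it (the “Check” step), and a breach triggers follow-up action (the “Act” step). The indicator names, values, thresholds, and actions are invented for illustration, not drawn from the MCC toolset.

```python
# Hypothetical key risk indicators with thresholds set during planning ("Plan").
# Each tuple: (indicator name, observed value, threshold, action if breached)
indicators = [
    ("unresolved queries per site", 38.0, 25.0, "schedule targeted data review"),
    ("days from visit to data entry", 6.2, 10.0, "contact site about entry lag"),
    ("protocol deviations per subject", 0.9, 0.5, "initiate root cause analysis"),
]

# "Check": compare observed values against thresholds; "Act": trigger follow-up.
for name, observed, threshold, action in indicators:
    if observed > threshold:
        print(f"TRIGGER: {name} = {observed} (threshold {threshold}) -> {action}")
    else:
        print(f"ok: {name} = {observed}")
```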

Taking this approach helps stakeholders perform surveillance to assess the likelihood and severity of potential problems. Consider the two families driving across the desert. Does each family’s car provide information about whether a potential risk is becoming an issue? If yes, do they know how to interpret the data and when and how to take action?

 

Too much data

When performing risk assessment, the issue of collecting “enough data” is critical, but collecting too much data has emerged as problematic as technology adoption accelerates.11 A recent survey of technology adoption suggests that electronic solutions are in heavy use, with EDC topping the list.12

With reliable technology, collecting data is far easier than with the paper-driven methods of yesteryear, but this has resulted in excessive data flowing into analytic tools, much of it irrelevant for conducting surveillance. Applying this situation to the drive-through-the-desert metaphor, collecting data on the number of fast-food outlets along the route, or the ability to receive satellite radio signals, while interesting, is not meaningful for achieving the goal of making it across the desert without mishap. Similarly, the clinical trials industry continues to collect the same data for every trial, mostly in check-box fashion, disregarding their relevance to the risk and performance areas of the study.


This morass of information weighs down the risk assessment process, much as it does in other industries. In 2001, Michael Hammer, a management consultant, confronted this issue13 by homing in on the importance of defining, measuring, and improving processes. He reported that a company’s measurement systems typically deliver a blizzard of meaningless data (Table 3). Keith Dorricott made a similar finding, reporting that only a small number of items, the key performance indicators, need to be measured.14

According to a study conducted by the Tufts CSDD, 24.7% of procedures performed in Phase III protocols and 17.7% of those in Phase II protocols have been deemed “non-core,” resulting in the collection of data that do not support primary endpoints, but rather supplemental secondary, tertiary, and exploratory ones.15 The notion holds true in the performance-metric arena as well, as many organizations collect and report numerous time-related performance metrics, but few quality metrics.

A published Medtronic case study depicts the value of limiting the amount of data collected to what is needed to support primary and secondary endpoints.16 Medtronic was interested in accelerating closeout of a study with 1,500 subjects conducted at 45 sites. Given Medtronic’s monitoring methodologies at that time, Margaret Fay, charged with overseeing the clinical trial, estimated it would cost in excess of $21 million, on a lengthy timeline, to monitor the data and address risk factors such as substantial numbers of unresolved queries. Using a risk-based “Plan, Do, Check, Act” model, the reviewers reduced non-core data elements by 1,360 data points (42%), and monitoring efforts focused on 1,556 critical data elements essential for a regulatory filing. Protocol optimization, risk identification, and analysis of case report form data fields resulted in a $19 million cost avoidance for the pivotal trial, with savings linked to fewer on-site visits, translating into reduced travel costs and resource demand.

As this case study illustrates, reducing the volume of data collected and defining performance metrics to monitor performance are pivotal to improving processes, lowering costs, and providing the groundwork for risk-based management.

 

Foundation for risk management and assessment

Competitive and regulatory pressures are pushing risk and compliance to the forefront of operations, forcing stakeholders to expand their use of metrics to benchmark performance. In 2013, both the European Medicines Agency (EMA) and the Food and Drug Administration (FDA) released documents endorsing greater acceptance of risk-based approaches to monitoring, starting from the beginning of a trial.3,4 The EMA Reflection Paper states that the identification of priorities and potential risks should start at a very early stage, as part of the basic trial design process. Similarly, the FDA guidance notes that sponsors should be prospective about identifying critical data and processes, and about understanding the risks that could affect data collection and the performance of critical processes (Table 4).


Both regulatory documents comment that the degree of risk is predictable and, therefore, should be anticipated. Resources should be devoted to mitigating those risks to better protect the well-being of study volunteers.

Consultant Fay concurs that there is a known degree of potential risk. “There are things we encounter in every trial, namely issues related to informed consent, site performance, compliance, and time to data entry. Other factors are unanticipated, and the key is to design a risk management plan that addresses risk indicators as they arise over the life cycle of the study. The idea is to identify and prevent likely sources of risk that are critical, particularly the ones that could sideline the research,” she remarks. For example, if a site contracted to enroll 10 patients has not enrolled a single patient after three months, while other sites are on track, something is clearly wrong. Did the site lack the correct study population? Was the site unprepared to perform the protocol?

A formalized approach to risk management and assessment aligns with processes developed by MCC. The industry group has worked with sponsors, CROs, central labs, and electrocardiogram and imaging core labs to define an array of performance and operational metrics that serve as the underpinning of risk assessment, mitigation, and management planning. Specifically, MCC has established a peer-vetted set of standardized performance metrics (time, cost, and quality measures) that track performance throughout study start-up, conduct, and close-out. This approach can lead to industry benchmarks against which organizations can compare their performance.

MCC proposes starting early with the following (a sketch of the kind of risk register these steps produce follows the list):

  • Assessing protocol risks during protocol development to mitigate protocol design-related risks

  • Conducting a risk assessment of the study plans and near-final protocol prior to study conduct to identify risks, and to mitigate and/or assign appropriate levels of resources to high-priority risks

  • Establishing plans for responding to risks, and using results to continuously improve the quality of future studies (Figure 2)
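A minimal sketch of the kind of risk register that could come out of such an early assessment appears below. The fields, the 1-5 scoring scale, and the sample risks are assumptions for illustration, not the MCC Risk Assessment & Mitigation Management Tool itself.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One row of a study risk register (illustrative fields only)."""
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (could sideline the research)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical risks identified against the near-final protocol
register = [
    Risk("complex eligibility criteria slow enrollment", 4, 4,
         "pre-screening logs; re-check site feasibility"),
    Risk("informed-consent errors at newly activated sites", 3, 5,
         "early consent review after first subject enrolled"),
    Risk("sample shipping delays from satellite labs", 2, 2,
         "monitor via shipment metric only"),
]

# Resource the high-priority risks first; low scores may simply be monitored.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description} -> {risk.mitigation}")
```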

 

 

Moving toward a risk-based approach

In considering why performance metrics should be standardized, Fay observes, “As technology has moved forward, everyone has been looking at dashboards to spot information such as delayed enrollment, inconsistent data compared to other sites, and issues of non-compliance. But this information alone is hardly sufficient to help stakeholders be proactive about planning for mitigation and resolution. It’s more effective to establish a process to identify the drivers of performance for each study, and measure performance in a standardized way,” she explains. This predictive methodology is a major departure from the traditional method of checking all the same boxes, study after study, without regard to the relationship of those boxes to a particular study, or how they work together to flag and mitigate risk. Specifically, all of the drivers of performance need to be identified and weighed for their contribution to performance.9
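One way to read “identified and weighed” is as a weighted composite score across the drivers of performance. The sketch below assumes invented drivers, normalized values, and weights purely for illustration; it is not a published scoring method.

```python
# Hypothetical drivers of performance for one study, each normalized to 0..1
# (1.0 = on plan), with weights reflecting their judged contribution to risk.
drivers = {
    "enrollment vs. plan":        (0.60, 0.40),
    "data entry timeliness":      (0.85, 0.25),
    "query resolution rate":      (0.70, 0.20),
    "protocol deviation control": (0.95, 0.15),
}

# Weighted composite: closer to 1.0 means the study is tracking to plan.
composite = sum(value * weight for value, weight in drivers.values())
total_weight = sum(weight for _, weight in drivers.values())
print(f"composite performance score: {composite / total_weight:.2f}")
```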
 
Much like the two families hoping for a safe trip through the desert, study teams must anticipate what could happen, assign values to the likelihood of each risk, and decide whether to mitigate or simply monitor those risks. They need the right data, not too much data.

With regulatory pressures for improved monitoring and better risk assessment, stakeholders are scrambling to comply and are doing so by expanding their use of technology. Fortunately, electronic solutions are facilitating the flow of data and performance metrics into analytic reporting tools that help answer important questions about trial risk, study progression, and vendor and site performance. The industry is looking to standardization to optimize efficiencies in clinical trial management while making better use of resources. The ability to measure these changes and take early action goes to the heart of where the industry is heading.
 

Linda Sullivan is CEO and Co-Founder of Metrics Champion Consortium; email: lsullivan@metricschampion.org

References

  1. Meeker-O’Connell A, Borda MM, Little JA, Sam LM. Enhancing quality and efficiency in clinical development through a clinical QMS conceptual framework: Concept paper vision and outline. Therapeutic Innovation & Regulatory Science. 2015;49(5):615-22. Available at: http://dij.sagepub.com/content/49/5/615.full.pdf+html?ijkey=jPbPW3xr0j2Xw&keytype=ref&siteid=spdij. Accessed May 4, 2016.
  2. MCC Risk Assessment & Mitigation Management Tool: Facilitating a structured approach to risk management. Metrics Champion Consortium. 2015.
  3. European Medicines Agency. Reflection Paper on risk based quality management in clinical trials. November 2013. Available at: http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/11/WC500155491.pdf. Accessed November 28, 2015.
  4. Guidance for Industry: Oversight of clinical investigations - A risk-based approach to monitoring. Food and Drug Administration. August 2013. Available at: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM269919.pdf. Accessed November 28, 2015.
  5. Planning, independence, feedback keep global R&D projects on track. Tufts Center for the Study of Drug Development. Impact Report. September 1999. Vol.1.
  6. Piazza R. Dequantify yourself. Are all those system metrics your friend or foe? Contract Pharma. November/December 2013. Available at: http://aws-mdsol-corporate-website-prod.s3.amazonaws.com/Contract-Pharma_201311.pdf. Accessed November 20, 2015.
  7. Howley MJ, Malamis PB. Quality drivers in clinical trials. Applied Clinical Trials. July 14, 2015. Available at: http://www.appliedclinicaltrialsonline.com/quality-drivers-clinical-trial-conduct-0?pageID=4. Accessed May 5, 2016.
  8. Arveson P. The Deming Cycle. Balanced Scorecard Institute. 1998. Available at: https://balancedscorecard.org/Resources/Articles-White-Papers/The-Deming-Cycle. Accessed December 2, 2015.
  9. Fay MF, Eberhart C, Hinkley T, Blanchford M, Stevens E. A structured approach to implementing a risk-based monitoring model for trial conduct. Applied Clinical Trials. December 15, 2014. Available at: http://www.appliedclinicaltrialsonline.com/structured-approach-implementing-risk-based-monitoring-model-trial-conduct?pageID=1. Accessed November 27, 2015.
  10. Howley MJ, Malamis PB. Clinical trial performance measures you can use (…And believe). Outsourced Pharma. October 10, 2013. Available at: http://www.outsourcedpharma.com/doc/clinical-trial-performance-measures-you-can-use-and-believe-0001. Accessed December 2, 2015.
  11. Taylor NP. The clinical trial technology sector could top $5B by 2018, report says. FierceBiotechIT. January 26, 2014. Available at: http://www.fiercebiotechit.com/story/clinical-trial-technology-sector-tipped-top-5b-2018/2014-01-26. Accessed November 23, 2015.
  12. Lamberti MJ, Kush R, Kubick W, Henderson C, Hinkson B, Kamenji P, et al. An examination of eClinical technology usage and CDISC standards adoption. Therapeutic Innovation & Regulatory Science. 2015;49(6):869-76.
  13. Hammer M. The Agenda: What every business must do to dominate the decade. Crown Business. 2001.
  14. Dorricott K. Using metrics to direct performance improvement efforts in clinical trial management. The Monitor. August 2012. pp. 9-13.
  15. Getz KA, Stergiopoulos S, Marlborough M, Whitehall J, Curran M, Kaitin KI. Quantifying the magnitude and cost of collecting extraneous protocol data. American Journal of Therapeutics. 2015;22(2):117-124.
  16. Alsumidaie M. Conquering RBM: A story on Medtronic's risk based management methodology. Applied Clinical Trials. October 14, 2014. Available at: http://www.appliedclinicaltrialsonline.com/conquering-rbm-story-medtronics-risk-based-management-methodology. Accessed November 30, 2015.