MCC has 100+ standardized metrics spanning timeliness, cycle time, quality, efficiency, cost and risk. This month, let’s look at a quality metric that’s useful for tracking both protocol and site performance: subject retention percentage.
Why this metric is important: Keeping track of subject retention is, of course, critical to completing a study; enough subjects have to complete the protocol to achieve statistical significance. However, tracking retention percentage can also help to identify problems with the protocol or with specific sites. For instance, a high dropout percentage across all sites can point to onerous visit schedules, onerous procedures or concomitant medication problems. Retention problems at only certain sites can imply site-specific staff or training issues or even improper screening or randomization.
Definition: The retention metric is calculated as the percent of enrolled subjects who remain in the study and did not voluntarily withdraw. Involuntary withdrawals and discontinuations are not counted as withdrawals for the purposes of this metric. The metric can be calculated at the site, country, region and study level, and can be rolled up to the indication, therapeutic area or portfolio level.
How to calculate this metric: The formula is simply the (number of subjects enrolled) minus the (number of subjects who voluntarily withdraw), divided by the (number of subjects enrolled), multiplied by 100 to express the result as a percentage.
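The formula above can be sketched as a small function. This is an illustrative implementation, not an official MCC calculation; the function name and example numbers are assumptions for demonstration.

```python
def retention_pct(enrolled: int, voluntary_withdrawals: int) -> float:
    """Percent of enrolled subjects who remain in the study.

    Per the metric definition, involuntary withdrawals and
    discontinuations are NOT counted as withdrawals here.
    """
    if enrolled == 0:
        raise ValueError("no subjects enrolled")
    return (enrolled - voluntary_withdrawals) / enrolled * 100

# Example: 200 subjects enrolled, 36 voluntary withdrawals
print(retention_pct(200, 36))  # 82.0
```

Note that the count of voluntary withdrawals excludes subjects removed for protocol-defined reasons, so those subjects still count toward retention.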
A good target is to stay within 2% of the planned retention percentage. Industry benchmark data suggest that, on average across all protocol phases and therapeutic areas, you can expect an 18% dropout percentage (1).
Example: In the graph below, you can see the retention percentage for a protocol over time. By June, retention is declining significantly, and an additional decline occurs in October. A look at the graph on the right (retention by site for October) reveals the reason.
Country A is doing well, and Country B is doing fairly well. However, Country C sites are all having problems (perhaps a local standard-of-care issue?), and sites C3-C5 are faring particularly poorly compared to their counterparts. An investigation of the Country C sites, including a comparison to the Country A sites, is immediately in order to ascertain the root causes driving the high voluntary dropout percentage in Country C.
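The site-level comparison described above can be sketched as a simple rollup that flags sites falling below a retention threshold. The site names, counts, and the 75% threshold below are hypothetical; they are not from the example data or an MCC standard.

```python
# Hypothetical monthly snapshot: site -> (enrolled, voluntary withdrawals).
sites = {
    "A1": (40, 3), "A2": (35, 2),
    "B1": (30, 4), "B2": (28, 4),
    "C3": (25, 9), "C4": (22, 8), "C5": (20, 8),
}

def retention(enrolled: int, withdrawn: int) -> float:
    """Retention percentage per the metric definition."""
    return (enrolled - withdrawn) / enrolled * 100

THRESHOLD = 75.0  # assumed action threshold for illustration

# Flag sites whose retention falls below the threshold,
# i.e., candidates for root-cause investigation.
flagged = sorted(
    site for site, (n, w) in sites.items()
    if retention(n, w) < THRESHOLD
)
print(flagged)  # -> ['C3', 'C4', 'C5']
```

The same grouping logic extends to country, region, or study level by summing enrolled and withdrawn counts before computing the percentage.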
What you need in order to measure this: You need the following two things for each site (or study) at the end of each month or quarter: the cumulative number of subjects enrolled, and the cumulative number of subjects who have voluntarily withdrawn.
What makes performance on this metric hard to achieve: Retention can suffer because of the complexity of the protocol (demanding visit schedules, burdensome procedures) or because of problems at individual sites.
Things that you can do to improve performance: Once you are tracking this metric, the appropriate improvements depend on the trends you observe: declines across all sites point to protocol-level fixes such as easing the visit schedule or procedures, while declines at specific sites point to staff retraining, closer monitoring or a review of screening and randomization practices at those sites.
Companion metrics: Other metrics that you should consider in tandem with this metric include: the MCC Protocol Quality metric and its related tracking tool, the MCC Site Quality metric and its related tracking tool, and planned vs. actual screen failure ratio.
Dave Zuckerman, CEO, Metrics Champion Consortium, firstname.lastname@example.org
Linda Sullivan, COO, Metrics Champion Consortium, email@example.com
(1) Benchmark data from Clinical Performance Partners, Inc. and PhESi – 2012.