Molasses in Study Startup Efficiencies


Applied Clinical Trials

Numerous factors can adversely impact study startup and its efficiency in an industry plagued by rising development costs and increasing complexities.

Over the past decade, study startup, which encompasses the activities associated with site identification, feasibility assessment, selection, and activation, has become a priority improvement area in the conduct of clinical trials[i],[ii].

Numerous factors can adversely impact study startup efficiency. Complex protocols (which increase the difficulty of finding patients who meet the inclusion/exclusion criteria)[iii], protocol amendments[iv], competition for sites, contract and budget negotiations[v], regulatory changes and compliance requirements for global studies[vi], IRB approvals, PI and CRA turnover[vii], and other factors all contribute to significant trial delays.

Why is there so much variation, and why are there so many inefficiencies and delays, in study startup? That question was the impetus behind this research. The recently completed Start-up Time and Readiness Tracking (START) II study (2017), conducted by the Tufts Center for the Study of Drug Development (CSDD), observed significant differences in cycle times between new and repeat sites and between organization types (sponsors vs. contract research organizations (CROs)). However, the percentage of sites never activated remained at 11%, a figure that has not changed substantially in over a decade. The primary reason cited was budgeting and contracting problems, a challenge identified in much of the published work[viii]. Given the new technology solutions and practices, as well as the increasing number of dedicated personnel managing site relationships, it is surprising and disappointing that the industry has not been able to make any headway in reducing the number of non-active, non-enrolling (NANE) sites.

Given the plethora of new approaches and solutions now being deployed to improve the study startup process, the Tufts CSDD research provides a baseline against which future studies can gauge progress. The study was funded by an unrestricted grant from goBalto, a technology solutions provider.

The research examined a number of areas associated with study startup, including site identification, study feasibility and recruitment planning, criteria for site selection, staffing and resources, and study startup process improvements and opportunities.

The research also examined differences between respondents working at CROs and at sponsor companies, and between organizations with a dedicated functional group supporting site initiation activities and those handling these activities within localized groups. Both subgroups were compared across a number of measures, including cycle times for site identification, site selection, and activation at new and repeat sites. Additionally, the groups were assessed on satisfaction with current processes, the percentage of selected sites that are never activated, level of technology investment, and perceived time savings due to technology.

Respondents represented 403 unique organizations, three-quarters of which were US-based. Just over half of respondents worked at sponsor companies (53%) and roughly a quarter at CROs (24%), with additional responses from sites, medical device companies, and academic institutions.

Site identification cycle time was defined as the time taken to identify appropriate investigative sites. Site selection cycle time was defined as the time from site identification, through feasibility assessment and receipt of site qualification information, to the final site selection decision. Study startup cycle time (also referred to as site readiness or site activation) was measured as the time from the site selection decision until all initial sites (i.e., non-backup or contingency sites) are activated and ready to enroll.
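To make these definitions concrete, here is a minimal sketch (in Python, using entirely hypothetical milestone dates and field names, not drawn from the START II survey instrument) of how the three cycle times can be computed from site milestone data:

from datetime import date

# Hypothetical milestone dates for one study; names and dates are
# illustrative, not from the START II survey instrument.
milestones = {
    "identification_start": date(2017, 1, 2),   # site outreach begins
    "sites_identified":     date(2017, 2, 13),  # candidate list finalized
    "selection_decision":   date(2017, 4, 10),  # final selection made
    "all_sites_activated":  date(2017, 8, 28),  # last initial site ready to enroll
}

def weeks_between(start: date, end: date) -> float:
    # Cycle time in weeks between two milestones.
    return (end - start).days / 7

site_identification = weeks_between(milestones["identification_start"],
                                    milestones["sites_identified"])    # 6.0 wk
site_selection = weeks_between(milestones["sites_identified"],
                               milestones["selection_decision"])       # 8.0 wk
study_startup = weeks_between(milestones["selection_decision"],
                              milestones["all_sites_activated"])       # 20.0 wk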

The research found that cycle times were shorter for repeat sites than for new sites (Figure 1). Clinical operations teams typically rely on relationships with principal investigators built over time, and while using all repeat sites might seem like a logical, surefire way to speed study startup, research suggests that for a typical multicenter study 30% of selected sites are new, of which 13% are completely new to clinical research. Institutional knowledge about sites is frequently dated and siloed within departments, and it may not be relevant to the therapeutic area under investigation; for example, rare and orphan disease trials often require companies to work with sites and investigators they have not interacted with in the past. Moreover, study teams are blinded to a problem inherent in relying on repeat sites: it limits opportunities to engage with new sites that could be more effective than those familiar to the study team[ix].

According to the research, companies do not use a single source of data to identify sites; a mix of non-evidence-based approaches is used, including personal networks, proprietary databases, and recommendations from study teams. Clinical research professionals have, however, recently been seeking access to accurate site-level performance metrics to aid investigative site identification. Use of site-level data to predict enrollment may be a more attractive option for increasing the pool of evidence available to support study startup decision making[x],[xi].
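As an illustration of what evidence-based site identification could look like, the sketch below ranks hypothetical candidate sites on historical performance metrics. The site names, fields, weights, and scoring rule are illustrative assumptions only, not a method from the cited studies:

# A sketch of evidence-based site ranking: score candidate sites on
# historical performance. All names, fields, and weights are illustrative
# assumptions, not drawn from the cited studies.
candidate_sites = [
    {"name": "Site A", "enroll_rate": 1.8, "startup_weeks": 14, "repeat": True},
    {"name": "Site B", "enroll_rate": 2.5, "startup_weeks": 22, "repeat": False},
    {"name": "Site C", "enroll_rate": 0.9, "startup_weeks": 10, "repeat": True},
]

def score(site: dict) -> float:
    # Reward enrollment rate (patients/site/month), penalize slow startup,
    # and give a small bonus for an existing relationship.
    return (2.0 * site["enroll_rate"]
            - 0.1 * site["startup_weeks"]
            + (0.5 if site["repeat"] else 0.0))

for site in sorted(candidate_sites, key=score, reverse=True):
    print(f"{site['name']}: {score(site):.2f}")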

Figure 1. Estimated time (in weeks) each initiation activity currently takes.

The greatest differences were observed between sponsor and CRO practices. Cycle times reported by CROs were significantly shorter than those reported by sponsors: site initiation cycle times were 5.6 weeks (20%) shorter for repeat sites and 11 weeks (28%) shorter for new sites (Figure 2). Overall, CROs report completing all site-related activities 6 to 11 weeks faster than sponsors.

Figure 2. Estimated CRO vs. sponsor cycle times.
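As a quick arithmetic check, the absolute cycle times implied by the deltas reported above can be backed out (a sketch only; the surveyed values themselves appear in Figure 2):

# Back out the absolute cycle times implied by the reported deltas.
# Illustrative arithmetic only; the surveyed values are in Figure 2.
repeat_delta_weeks, repeat_delta_frac = 5.6, 0.20
new_delta_weeks, new_delta_frac = 11.0, 0.28

sponsor_repeat = repeat_delta_weeks / repeat_delta_frac  # 28.0 weeks
cro_repeat = sponsor_repeat - repeat_delta_weeks         # 22.4 weeks
sponsor_new = new_delta_weeks / new_delta_frac           # ~39.3 weeks
cro_new = sponsor_new - new_delta_weeks                  # ~28.3 weeks

print(f"Repeat sites: sponsor {sponsor_repeat:.1f} wk vs. CRO {cro_repeat:.1f} wk")
print(f"New sites:    sponsor {sponsor_new:.1f} wk vs. CRO {cro_new:.1f} wk")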

The trend to outsource clinical trial operations to CROs is rooted in intense competition to improve productivity, driving sponsors to contain operational and infrastructure costs while completing projects better, faster, and more efficiently. These external service providers can achieve economies of scale unavailable to sponsors by combining the volumes of multiple companies. Although the differences were not statistically significant, the research found that on average CROs dedicate 18 FTEs to site selection and 30 to activation, compared with averages of 12 and 13, respectively, for sponsors. CROs also report a lower level of site non-activation, at 8.7%. One potential explanation is that sponsors are relying more on CROs to manage site activity, and CROs have in turn been able to invest in processes that create efficiencies.

Many of the improvement areas cited involve new technologies or changes in organizational processes, and they require significant investments of resources and time. Despite many attempts at improvement within organizations, gains in end-to-end cycle time have not been made.

Practices intended to streamline study startup timelines include technology investments that expedite the collection of clinical data and help sponsors/CROs better monitor clinical trial performance. New technologies include predictive analytics and site forecasting for investigator identification, automated online site feasibility and site scoring systems for faster turnaround times, and electronic document exchange repositories to speed essential document collection[xii]. Many sponsors and CROs have also implemented clinical trial management systems (CTMS), electronic cloud-based solutions, and online clinical document exchange portals[xiii]. Shared investigator databases are another resource that organizations are utilizing.

CRO and sponsor subgroups also differ in technology investment. On average, those working at CROs invest about 10% more frequently across all areas of study initiation (identification, feasibility, selection, and activation) and are more likely than sponsors to invest moderately to heavily in each area (Figure 3).

Figure 3. The extent to which respondents are investing in additional technology to support each initiation activity.

 

According to the research, 80% of respondents who have invested in technology report time savings. Respondents who report that their technology is adequate have 30% shorter cycle times than those with inadequate technologies.

On average, organizations with dedicated functional groups report more than twice as much investment as those without a dedicated function (49% vs. 22%). However, the research uncovered no evidence that a centralized site identification and selection function offers a significant speed advantage. The study did reveal a few differences when comparing cycle times for centralized and localized groups working with both repeat and new sites: organizations with localized groups are slightly faster overall (6.2% faster for repeat sites and 3% for new sites; Figures 4 and 5).

Organizations with centralized functional groups were slightly slower in site selection and study startup for both repeat and new sites, but slightly faster in site identification (repeat: 3.3 vs. 3.7 weeks; new: 5.8 vs. 7.1 weeks).

Figure 4. Estimated centralized vs localized functional group cycle times, repeat sites.

Figure 5. Estimated centralized vs localized functional group cycle times, new sites.

Despite the gap in reported technology investment, both groups had similar views of their technology capabilities. However, organizations with centralized functions were more than twice as likely to report large time savings due to technology (25.9% vs. 12.8%) and 11 percentage points less likely to report no time savings from technology (16.7% vs. 27.7%).

On average, 10% of respondents reported being very satisfied with their study startup processes, whereas 30% to 40% expressed dissatisfaction. Respondents reporting that they are very satisfied have cycle times 75.5% shorter than those reporting that they are completely dissatisfied. Overall, nearly 40% of respondents are still using unsophisticated methods (e.g., Excel or paper-based systems), which may contribute to lower satisfaction levels.

Nevertheless, irrespective of organizational structure, expertise, experience, and project management are equally important to both groups, which face similar challenges and see the same opportunities for improvement.

Respondents were largely aligned on which measures would be most effective at enhancing various study startup activities. The top-cited option for enhancing site identification was “pooling and sharing data on site performance,” with 88.9% of respondents indicating that it would enhance the process. For site selection, the top-cited option was to “get better evidence of a site’s true potential before selection” (95.7%), and for activation, a “central IRB/Ethics approval process” (94.5%).

This research presents new benchmark metrics on the comprehensive cycle time from site identification through site activation. Overall, the study startup process is still very long, averaging five to six months in total duration, a figure that has not improved over the past decade.

There is still a pervasive need for effective solutions across the industry despite many commercially available options (IMS Study and Site Optimizer, TransCelerate Shared Investigator Platform, and Investigator Databank) and internal solutions (e.g., internal investigator dashboards and site feasibility software)[xiv].

There is wide variation and inconsistency in study startup practices within and between sponsor companies[xv]. Given the high cost of initiating a single site, estimated at $20,000 to $30,000 plus another $1,500 per month to maintain site oversight, and the prevalence of delays and inefficiencies associated with study startup activity, sponsors and CROs are continually looking to improve their study startup cycle times.
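To put those per-site figures in perspective, here is a back-of-the-envelope sketch for a hypothetical 50-site study; the site count and oversight duration are assumptions for illustration, not figures from the research:

# Back-of-the-envelope startup cost for a hypothetical 50-site study,
# using the per-site estimates cited above. Site count and oversight
# duration are assumptions for illustration.
n_sites = 50
init_low, init_high = 20_000, 30_000      # one-time cost to initiate a site
oversight_per_site_month = 1_500          # ongoing site oversight
oversight_months = 24                     # assumed oversight period

initiation_low = n_sites * init_low       # $1,000,000
initiation_high = n_sites * init_high     # $1,500,000
oversight_total = n_sites * oversight_per_site_month * oversight_months  # $1,800,000

print(f"Initiation: ${initiation_low:,} to ${initiation_high:,}")
print(f"Oversight over {oversight_months} months: ${oversight_total:,}")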

The full report, subsequent mini-reports, as well as the groundbreaking START research conducted in 2012 are available for download from the goBalto Resource Center (https://www.gobalto.com/resource-center).

[i] CTTI recommendations: efficient and effective clinical trial recruitment planning. https://www.ctti-clinicaltrials.org/projects/recruitment. Accessed November 29, 2017.

[ii] Abbott, D, Califf, R, Morrison, B. Cycle time metrics for multisite clinical trials in the United States. Therapeutic Innovation & Regulatory Science. 2013;47:152–160. http://journals.sagepub.com/doi/abs/10.1177/2168479012464371

[iii] Lamberti, M, Mathias, A, Myles, J, Howe, D, Getz, KA. Evaluating the impact of patient recruitment and retention practices. Therapeutic Innovation & Regulatory Science. 2012;46:573–580. http://journals.sagepub.com/doi/abs/10.1177/0092861512453040

[iv] Peters S, Lowy P. Protocol amendments improve elements of clinical trial feasibility, but at high economic and cycle time cost. Press Release. Tufts Center for the Study of Drug Development. January 14, 2016. Available at: http://csdd.tufts.edu/news/complete_story/pr_ir_jan_feb_2016. Accessed February 20, 2017.

[v] Financial and operating benchmarks for investigative sites: 2016 CenterWatch-ACRP collaborative survey. https://www.acrpnet.org/resources/financial-operating-benchmarks-investigative-sites. Accessed August 24, 2017.

[vi] Morgan, C. Restricting Regulations. International Clinical Trials. 2017. http://www.samedanltd.com/magazine/13/issue/281/article/4693

[vii] Miseta, E. Is There a Solution to the CRA Shortage Problem? Clinical Leader. January 4, 2018. https://www.clinicalleader.com/doc/is-there-a-solution-to-the-cra-shortage-problem-0001

[viii] Financial and operating benchmarks for investigative sites: 2016 CenterWatch-ACRP collaborative survey. https://www.acrpnet.org/resources/financial-operating-benchmarks-investigative-sites. Accessed August 24, 2017.

[ix] Morgan, C. Removing the Blinders in Site Selection. Advance Healthcare Network. August 3, 2016. http://health-information.advanceweb.com/Features/Articles/Removing-the-Blinders-in-Site-Selection-2.aspx

[x] Sears, C, Cascade, E. Using public and private data for clinical operations. Appl Clin Trials. 2017;25:22–26.  http://www.appliedclinicaltrialsonline.com/using-public-and-private-data-clinops

[xi] Sullivan, L. Defining “quality that matters” in clinical trial study start up activities. Monitor. 2011;25:22–26. 

[xii] Lamberti, M, Brothers, C, Manak, D, Getz, KA. Benchmarking the study initiation process. Therapeutic Innovation & Regulatory Science. 2013;47:101–109. http://journals.sagepub.com/doi/10.1177/2168479012469947

[xiii] Morgan, C. The need for speed in clinical study start up. Clinical Leader. http://www.clinicalleader.com/doc/the-need-for-speed-in-study-startup-0001. Accessed August 27, 2017.

[xiv] Sears, C, Cascade, E. Using public and private data for clinical operations. Appl Clin Trials. 2017;25:22–26.

[xv] Schimanksi, C, Kieronski, M. Streamline and improve study start-up. Appl Clin Trials. 2013;22:22–27. 

 

Craig Morgan, Head of Marketing, goBalto
