Beyond Prediction: An Evidence-Based Framework for Assessing Site Selection Decisions

Key Takeaways

  • Effective site selection is crucial for clinical trial success, yet challenges remain despite AI-driven advancements.
  • A disconnect exists between sponsors/CROs focusing on metrics and sites with local insights but limited resources.

A new systems-based framework helps sponsors and CROs move beyond prediction to understand why site selection succeeds or fails, enabling targeted interventions that boost recruitment, data quality, and trial efficiency.

Credit: photon_photo | stock.adobe.com

Background

Clinical trial success depends on selecting appropriate clinical sites. Poor site selection can lead to delayed recruitment, compromised data quality, increased costs, and regulatory risks.16

Despite technological advances in data-driven site selection using AI systems and diverse data sources,12,7,15 fundamental challenges persist in how selection processes are structured and evaluated.

Current literature exhibits a disconnect between stakeholder perspectives. Sponsors and CROs focus on quantitative indicators like meeting enrollment targets,4 while sites possess closer patient proximity and local operational knowledge but often lack resources for rigorous feasibility analyses.1

"Our framework enables systematic evaluation of whether site selection decisions deliver intended outcomes throughout the trial lifecycle. Unlike existing approaches focused on improving selection methodologies, our framework provides tools for diagnosing why selection decisions succeed or fail, identifying specific deficiencies in data interpretation and application."

Evidence of systematic problems emerges when examining performance patterns: sites selected for strong historical performance may struggle with recruitment, while underperforming sites may improve following targeted interventions.1,16 Many trials continue reporting 30%-40% non-performing sites despite recent advances in analytics and AI-based selection tools.8,12,7,15

Current approaches catalog factors believed to influence site performance through surveys and predictive models. However, these approaches emphasize prediction over explanation, limiting understanding of the relationships among factors and the mechanisms affecting outcomes.

We propose distinguishing between site inputs (resources and operating environment) and capabilities (how effectively sites adapt, coordinate, and problem-solve as trials evolve).

Our framework enables systematic evaluation of whether site selection decisions deliver intended outcomes throughout the trial lifecycle. Unlike existing approaches focused on improving selection methodologies, our framework provides tools for diagnosing why selection decisions succeed or fail, identifying specific deficiencies in data interpretation and application.

Methods

We developed a conceptual framework grounded in input-process-output logic from systems modeling, adapted for clinical trial site selection.10,13 This systems perspective structures site evaluation into three interconnected layers: inputs, dynamic capabilities, and outputs. A minimal code sketch of these layers follows the list below.

  • Inputs refer to observable, measurable characteristics including site resources and operating environment.
  • Dynamic Capabilities refer to coordinated site-level abilities governing how resources are organized and executed to achieve performance.14
  • Outputs evaluate system performance through metrics such as enrollment numbers.
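
To make these layers concrete, the sketch below renders them as a minimal Python data model. All class and field names are our own illustrative assumptions, chosen to echo the factor categories above rather than terminology from any particular trial system.

    from dataclasses import dataclass

    # Minimal, illustrative data model for the framework's three layers.
    # Every field name is a hypothetical example, not a standard term.

    @dataclass
    class SiteInputs:
        """Observable, measurable characteristics known before selection."""
        coordinator_fte: float              # staffing resources
        eligible_patients_per_month: int    # recruitment landscape
        competing_trials: int               # operating environment
        historical_enrollment_rate: float   # patients/month in past trials

    @dataclass
    class SiteCapabilities:
        """Latent abilities, inferred at selection, observed only during conduct."""
        coordination: float     # 0-1: organizing resources across functions
        adaptability: float     # 0-1: responding to protocol amendments
        problem_solving: float  # 0-1: recovering from recruitment setbacks

    @dataclass
    class SiteOutputs:
        """Measured performance once the trial is underway."""
        enrolled: int
        enrollment_target: int
        queries_per_crf_page: float  # simple data-quality indicator

        @property
        def met_target(self) -> bool:
            return self.enrolled >= self.enrollment_target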

We operationalized this framework using three data sources. First, we conducted a literature review using PubMed with keywords including "clinical trial" and "site selection," yielding 14 papers that met our inclusion criteria: substantive discussion of site selection and a substantial participant sample.

Second, we analyzed two publicly available feasibility questionnaires (THRIVE and PRISM trials) to demonstrate practical application.

Third, we conducted 30-minute interviews with 21 industry professionals (average 23.4 years of experience) representing CROs, pharmaceutical companies, and clinical sites to validate practical relevance.

Results

Framework Components

Figure 1 presents our framework applied to site selection assessment.

  1. Inputs comprise personnel factors, infrastructure elements, protocol specifications, recruitment landscape characteristics, and historical performance data.
  2. Dynamic Capabilities encompass five site-level functions spanning pre-selection assessment through post-selection management phases.
  3. Outputs represent measurable performance metrics including enrollment rates, data quality indicators, and operational efficiency measures.
Figure 1. Framework for assessing and improving site selection and engagement: Given data, the decision maker infers the site's capabilities and expected outputs before making recommendations. Ongoing monitoring enables auditability and targeted interventions.

The framework conceptualizes site selection as a continuous, learning-oriented process rather than a discrete decision point. Dynamic capabilities, which ultimately drive performance, are not directly observable during initial assessment.

Decision-makers must rely on input data to infer both capabilities and anticipated outcomes. Post-selection performance monitoring generates output metrics enabling retrospective evaluation of actual capabilities, creating feedback loops for systematic learning and proactive intervention.
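
One way to picture this feedback loop in miniature is to blend the capability estimate inferred from inputs at selection time with the output evidence that monitoring produces. The update rule, weight, and threshold below are deliberately simple assumptions for illustration, not a fitted model from the paper.

    # Illustrative feedback loop: revise a pre-selection capability
    # estimate (0-1) as monthly enrollment evidence arrives.

    def update_capability_estimate(prior: float, actual: float,
                                   expected: float, weight: float = 0.3) -> float:
        """Move the estimate toward the observed actual/expected ratio."""
        evidence = min(actual / expected, 1.0) if expected > 0 else 0.0
        return (1 - weight) * prior + weight * evidence

    # A site selected with high expectations (0.8) then enrolls well
    # below its monthly target of 5 for three consecutive months.
    estimate = 0.8
    for actual in [2, 1, 1]:
        estimate = update_capability_estimate(estimate, actual, expected=5)
        if estimate < 0.6:
            print(f"estimate {estimate:.2f}: trigger proactive intervention")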

Key Finding

Analysis of 14 papers revealed inaccurate feasibility projections as the primary concern, mentioned in nine studies. This creates a circular reasoning problem: when sites estimate their enrollment capacity, they perform the same complex integration of inputs and capabilities that sponsors/CROs are trying to evaluate.

In effect, sites' self-assessments of their performance potential serve as the predictors of that same potential. Even experienced trial managers err in predicting site recruitment success.1 This problem is exacerbated when feasibility questionnaires are distributed without complete protocol information, forcing sites to make estimates from incomplete data.1

Patient recruitment challenges were the next most frequent concern, with trial advertising, use of historical performance data, and protocol management ability each appearing in seven studies. Among input factors, competing trials and insufficient resources were mentioned most often (five studies each), indicating that enrollment capability is a central evaluation concern.

Practitioner Validation

Interviews revealed site selection practices adapting to more complex protocols, a greater emphasis on patient diversity, and emerging technologies. Practitioners are implementing solutions aligning with our framework's principles across three areas:

  • Enhanced Input Assessment: Organizations invest in platforms integrating multiple data sources, real-world data, and AI to create detailed site profiles. However, inputs collected during pre-selection are snapshots that can change throughout trials, potentially undermining initial selection assumptions.
  • Dynamic Capability Co-Creation: The industry shows a fundamental shift from transactional relationships to collaborative site development, fostering cultural alignment and shared goals. Solutions addressing "technology overload" focus on coordination capabilities rather than adding technological inputs.
  • Real-time Output Monitoring: Growing acceptance of performance indicators for proactive site management during trials, providing iterative performance views that enable diagnostics and timely interventions rather than waiting for final outcomes (a hypothetical monitoring check follows this list).
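
As a hypothetical illustration of such a real-time check, the sketch below flags sites whose enrollment rate lags their target or whose screening activity has stalled. The field names and thresholds are assumptions for illustration, not values drawn from the interviews.

    from datetime import date

    # Hypothetical monitoring check: flag sites for proactive intervention
    # when enrollment lags the target or screening activity goes quiet.

    def flag_sites(sites: list[dict], today: date,
                   min_rate_ratio: float = 0.5, max_idle_days: int = 90) -> list[str]:
        flagged = []
        for s in sites:
            lagging = s["enrolled_per_month"] < min_rate_ratio * s["target_per_month"]
            idle = (today - s["last_screening"]).days > max_idle_days
            if lagging or idle:
                flagged.append(s["site_id"])
        return flagged

    sites = [
        {"site_id": "S01", "enrolled_per_month": 2.0, "target_per_month": 3.0,
         "last_screening": date(2025, 5, 10)},
        {"site_id": "S02", "enrolled_per_month": 0.0, "target_per_month": 2.0,
         "last_screening": date(2024, 6, 1)},
    ]
    print(flag_sites(sites, today=date(2025, 6, 1)))  # ['S02']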

Illustrative Example

A CRO facing enrollment delays in a large oncology trial chose to optimize existing sites' dynamic capabilities rather than expanding inputs through additional sites. Despite 40 sites having adequate inputs (qualified investigators, patient access, infrastructure), multiple sites had not screened patients in over 12 months.

A six-month intervention focused on capability enhancement: site-specific engagement plans, collaborative monitoring strategies, and targeted protocol clarification. This produced measurable improvements: inactive sites resumed screening, and the overall program enrollment trajectory improved significantly, demonstrating the effectiveness of capability-focused interventions over resource expansion.

Discussion and Conclusions

Our framework addresses three main improvement areas:

  1. Widespread reliance on inaccurate feasibility projections creates circular reasoning, in which sites are asked to solve the very prediction problem that systematic selection aims to address.
  2. Traditional performance categorizations fail to account for the dynamic nature of capabilities and their interaction with available inputs.
  3. Current data collection may generate non-actionable information that cannot inform targeted interventions.

The framework's limitations include not explicitly capturing complex relationships between CROs, sponsors, and funders influencing selection decisions, nor addressing varying decision-making authority across organizational structures. Interview data revealed sponsors may reject CRO recommendations due to budget constraints or strategic considerations operating independently of capability assessments.

To address this gap, we adapt our framework into the flowchart presented in Figure 2 to support more transparent and systematic site selection processes. This flowchart distinguishes between initial site recommendation, structured validation, and final decision points where designated decision-makers accept or reject recommendations.

This process clarifies accountability, enhances consistency, and supports downstream monitoring and learning.

Figure 2. Recommended operational site selection process using the input–capability–output framework.
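
A minimal sketch of how this staged process could be recorded for auditability appears below. The stage names follow Figure 2; the record fields, actors, and example entries are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum, auto

    class Stage(Enum):
        RECOMMENDATION = auto()  # initial site recommendation
        VALIDATION = auto()      # structured validation of the recommendation
        DECISION = auto()        # designated decision-maker accepts or rejects
        MONITORING = auto()      # downstream performance monitoring

    @dataclass
    class AuditEntry:
        site_id: str
        stage: Stage
        actor: str    # who is accountable at this step
        outcome: str  # e.g., "recommended", "validated", "rejected: budget"
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    trail = [
        AuditEntry("S07", Stage.RECOMMENDATION, "CRO feasibility team", "recommended"),
        AuditEntry("S07", Stage.DECISION, "sponsor operations lead",
                   "rejected: budget constraint"),
    ]
    # The trail makes explicit that this rejection was a budget decision,
    # not a capability-assessment failure.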

By distinguishing between inputs, capabilities, and outputs, we transform factors into actionable insights, clarifying whether focus should be on providing additional resources or enhancing operational processes. This enables more targeted and effective interventions.

Organizations should systematically track the percentage of non-performing sites as a standard benchmark for evaluating their selection capabilities. Defined consistently across the industry, this metric can be used to diagnose whether failures stem from inadequate inputs, capability-assessment errors, or external factors.
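
Once a consistent definition is fixed, computing the benchmark itself is straightforward. The sketch below assumes, purely for illustration, that a non-performing site is one enrolling at most one participant; the industry would need to agree on the actual definition.

    # Benchmark sketch: share of selected sites meeting a fixed
    # non-performance definition (here, <=1 participant enrolled).

    def non_performing_pct(per_site_enrollment: list[int],
                           threshold: int = 1) -> float:
        if not per_site_enrollment:
            return 0.0
        low = sum(1 for e in per_site_enrollment if e <= threshold)
        return 100.0 * low / len(per_site_enrollment)

    # Example: a 40-site program where 14 sites enrolled 0 or 1 participants.
    per_site = [0] * 8 + [1] * 6 + [5] * 26
    print(f"{non_performing_pct(per_site):.0f}% non-performing")  # 35%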

The framework supports both prospective site selection and ongoing performance monitoring throughout trial lifecycles. Rather than traditional "good site" versus "bad site" categorizations, we recommend collaborative partnerships focused on capability enhancement, recognizing that both inputs and capabilities evolve during trial execution.

Funding Sources: This work was supported by the Bill & Melinda Gates Foundation, Seattle, WA (Grant Award Number: INV-080704).

References

1. Hanne Bruhn, Shaun Treweek, Anne Duncan, Kirsty Shearer, Sarah Cameron, Karen Campbell, Karen Innes, Dawn McRae, and Seonaidh Cotton. Estimating site performance (ESP): Can trial managers predict recruitment success at trial sites? An exploratory study. Trials, 20, 2019.

2. Beau Bruneau, Kristin Surdam, Amy Bland, Amy Krueger, Andrew Wise, Ani Cotarlan, Asher Leviton, Elena Jouravleva, Grace Fitzgerald, Heather N. Frost, Honora F. Cutler, Joshua Buddle, Luis G. Diaz, Michele Cohen, Nancy A. Sacco, Ryan Washington, Susan Mauermann, Victor Chen, and Andrea Bastek. Redefining feasibility in clinical trials: Collaborative approaches for improved site selection. Contemporary Clinical Trials Communications, 40:101291, 2024.

3. Nayan Chaudhari, Renju Ravi, Nithya J. Gogtay, and Urmila M. Thatte. Recruitment and retention of the participants in clinical trials: Challenges and solutions. Perspectives in Clinical Research, 11(2):64-69, 2020.

4. T. Dombernowsky, M. Haedersdal, U. Lassen, and S.F. Thomsen. Criteria for site selection in industry-sponsored clinical trials: a survey among decision-makers in biopharmaceutical companies and clinical research organizations. Trials, 20:1-12, 2019.

5. David B. Fogel. Factors associated with clinical trials that fail and opportunities for improving the likelihood of success: A review. Contemporary Clinical Trials Communications, 2018.

6. Marta Gehring, Rod S. Taylor, Marie Mellody, Brigitte Casteels, Angela Piazzi, Gianfranco Gensini, and Giuseppe Ambrosio. Factors influencing clinical trial site selection in Europe: the Survey of Attitudes towards Trial sites in Europe (the SAT-EU Study). BMJ Open, 3(11), 2013.

7. Lars Hulstaert, Isabell Twick, Khaled Sarsour, and Hans Verstraete. Enhancing site selection strategies in clinical trial recruitment using real-world data modeling. PLOS ONE, 19(3):e0300109, 2024.

8. Otis Johnson. An evidence-based approach to conducting clinical trial feasibility assessments. Clinical Investigation, 5:491-499, 2015.

9. Ravindra A. Kadam, Shobhana U. Borde, S. A. Madas, S. S. Salvi, and Shivanand S. Limaye. Challenges in recruitment and retention of clinical trial subjects. Perspectives in Clinical Research, 7(3):137-143, 2016.

10. Michael LeFew, Anh Ninh, and Vladimir Anisimov. End-to-end drug supply management in multicenter trials. Methodology and Computing in Applied Probability, 23(3):695-709, 2021.

11. Diep Nguyen, Grace Mika, and Anh Ninh. Age-based exclusions in clinical trials: a review and new perspectives. Contemporary Clinical Trials, 114:106683, 2022.

12. Anh Ninh, Yunhong Bao, Daniel McGibney, and Tuan Nguyen. Clinical site selection problems with probabilistic constraints. European Journal of Operational Research, 316(2):779-791, 2024.

13. Anh Ninh, Michael LeFew, and Vladimir Anisimov. Clinical trial simulation: modeling and practical considerations. In Proceedings of the Winter Simulation Conference, pages 118-132, 2019.

14. David J. Teece, Gary Pisano, and Amy Shuen. Dynamic capabilities and strategic management. Strategic Management Journal, 18(7):509-533, 1997.

15. Brandon Theodorou, Lucas Glass, Cao Xiao, and Jimeng Sun. FRAMM: Fair ranking with missing modalities for clinical trial site selection. Patterns (N Y), 5(3):100944, 2024.

16. Colin Zahren, Simon Harvey, Lynette Weekes, Cathy Bradshaw, Raksha Butala, Jason Andrews, and Sandra O'Callaghan. Clinical trials site recruitment optimisation: Guidance from Clinical Trials: Impact and Quality. Clinical Trials, 18(5):594-605, 2021.
