Amplifying Patient Voices in Protocol Design

Applied Clinical Trials, September 1, 2021
Volume 30
Issue 9

Insights from studies on advisory boards and participation burden.

During the pandemic, patient engagement (e.g., participation convenience, safety, research relevance, ease of use) has been a guiding principle in the transition to remote and virtual clinical trial support. Given the high level of customization inherent in decentralized and hybrid clinical trial execution models, a growing number of sponsor companies are now looking in earnest to quantify the impact of patient-centric initiatives. Although the Tufts Center for the Study of Drug Development (CSDD) and others have dedicated considerable time and attention to gathering objective evidence, until recently very little such evidence has been available.

Several years ago, the Drug Information Association (DIA) and Tufts CSDD analyzed 121 distinct case studies to quantify the impact of patient-centric initiatives. In this early study, patient engagement initiatives that offered the highest impact given the investment required were patient advisory boards, professional panels, social media engagement, and patient education programs. These initiatives were relatively simple to implement and inexpensive, yet they reportedly resulted in faster study planning, improvements in recruitment and retention rates, fewer disruptions and delays, and more positive study volunteer feedback. Unfortunately, there was too much variability in definitions, types, and quality of the metrics collected across case examples. As a result, return on patient engagement could not be generalized.

In another study, Tufts CSDD—in collaboration with the Clinical Trials Transformation Initiative (CTTI)—developed a method for projecting the financial value of patient engagement using standard risk-adjusted financial modeling techniques. The CTTI-CSDD team assessed the impact of patient engagement on the expected net present value (ENPV) of a typical oncology development program entering Phase II or Phase III clinical trials. For a Phase II project, the cumulative impact of a patient engagement activity (e.g., a patient advisory board review) that avoided one protocol amendment and improved enrollment, adherence, and retention was an increase in net present value (NPV) of $62 million, rising to $65 million by Phase III, and an increase in ENPV of $35 million, rising to $75 million by Phase III. Depending on the scenario, a $100,000 investment in a patient-centric initiative during study design could produce NPV and ENPV gains 500 times that amount. This ENPV increase is equivalent to accelerating a product launch by 30 months for a pre-Phase II project or by 18 months for a pre-Phase III project.
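To make the mechanics of this kind of projection concrete, the minimal sketch below sets up a risk-adjusted ENPV calculation in Python: each stage's cash flows are discounted and weighted by the cumulative probability of reaching that stage, and a patient engagement scenario is modeled as an earlier launch. The phases, probabilities, cash flows, and discount rate are hypothetical placeholders, not the values from the CTTI-CSDD model.

```python
# A minimal sketch of risk-adjusted financial modeling in the spirit of the
# approach described above. All phases, probabilities, cash flows, and the
# discount rate are hypothetical placeholders, not the CTTI-CSDD figures.

DISCOUNT_RATE = 0.11  # assumed annual cost of capital


def npv(cash_flows, rate=DISCOUNT_RATE):
    """Net present value of a list of (year, cash_flow_in_$M) pairs."""
    return sum(cf / (1 + rate) ** year for year, cf in cash_flows)


def enpv(stages, rate=DISCOUNT_RATE):
    """Expected NPV: each stage's cash flows are weighted by the cumulative
    probability of reaching that stage. Setting every probability to 1.0
    yields the unrisked NPV."""
    total, p_reach = 0.0, 1.0
    for p_advance, cash_flows in stages:
        total += p_reach * npv(cash_flows, rate)
        p_reach *= p_advance
    return total


# Hypothetical oncology program entering Phase II:
# (probability of advancing past the stage, [(year, cash flow in $M), ...])
baseline = [
    (0.35, [(0, -30), (1, -30)]),                      # Phase II costs
    (0.60, [(2, -60), (3, -60)]),                      # Phase III costs
    (0.90, [(4, -10)]),                                # submission/approval
    (1.00, [(5, 300), (6, 500), (7, 600), (8, 600)]),  # commercial revenues
]

# Engagement scenario: an avoided amendment and faster enrollment bring the
# launch forward by six months (illustrative assumption); costs are unchanged.
engaged = baseline[:-1] + [
    (1.00, [(year - 0.5, cf) for year, cf in baseline[-1][1]])
]

print(f"Baseline ENPV: ${enpv(baseline):.1f}M")
print(f"Engaged ENPV:  ${enpv(engaged):.1f}M")
print(f"Gain from engagement: ${enpv(engaged) - enpv(baseline):.1f}M")
```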

During the past 24 months, Tufts CSDD has undertaken several new studies to gather more objective and practical evidence quantifying the impact of patient engagement. Sponsor focus has lately been particularly high on protocol design, given the fundamental role it plays in directing downstream clinical trial activity. Sponsors expect that incorporating patient preferences and needs during protocol authoring and finalization offers an unprecedented opportunity to optimize study design and positively impact clinical trial performance and cost. The results of two CSDD studies now underway inform this area.

Patient input on draft protocols

Patient advisory boards (PABs), sometimes called input panels, entail soliciting feedback from a group of patients and their caregivers on the feasibility and convenience of near-final protocol designs. PABs are typically moderated meetings of six to 10 people. Although they are not enrolled in clinical trials, advisory board participants usually share the burden of the disease investigated by the protocol, permitting them to speak from direct experience. At this time, an estimated two-thirds of the 50 largest pharmaceutical and biotechnology companies have implemented one or more PABs to obtain specific protocol design feedback.

To assess the impact of PABs, Tufts CSDD has analyzed data from a convenience sample of Phase II and III protocols conducted between 2015 and 2019 across multiple therapeutic areas. Of 153 protocols analyzed, 12 utilized PABs. Given this small sample size and the wide variation in disease conditions represented, these results should be interpreted with caution until a more robust sample has been analyzed.

PAB input resulted in simpler protocols and more targeted designs compared to those that did not receive input. PAB-informed protocols had, on average, fewer endpoints, eligibility criteria, and distinct procedures.

The number of procedures performed per visit and the total volume of data collected per patient were significantly lower for PAB-informed protocols. Protocols informed by PABs had an average of 15 endpoints and 27 eligibility criteria, 30% and 20% fewer, respectively, than protocols that were not informed by PABs. Protocols that used a PAB for design input collected 2.2 million data points, 50% fewer than protocols that did not receive PAB input. PAB-informed protocols also had a lower percentage of procedures supporting tertiary and miscellaneous endpoints.

The results also indicated that protocols receiving PAB input had better relative performance and efficiency. Notable differences between protocols informed by PABs and those that were not include faster study initiation, shorter clinical trial duration, quicker study close-out, lower-than-planned study budgets, and fewer substantial protocol amendments on average.

The actual cycle times of protocols that did not use PABs typically exceeded planned timelines by 20% to 90%. Actual cycle times for PAB-informed protocols came in closer to plan (e.g., study initiation cycle time) or beat planned timelines by 6% to 17%. Actual PAB-informed protocol budgets came in 28% below plan, whereas protocols that did not use a PAB tended to have actual budgets 3% above plan. Protocols that used a PAB had, on average, one fewer substantial amendment than those that did not.

Designs informed by participation burden

The use of a patient burden assessment score or algorithm has been gaining momentum as valuable input into early-stage protocol authoring. A tool developed by the National Institutes of Health several years ago measured patient attitudes and correlated them with patients' self-reported likelihood of participating in, and intent to drop out of, a clinical trial.

Dassault Systèmes and IQVIA have been developing, and have begun to pilot-test, patient burden assessment tools (the Patient Burden Index and the Patient Friction Coefficient, respectively) among pharmaceutical companies.

In 2019, Tufts CSDD, in collaboration with ZS (a global management consulting firm), developed and tested an initial participant burden algorithm against pharmaceutical and biotechnology company protocols, based on the perceptions and preferences of 591 patients living with one of five disease conditions in the US.

During the past 14 months, we’ve compiled overall and clinical trial-specific preferences and perceptions from 3,002 global patients across 39 disease conditions to derive and test a revised algorithm against a convenience sample of 266 recently completed protocols. This algorithm takes an expansive look at participation burden and includes protocol procedures, clinical trial logistics and convenience, disease burden, medication adherence, mobility, lifestyle restrictions, and caregiver reliance.
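For illustration only, the sketch below shows one way a composite participation burden score spanning the domains listed above could be assembled: each domain receives a 0-100 rating and a weight, and the weighted sum yields a protocol-level score. The weights, rating scale, and example protocol profile are assumptions for demonstration and are not the Tufts CSDD/ZS algorithm.

```python
# Minimal sketch of a composite participation-burden score across the domains
# named in the article. The weights, rating scale, and example protocol are
# hypothetical; this is not the Tufts CSDD/ZS algorithm.

# Hypothetical weights reflecting how strongly each domain is assumed to
# drive perceived burden (sum to 1.0 so the composite stays on a 0-100 scale).
WEIGHTS = {
    "protocol_procedures":    0.25,
    "logistics_convenience":  0.20,
    "disease_burden":         0.15,
    "medication_adherence":   0.10,
    "mobility":               0.10,
    "lifestyle_restrictions": 0.10,
    "caregiver_reliance":     0.10,
}


def burden_score(domain_ratings):
    """Weighted composite on a 0-100 scale. Each domain is rated from
    0 (no burden) to 100 (maximum burden), e.g., derived from patient
    preference surveys combined with protocol attributes."""
    return sum(WEIGHTS[domain] * domain_ratings[domain] for domain in WEIGHTS)


# Hypothetical profile for a demanding Phase III protocol.
example_protocol = {
    "protocol_procedures":    70,  # many visits and invasive procedures
    "logistics_convenience":  55,  # some remote visits, long travel otherwise
    "disease_burden":         60,
    "medication_adherence":   40,
    "mobility":               35,
    "lifestyle_restrictions": 50,
    "caregiver_reliance":     30,
}

print(f"Participation burden score: {burden_score(example_protocol):.1f}/100")
```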

The revised participation burden algorithm is highly associated with, and predictive of, many clinical trial performance outcomes. Mean participant burden scores were highly correlated with, and predictive of, screen failure rates. This result is consistent with past qualitative studies that have demonstrated the impact of perceived burden on reducing patient willingness to enroll in clinical trials.

Mean participant burden scores were also positively correlated with, and predictive of, treatment duration and overall clinical trial duration. These two cycle time measures are in part a function of the long-term effects of protocol demands on participant retention.

Burden scores were not predictive of study start-up (approval to first patient first visit, or FPFV) or study close-out (last patient last visit, or LPLV, to database lock) durations. These latter two cycle times are less dependent on the study participant experience and more dependent on clinical research professional preparedness, coordination, and efficiency.
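As a rough sketch of the kind of association testing described in this section, the code below simulates protocol-level burden scores alongside one outcome that depends on burden (screen failure rate) and one that does not (close-out duration), then computes Pearson correlations. The simulated data and effect sizes are illustrative assumptions, not the study's 266-protocol sample.

```python
# Sketch of testing whether burden scores are associated with protocol-level
# outcomes. Data are simulated for illustration only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_protocols = 266

# Composite burden scores on a 0-100 scale (hypothetical distribution).
burden = rng.uniform(20, 80, n_protocols)

# Simulated outcomes: screen failure rate rises with burden (by construction),
# while close-out duration is generated independently of burden.
screen_failure = 0.15 + 0.004 * burden + rng.normal(0, 0.05, n_protocols)
closeout_months = rng.normal(3.0, 1.0, n_protocols)

for name, outcome in [("screen failure rate", screen_failure),
                      ("close-out duration", closeout_months)]:
    r, p = pearsonr(burden, outcome)
    print(f"burden vs {name}: r = {r:+.2f}, p = {p:.3g}")
```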

We were disappointed to find that our enhanced participant burden score was not associated with, or predictive of, clinical trial drop-out rates. We used a broad definition of “drop-out rates” in the study that included all factors other than volunteer mortality. For this reason, drop-out rates were a function of far more than the volunteer’s decision to prematurely terminate participation. Drop-out rates included administrative factors; lack of efficacy and insufficient therapeutic response; drug safety issues and adverse events; study-specific withdrawal considerations (e.g., pre-defined discontinuation considerations and volunteers no longer meeting the study requirements); and poor compliance and protocol non-adherence.

In future applications and assessments of the participation burden algorithm, we plan to focus on drop-outs specifically associated with protocol demands and inconveniences resulting in premature termination of participation. New research is underway to apply the participant burden algorithm prospectively in protocol design. Ten pharmaceutical companies are working with Tufts CSDD and ZS to benchmark mean participant burden scores across protocols in their portfolio. Retrospective company participant burden benchmarks will be used to guide prospective protocol authoring.

In closing

The insights from these recent studies and ongoing research are intended to assist drug development teams and protocol authors in leveraging patient voices to retrospectively understand clinical trial performance outcomes and to prospectively inform protocol design decisions.

Looking beyond protocol design, Tufts CSDD and DIA are also embarking on a new working group study in the fall of 2021 to gather robust, objective, and practical data linking organization-specific patient engagement initiatives—before, during, and after participation—to clinical trial and program-level performance and quality outcomes. We encourage sponsor organizations to participate.

Reach Ken Getz via email at kenneth.getz@tufts.edu
