Simplifying the Implementation of Electronic Patient-Reported Outcomes

Article

Applied Clinical Trials

There have been a number of significant scientific and regulatory milestones driving the adoption of electronic patient-reported outcomes (ePRO) in clinical trials since the first screen-based ePRO solution, Minidoc, appeared in 1980. The work by Arthur Stone and colleagues in 2002 provided clear evidence of the so-called “parking lot effect”: patients completing missing entries in their paper diaries while waiting for their clinic appointment [1]. Their study used a paper diary booklet containing a light sensor, which enabled the times of entries written in the diary to be compared with the times the booklet was actually opened and closed. Apparent completion compliance was 90%, but the sensor data showed that the parking-lot effect brought true compliance down to 11%; only 2 of the 40 patients studied completed their entries in the 3-week diary prospectively.

This question of data integrity was addressed in the 2009 FDA PRO Guidance [2] which states: “If a patient diary or some other form of unsupervised data entry is used, we plan to review the clinical trial protocol to determine what steps are taken to ensure that patients make entries according to the clinical trial design and not, for example, just before a clinic visit when their reports will be collected.”

In parallel with the FDA guidance, the ISPOR ePRO Good Research Practices Task Force Report (2009) published recommendations on the evidence needed to support the measurement equivalence of patient-reported outcome measures (PROMs) migrated from paper to electronic forms [3]. These recommendations have been largely accepted by the industry and have been important in driving the acceptance of ePRO.

The question of measurement equivalence is important, and care should be taken when migrating instruments from their original format to another. Because of the psychometric validation work inherent in instrument development, it is important that format changes do not affect how patients respond to individual items within each instrument. The ISPOR task force recommended that, for the minor changes in presentation required when migrating a PROM from paper to a screen-based electronic format (such as instructional text changes, e.g., replacing “tick” or “circle” with “select”, and minor format changes, e.g., presenting a single question per screen), a cognitive interview and usability test (CI/UT) should be performed. These studies, conducted in small groups of patients, use a standardized semi-structured interview led by an experienced qualitative interviewer to explore whether format and modality changes might affect the way patients interpret and respond to the questionnaire items.

In the decade since the publication of the ISPOR task force recommendations, our industry has conducted many studies providing evidence of migration equivalence when adopting ePRO. This growing evidence base offers the opportunity to understand best practice in ePRO implementation, and to revisit the need to routinely conduct such testing when leveraging an electronic platform for PROM collection in clinical trials. The conduct of CI/UT studies adds an extra component to the ePRO implementation timeline, as well as a modest cost, and in some circumstances this can be a barrier to adoption if not accounted for in study start-up planning. Identifying situations where a CI/UT study is not required therefore simplifies ePRO implementation.

Industry colleagues Chad Gwaltney, Ashley Slagle, Ari Gnanasakthy, Willie Muehlhausen and I recently published a summary of this evidence base as a catalyst for discussion of new recommendations around demonstrating migration acceptability [4]. We summarized a number of meta-analyses of quantitative equivalence studies, a synthesis of CI/UT studies, and a recent equivalence study of multiple device sizes (bring your own device, BYOD). We concluded that, in light of this evidence base strongly supporting measurement equivalence between paper and electronic formats, an expert screen review assessing the ePRO implementation against best practice guidelines should be sufficient in many cases, as opposed to the routine conduct of a study-specific CI/UT study. We also concluded that, for well-understood ePRO solutions, usability evidence from populations similar or representative of the target population may often suffice. The full article is available online in Therapeutic Innovation & Regulatory Science, and we anticipate that the evidence summarized will further simplify and help drive the adoption and implementation of ePRO in clinical trials.

 

[1] Stone AA, Shiffman S, Schwartz JE, et al. Patient non-compliance with paper diaries. BMJ 2002;324:1193–1194.

[2] Food and Drug Administration. Guidance for industry: Patient-reported outcome measures: use in medical product development to support labeling claims. 2009. Available from: https://www.fda.gov/downloads/drugs/guidances/ucm193282.pdf

[3] Coons SJ, Gwaltney CJ, Hays RD, et al. Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO Good Research Practices Task Force Report. Value Health 2009;12:419–429.

[4] Byrom B, Gwaltney C, Slagle A, et al. Measurement equivalence of patient-reported outcome measures migrated to electronic formats: a review of evidence and recommendations for clinical trials and bring your own device. Therapeutic Innovation & Regulatory Science (online ahead of print).

 

Bill Byrom, Vice President of Product Strategy and Innovation, CRF Health
