The Importance of Simple Clinical Trial Diaries


Diaries are often an essential component of a clinical trial, not only for gathering patient-reported outcomes (PROs) but in some cases even for determining a study participant’s eligibility. If that diary becomes complex, however, it will impact everyone involved, from the sponsor and eCOA designer to the study participant and site. By understanding the ways complex diaries are problematic, sponsors can better assess whether those complexities are necessary.

Sponsors who overcomplicate their electronic diary can expect delayed timelines due to the extensive design, build, and test time required from the eCOA designer. Diaries that collect excessive data can overwhelm and confuse patients, adding training and support burdens for site staff and driving higher dropout rates. Complex diaries can also make it more difficult for sponsors to analyze the data.

This is not to say that a complex diary is never warranted. Seizure studies, for example, can be quite complex because of the need to capture when seizures occur, how they are classified, and any rescue medication used. However, many studies do not require this level of detail. The key is to recognize when a diary is more complex than needed, so that adjustments can be made that lead to a simple, yet effective, version.

Sponsors can simplify diaries by taking a closer look at the three common sources of complexity: assessment complexities, scheduling complexities, and study-level factors.

Assessment complexities

There are aspects of the diary assessment itself that sponsors should avoid. For one, no more than one question should be displayed on the screen at any given time on any device, regardless of screen size. Not only does this single-item format keep participants focused on specific questions, it is also considered a best practice by the ePRO Consortium and the Clinical Outcomes team at Oxford University Innovation, making it an industry-wide approach.

In addition to limiting the number of questions displayed on the screen, the instructions should be as concise as possible. It is highly unlikely that study participants will read lengthy directions every day, so it is recommended to be succinct and to bold key information. This helps patients absorb the most important points and submit quality responses. Irrelevant instructions, such as prompting patients to ‘select one answer’ when the system is configured to permit only one answer, should not be used; nor should instructions that are already obvious from on-screen visuals. We also suggest reserving the use of ‘please’ for when additional, out-of-the-ordinary actions, such as contacting the site, are required. Sponsors who overuse terms like ‘please’ risk the word losing its meaning.

Keeping text to a minimum applies to the patient’s responses, too. Free text should not be used, since patients may submit personal or confidential information. Furthermore, free text demands more time from participants to complete and from study teams to review and clean prior to analysis, and the analysis of free-text responses is generally limited.

Excessive branching caused by a multi-select design forces study participants to consider their symptoms twice: first to report whether each symptom occurred, and then again to rate its severity.

This format tends to result in questionable data, especially if it is not intuitive to patients that they can initially select more than one answer. Even worse, this design gives the participant the opportunity to report no issues simply to bypass the follow-up questions. It is also more difficult to program and test. Instead, the recommendation is to present each possible symptom item individually.
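
For illustration, here is a minimal sketch, in Python, of the two question-flow designs; the symptom names, severity scale, and prompt wording are hypothetical and are not drawn from any specific instrument or eCOA platform.

```python
# Hypothetical symptom list and severity scale, for illustration only.
SYMPTOMS = ["headache", "nausea", "fatigue"]
SEVERITIES = ["mild", "moderate", "severe"]

def branched_flow(ask):
    """Multi-select design: one screen to pick symptoms, then a severity
    follow-up for each selection. A participant who selects nothing
    skips every follow-up question."""
    selected = [s for s in SYMPTOMS
                if ask(f"Did you experience {s} today? (y/n) ") == "y"]
    return {s: ask(f"How severe was your {s}? {SEVERITIES} ") for s in selected}

def flat_flow(ask):
    """Recommended design: every symptom is presented individually, so
    each item is considered exactly once and none can be bypassed."""
    return {s: ask(f"Rate your {s} today (none/{'/'.join(SEVERITIES)}): ")
            for s in SYMPTOMS}

if __name__ == "__main__":
    # Use the built-in input() so the sketch can be run interactively.
    print(flat_flow(input))
```

The flat flow asks every item exactly once, which is also the simpler of the two designs to program and test.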

Scheduling complexities

For some studies, assessments must be completed for a specified number of days prior to a clinic visit. While this may be an unavoidable complexity, the biggest issue lies with the method used to activate the data collection. For instance, triggering the diary’s availability based on scheduled visit dates is problematic: these dates often change, and because sites reschedule visits in a separate system, they can forget to update the diary system. The result is that diaries are unavailable when patients need them, and sponsors do not collect the data as the protocol dictates.

It is easier for everyone involved if sponsors pre-specify incremental dates throughout the study. If sponsors must have the data prior to a visit, or are concerned that at-home and on-site data will overlap, the site could provide an access code to participants when reminding them of upcoming visits. Alternatively, sponsors can choose to make the diaries always available; however, this leads to considerably more data being collected than the sponsor may find acceptable.
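
As a rough sketch of the pre-specified approach, the Python example below derives diary availability windows from a fixed enrollment date rather than from scheduled visit dates; the four-week spacing and seven-day window are assumptions made for illustration, not protocol requirements.

```python
from datetime import date, timedelta

def availability_windows(enrollment: date, n_windows: int = 3,
                         spacing_weeks: int = 4, window_days: int = 7):
    """Return (open, close) date pairs for diary availability.

    Windows are anchored to the enrollment date, so rescheduled clinic
    visits never require anyone to update the diary system.
    """
    windows = []
    for i in range(1, n_windows + 1):
        close = enrollment + timedelta(weeks=i * spacing_weeks)
        opens = close - timedelta(days=window_days - 1)
        windows.append((opens, close))
    return windows

if __name__ == "__main__":
    for opens, close in availability_windows(date(2024, 3, 1)):
        print(f"Diary available {opens} through {close}")
```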

When considering the scheduling of diaries, sponsors may struggle to decide whether the diary should be completed once a day (a daily diary) or episodically. Episodic diaries allow multiple events to be recorded on a single day; in urology studies, for example, this type of diary lets patients record each urination event as it occurs. This reduces the burden on study participants, as they do not have to remember each symptom occurrence at the end of the day. By permitting the data to be submitted at the time of the event, sponsors can improve data quality.
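
A minimal sketch of the data model behind this distinction, using hypothetical field names: an episodic diary stores one timestamped record per event, submitted as it happens, rather than a single end-of-day summary.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EpisodicEntry:
    """One record per event, captured at the time it occurs."""
    participant_id: str
    event_type: str          # e.g., "urination" in a urology study
    occurred_at: datetime    # timestamp supplied when the event happens

# Several entries may be submitted on the same calendar day.
entries = [
    EpisodicEntry("P-001", "urination", datetime(2024, 3, 1, 7, 15)),
    EpisodicEntry("P-001", "urination", datetime(2024, 3, 1, 13, 40)),
]
print(len(entries), "events recorded on", entries[0].occurred_at.date())
```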

Despite the misconception that daily tasks burden participants, a daily diary is recommended for most studies: completing an assessment every day at the same time fosters a habit in study participants and improves diary compliance. A daily diary, even if it consists of only one question, will also keep participants engaged. If weekly or monthly diaries are required by the protocol, sponsors can add these questions to the existing daily assessment or automatically launch the additional questionnaires once the daily assessment is completed.
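
One way such logic might look is sketched below: the daily items always appear, and a hypothetical weekly question set is appended on a designated weekday; the question text and trigger day are assumptions for illustration only.

```python
from datetime import date

# Hypothetical question sets, for illustration only.
DAILY_QUESTIONS = ["Rate your overall symptom severity today (0-10)."]
WEEKLY_QUESTIONS = ["How many days this week did symptoms limit your activities?"]

def questions_for(today: date, weekly_weekday: int = 6) -> list[str]:
    """Build today's assessment: the daily items always appear, and the
    weekly items are appended on the designated weekday (6 = Sunday)."""
    items = list(DAILY_QUESTIONS)
    if today.weekday() == weekly_weekday:
        items += WEEKLY_QUESTIONS
    return items

if __name__ == "__main__":
    print(questions_for(date(2024, 3, 3)))   # a Sunday: daily + weekly items
    print(questions_for(date(2024, 3, 4)))   # a Monday: daily items only
```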

Study-level complexities

Finally, a sponsor should steer clear of study-level complexities, including retrospective entry and the collection of adverse events (AEs) and concomitant medications (con-meds) within the eCOA solution.

Sponsors who permit retrospective entries do so because they fear they will miss important data. Ironically, retrospective entry is more likely to lead to missing data and adherence issues, because a data-entry habit is never established among participants. Furthermore, participants commonly become confused about the retrospective entry’s time frame, since many sponsors customize only the initial screen and do not reference the day in question on subsequent questions. In addition, by allowing retrospective entry, sponsors introduce recall bias, which occurs when patients are influenced by their current state at the time of recollection and respond inaccurately. The FDA recognizes this issue and states in its PRO guidance that “items with short recall periods or items that ask patients to describe their current or recent state are usually preferable.”1

Unlike retrospective entry, the collection of AEs and concomitant medications is required by every protocol. A diary, however, is not the right tool for collecting this data. For one, AEs require trained clinicians for evaluation and reporting, and the eCOA companies that deliver diaries have neither the clinicians nor the mechanisms to promptly report events as the FDA requires. Because of this, eCOA data collection technologies, including diaries, should not be considered the mechanism for documenting AEs.

Since a patient’s concomitant medications typically do not change during the study, capturing this data with a standard electronic data capture (EDC) system at study visits can prevent unnecessary burden on patients and sites as well as reduce diary complexity. Site staff can periodically confirm with patients at on-site visits that the concomitant medications have not changed.

Sponsors who can recognize when diaries are overly complicated, and who understand the impact of that complexity, can ultimately make better informed decisions. Eliminating the common sources of complexity (assessment, scheduling, and study-level factors) will improve the clinical trial experience for everyone involved. A focus on the most important data, the primary endpoints, will enhance data quality and reliability.

Jill Platko is a Senior Scientific Advisor for Signant Health.

  1. Center for Drug Evaluation and Research. “Patient-Reported Outcome Measures: Use in Medical Product Development.” U.S. Food and Drug Administration, FDA, Dec. 2009, www.fda.gov/regulatory-information/search-fda-guidance-documents/patient-reported-outcome-measures-use-medical-product-development-support-labeling-claims.