Bring Your Own Device for Trial Outcome Assessment

Jun 10, 2016
Volume 25, Issue 6

Using patients’ own mobile devices to collect self-reported outcomes data (referred to as electronic patient-reported outcome [ePRO] or electronic clinical outcome assessment [eCOA]) is an industry hot topic. Limited use of “Bring Your Own Device,” or BYOD, in regulatory studies to date is mainly due to industry concerns spanning two areas. The first is a concern that different device sizes and operating systems might affect the measurement properties of a PRO instrument. When employing eCOA on a single device type, the measurement properties can be assessed fully by usability testing, cognitive debrief, or quantitative equivalence studies. In a BYOD setting, performing validation studies to cover all possible device types and sizes would be impossible. The second area is concern around the technical and practical aspects of using a patient’s own hardware. Concerns, for example, include the effect of a subject changing their device mid-study, upgrading their operating system, or having insufficient storage space available to store eCOA data due to other apps, data, pictures, and music.

Between August and October 2015, we conducted a research survey to identify and assess the perceived barriers and challenges with the use of BYOD for eCOA in clinical trials. Our aim is to provide information helpful in devising future strategies for BYOD adoption, and to help identify popular perceived challenges that perhaps are more myth than reality. In preparing the survey questions, we supplemented our own knowledge of commonly considered challenges and issues with information gathered during telephone interviews of five respected industry eCOA experts.  


Survey respondents

Ninety-eight individuals accessed our survey, which was promoted primarily through LinkedIn connections and groups. Of these, 19 answered only the first question, a mandatory item recording employment type, and none of the BYOD-specific questions. We excluded these respondents, leaving a sample of 79, and assume that those answering only the initial question did so to proceed but then realized they would be unable or unwilling to answer the technical questions that followed. 

Of the 79 respondents, 14 (18%) were employed at biopharmaceutical companies, 18 (23%) at contract research organizations (CROs), and 27 (34%) at eCOA vendors (see Figure 1). For confidentiality reasons we do not report the individual organizations represented, but note that each category spanned small to large companies and included a number of household names. The four respondents in the “Other” category comprised a patient advocate employed by a number of charities, a psychometrics expert, and two individuals from research institutions.

In all cases, responses collected represented personal views and not necessarily those of the respondents’ employing organizations.


Attitudes toward equivalence requirements

While the challenge of proving equivalence across multiple device types seems to dominate the public discussion of BYOD, our respondents appeared considerably less deterred by it:

  • Overall, 44% agreed or strongly agreed that equivalence should be demonstrated on all possible devices used in a BYOD study (see Figure 2). 
  • Only 30% of respondents disagreed or strongly disagreed that demonstration of equivalence on a single device was acceptable if access using devices of a smaller screen size or resolution could be prevented. 
  • In addition, 68% of respondents agreed or strongly agreed that showing equivalence only on a single device would be acceptable if the strategy was agreed a priori with the regulatory bodies. 
  • Few saw a distinction between primary and secondary data, with only 22% agreeing or strongly agreeing that demonstrating equivalence on a single device was necessary only if the data represented secondary endpoints. 
  • Seventeen percent of the respondents disagreed or strongly disagreed that no further equivalence testing would be needed if a similar equivalence study had already been conducted and reported.

Few respondents disagreed that scale author agreement would be necessary if using an existing instrument in a BYOD setting—only 17% and 5% disagreeing and strongly disagreeing, respectively (see Figure 2b). The majority of respondents agreed or strongly agreed that ensuring a minimum screen size would be sufficient for valid implementation of a visual analogue scale (41% and 27%, respectively), and that differences in font sizes between devices were unimportant (44% and 26%, respectively).

There was some evidence of trends indicating differing strength of agreement by employment type, although the sample was not large enough to assess this formally. Compared with CRO and eCOA vendor respondents, biopharmaceutical company respondents generally saw a greater need for equivalence demonstration across all device types (72% agreeing or strongly agreeing, versus 48% of CRO respondents and 34% of eCOA vendor respondents). That said, these sponsor respondents were supportive of the view that equivalence demonstration on a single device was acceptable if usage could be limited to devices of at least that screen resolution and size (79% of sponsor respondents agreed or strongly agreed, compared with 67% and 35% for eCOA vendors and CROs, respectively).


Concern over perceived BYOD practical or technical challenges or issues

Of the 21 perceived practical and technical challenges associated with BYOD use for eCOA that we considered, few appeared to be of significant concern to the respondents in this survey (see Figure 3). Over 75% of respondents indicated they were “not at all concerned” or “a little concerned” about the following perceived challenges:

  • The subject could delete the app during the study.
  • The subject may fail to download an updated version.
  • The subject may upgrade their device operating system.
  • The subject may not be permitted to use their device at work.
  • There may be insufficient free storage capacity on the device.
  • The subject may become distracted by other things on the device during ePRO completion.
  • Some personally identifiable data may need to be collected.
  • Data on the device could be accessed by a hacker.
  • It may be complicated to compensate subjects due to differences in individual data plans.
  • Patient setup and training may be more complicated for site staff.
  • It may be more difficult to identify that the app is working correctly on the subject’s own device.
  • Logging into the app in addition to their device may be inconvenient for the subject.

Fifty-three percent of respondents were “not at all concerned” or “a little concerned” that the subject may be able to turn off in-app notifications, such as diary reminders, using their phone settings. 

There was little concern about subjects changing their phone during a study. Seventy-four percent of respondents were “not at all concerned” or “a little concerned” about subjects changing device mid-study, 67% that subjects may discontinue their contract, and 71% that subjects may lose their device during the study.

Respondents were generally not greatly concerned about perceived security issues with using subjects’ own devices. Sixty-seven percent were “not at all concerned” or “a little concerned” that eCOA data could be accessed by other apps on the subject’s device, and 83% that data could be accessed by a hacker.

Almost 20% of respondents were very concerned or extremely concerned that subjects without a suitable device would be ineligible to participate in the study. Thirty-two percent of respondents were very concerned or extremely concerned that a subject’s device might fail to pair with a Bluetooth peripheral if one were used in the study, while 59% were “not at all concerned” or “a little concerned.”

There was moderate concern around training and support of study participants. Twenty-seven percent were very concerned or extremely concerned about the potential training burden on sites in a BYOD study, with 33% of respondents expressing the same degree of concern that site staff may be unable to troubleshoot more technical problems associated with using an eCOA app over multiple device types.

Again, while the numbers per group prohibited formal analysis, we noted some possible trends that may indicate differing strength of concern over certain perceived issues based on the employment type of the respondents. In comparison to CROs and eCOA vendors, biopharmaceutical company respondents generally appeared more concerned about subjects deleting their ePRO app during the study, subjects discontinuing their device contract during the study, subjects losing their device, data being accessible to other apps on the subject’s device or being accessed by a hacker, and the subject’s device being unable to be paired with a provided Bluetooth device.



When it comes to demonstrating measurement equivalence across all devices in BYOD settings, over half of the respondents in our survey did not agree or strongly agree that testing was required on all possible devices, and over half agreed or strongly agreed that demonstrating equivalence on a single device was acceptable if all subjects could be guaranteed to use a device of at least that minimum screen resolution and size. Would that strength of feeling translate into the use of BYOD to deliver eCOA instruments in a regulatory study today? Perhaps, though it still seems unlikely. However, as evidence accumulates that electronic devices of all shapes and sizes do not adversely affect the measurement properties of eCOA instruments across different study contexts and patient populations, this position may relax.

There are already positive signals that measurement equivalence across modalities is less problematic than previously thought, especially if ePRO design best practices are employed (see the C-Path Institute’s ePRO Consortium white paper, for example).1 One of these signals is the growing evidence of paper and electronic equivalence. One might argue that the magnitude of change is far greater from paper to an electronic device than from device to device. A recent meta-analysis by Muehlhausen and colleagues provides strong evidence of the equivalence of paper and electronic administration across multiple instruments, patient populations, and electronic media.2 This meta-analysis also included two studies in which the equivalence of two electronic formats was assessed. If patients respond consistently to PRO instruments, whether in paper or electronic form, then it seems a reasonable inference that the subtle changes across different mobile phones should not present an equivalence challenge.

This isn’t the first meta-analysis we’ve seen exploring this topic. Gwaltney and colleagues published a meta-analysis of 46 equivalence studies conducted up to 2006.3 This analysis reported a pooled correlation of paper to electronic scores of 0.90 with a 95% confidence interval from 0.87 to 0.92. This is above the correlation threshold of 0.75 or 0.8 considered to represent acceptable reliability.

Muehlhausen et al.’s meta-analysis considered new equivalence studies published from 2007 to 2013. Significantly, these studies were reported after the publication of the ISPOR ePRO Task Force recommendations on the design and analysis of equivalence studies, and many, therefore, adhered to the task force recommendations. This new meta-analysis included 72 equivalence studies from 23 different patient groups and covered a wide range of electronic modalities, including PC, tablet, handheld device/smartphone, and interactive voice response system (IVRS). Their conclusions were in line with those of Gwaltney and colleagues: a pooled correlation of 0.875 (95% CI 0.867-0.884). 
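The pooled correlations reported by these meta-analyses are typically obtained by Fisher z-transforming each study’s correlation, averaging with inverse-variance weights, and back-transforming. A minimal sketch of that arithmetic is below; the study correlations and sample sizes are hypothetical, not taken from either cited meta-analysis.

```python
import math

# Hypothetical study results as (paper-electronic correlation, sample size).
# Illustrative values only, not data from the cited meta-analyses.
studies = [(0.92, 60), (0.88, 120), (0.85, 45), (0.91, 200)]

# Fisher z-transform each correlation; the variance of z is 1/(n - 3),
# so inverse-variance weighting uses w = n - 3.
zs = [math.atanh(r) for r, n in studies]
weights = [n - 3 for _, n in studies]

z_pooled = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
se = 1 / math.sqrt(sum(weights))

# Back-transform the pooled estimate and its 95% CI to the correlation scale.
r_pooled = math.tanh(z_pooled)
ci_low = math.tanh(z_pooled - 1.96 * se)
ci_high = math.tanh(z_pooled + 1.96 * se)

print(f"pooled r = {r_pooled:.3f}, 95% CI ({ci_low:.3f}, {ci_high:.3f})")
```

This fixed-effect sketch omits the between-study heterogeneity adjustments a full random-effects meta-analysis would apply, but it shows why a pooled correlation arrives with a tight confidence interval when the contributing samples are large.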

These two important studies provide extensive evidence that paper- and electronically administered PROs are equivalent across many different PRO instruments, patient populations, and electronic modes of administration. While none of these studies was conducted in a BYOD setting, device type does not appear to affect equivalence to paper, so we can take encouragement that device-to-device differences are similarly unlikely to affect the way patients respond to ePRO instruments, provided the questionnaire design follows the ePRO Consortium white paper guidelines.1

Should measurement equivalence concerns be assigned to the category of myth? We argue that the body of evidence collected to date strongly suggests this. The above pieces of work, and others actively being conducted, provide a positive signal on the way to greater acceptance of BYOD as a valid approach that protects eCOA instruments’ measurement properties when applied appropriately.

As we have seen in our survey, however, perceived issues and challenges with BYOD for eCOA are not confined to considerations of measurement equivalence. There are perceived practical and technical concerns with the use of a subject’s own mobile device to collect submission data.

Some of these concerns are tangible situations that could arise in a clinical trial. Subjects, with full control over the contents and operation of their mobile device, could indeed delete the eCOA app, block notifications from appearing on their home screen outside the app, upgrade their operating system, or change their device or mobile contract during a trial. Can these risks be mitigated, and what is their potential impact on the measurement of the PRO? Certainly some can be limited through training, and system-based monitoring of the app can surface issues for patient follow-up by sites. This might include regular receipt of information on the device operating system and the version of the app in use, to identify when changes have occurred, and flagging when push notification tokens indicate that notifications have been disabled on the patient’s device. 
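A minimal sketch of such monitoring logic, assuming the app sends a periodic check-in payload to the study system; the `DeviceCheckIn` structure, field names, and values here are hypothetical, not any vendor’s actual schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DeviceCheckIn:
    """A periodic status report from the (hypothetical) eCOA app."""
    subject_id: str
    os_version: str
    app_version: str
    push_token: Optional[str]  # None when notifications appear disabled

def flag_changes(previous: DeviceCheckIn, current: DeviceCheckIn) -> List[str]:
    """Compare consecutive check-ins and return flags for site follow-up."""
    flags = []
    if current.os_version != previous.os_version:
        flags.append(f"OS changed: {previous.os_version} -> {current.os_version}")
    if current.app_version != previous.app_version:
        flags.append(f"App updated: {previous.app_version} -> {current.app_version}")
    if previous.push_token and not current.push_token:
        flags.append("Push notifications appear to have been disabled")
    return flags

# Example: an OS upgrade plus a disabled push token yields two flags.
before = DeviceCheckIn("S-001", "iOS 9.2", "1.4", "tok-abc")
after = DeviceCheckIn("S-001", "iOS 9.3", "1.4", None)
flags = flag_changes(before, after)
```

In practice such flags would feed a site-facing dashboard or query workflow rather than block the subject, since the goal is follow-up and retraining, not enforcement.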

However, the possibility of a user upgrading his or her device or turning off notifications in a BYOD setting will be hard to eliminate completely. Do these risks outweigh the potential advantages of BYOD? We argue that they do not. While the cost of provisioning mobile devices in current trials is high, we do not believe that BYOD will necessarily result in significant cost savings: some provisioning may still be needed to include patients without compatible hardware, and provisioning savings may be offset by higher support costs when patients run study eCOA solutions on their own devices. 

It is hoped, however, that BYOD brings with it greater patient convenience and centricity, enabling subjects to maintain their symptom diaries and instrument entries on the smartphone they already carry and refer to over the course of each day. With BYOD, subjects use a device they are familiar with and know how to use, and they are not required to carry, and keep charged, a separate device solely for their eCOA entries. Previous patient preference studies have shown that the majority of patients prefer to have a single device, which can only benefit PRO convenience, completion, and compliance.

Some perceived issues and challenges, however, can likely be dismissed as myth, at least in the sense that they do not apply specifically to BYOD but equally to other approaches to PRO collection. Subjects can lose a provisioned device or paper diary just as easily as their own mobile device, and may be just as unable to use a study device as a personal device in a working environment. In an unsupervised setting, subjects may be equally distracted by their own smartphone while completing a paper diary or a diary on a dedicated study device; and the same requirements for the collection of personally identifiable data apply to both BYOD and provisioned-device studies.

Other issues fall into the category of surmountable technical considerations, which should be addressed by good mobile app design. Security, for example, while an important concern, has not limited the growth of online and mobile banking services. There is no reason we cannot learn from the application of technology in other industries to gain confidence and develop solutions appropriate for healthcare and clinical trials. The banking industry has the benefit of large investment; leveraging its R&D may be less expensive than building from scratch, and online banking has already shaped user behavior and acceptance of online solutions for managing sensitive information.



We believe that the time for BYOD is upon us. With that we see greater potential to apply eCOA to study protocols where paper data collection remains quite popular despite its well-known limitations. As an industry, we should continue to investigate the use of BYOD and share our findings, positive and negative, so that as a collective we can provide sufficient evidence to turn the tide. 

FDA recently requested public input from a broad group of stakeholders on the scope and direction of the use of technologies and innovative methods in the conduct of clinical investigations.4 This docket includes a specific question for comment: “What are the challenges presented when data are collected using the Bring Your Own Device (BYOD) model?” This is a positive signal from the regulators that we welcome, and one that can only help to ultimately provide a better understanding of FDA’s position and any gaps in the evidence required to make BYOD a fully endorsed approach.


Bill Byrom is Senior Director, Product Innovation, ICON Clinical Research Ltd.; Jeff Lee is CEO, mProve Health; Kara Dennis is VP, Chief of Staff, Medidata Solutions; Matthew Noble is Senior Director, Product Management, Medidata Solutions; Marie McCarthy is Director of Product Innovation, ICON Clinical Research; Willie Muehlhausen is Vice President, Head of Innovation, ICON Clinical Research



1. C-Path Institute ePRO Consortium (2014). Best Practices for Electronic Implementation of Patient-Reported Outcome Response Scale Options.

2. Muehlhausen W. et al. (2015). Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013. Health and Quality of Life Outcomes; 13: 167-187.

3. Gwaltney C. et al. (2008). Equivalence of Electronic and Paper-and-Pencil Administration of Patient-Reported Outcome Measures: A Meta-Analytic Review. Value in Health; 11: 322-333. 

4. Federal Register (2015). Using Technologies and Innovative Methods to Conduct Food and Drug Administration-Regulated Clinical Investigations of Investigational Drugs; Establishment of a Public Docket.
