Optimizing study protocols and improving data quality have been notable areas of focus in the biopharmaceutical industry. Between 2002 and 2012, the total number of endpoints in a typical Phase III protocol increased from 7 to 13, the number of eligibility criteria rose from 31 to 50, and the number of investigative sites grew from 124 to 196, yet the total number of patients randomized declined from 729 to 597. Additionally, protocol complexity contributes to a reduction in data quality, which in turn requires enrolling more patients to maintain sufficient statistical confidence.
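The link between data quality and enrollment can be sketched with the standard normal-approximation formula for a two-sample comparison of means. The sigma and delta values below are illustrative assumptions, not figures from the Tufts data; the point is only that halving measurement noise cuts the required enrollment by roughly a factor of four.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(sigma, delta, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a difference of
    `delta` in an endpoint with standard deviation `sigma`, using
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)

# Illustrative: noisier measurements demand far more patients.
print(sample_size_per_arm(sigma=10, delta=5))  # 63 per arm
print(sample_size_per_arm(sigma=5, delta=5))   # 16 per arm
```

Because sample size scales with (sigma/delta) squared, any technique that reduces measurement variance pays off quadratically in enrollment.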
Experts in clinical trial data quality and compliance are encouraging sponsors to reduce protocol complexity and focus on critical study endpoints. The RbM Consortium’s recent guidance document, Ten Burning Questions about Risk Based Study Management, advises study teams to base their protocols on a candid rationale tailored to prescriber, patient, and payer needs and to include a maximum of two critical study endpoints.
Naturally, optimizing study protocols based on these recommendations improves data quality and study outcomes. Additionally, recent publications demonstrate growing adoption of electronic endpoint adjudication (eAdjudication) solutions, which execute endpoint adjudication through quality-controlled platforms to mitigate bias (and data variances) and are geared towards improving clinical trial data quality. This article goes further by discussing high-frequency data collection via mHealth and strategies for enhancing data quality while reducing the number of subjects needed to achieve statistical endpoints in a clinical trial.
Big Data Advances and Predictive Medicine in Academia
Academic research institutions, such as The Echoinformatics and Cardiac Ultrasound Research Program at Mount Sinai’s Icahn School of Medicine, are leveraging big data measurement and analysis tools to predict medical outcomes with high statistical confidence and with fewer patients. Figure 1 illustrates heat maps generated from 29 patients across two cardiac diseases. The team at Mount Sinai utilized big data technologies to aggregate and analyze close to 2 million cardiac data points per patient. These findings led to the discovery of a differentiated disease model with an 89.6% predictability rating. This model supports the notion that applying big data measurement techniques to specific endpoints can allow far fewer subjects to be enrolled while still achieving statistical confidence levels in clinical trials.
Figure 1: Predictive Modeling in Cardiovascular Disease 
Activating Big Data and Mobile Health to Improve Clinical Trial Data Quality
The aforementioned case study suggests that, with proper measurement techniques and big data technology utilization, study teams can take a similar approach in designing clinical trial data collection methods, enrolling far fewer patients and shortening the total duration of study visits while still achieving the statistical confidence required for endpoint analysis.
In combination with reducing the number of study endpoints, study teams can supercharge trial outcomes by increasing the frequency of data collection for those specific endpoints. Study teams should consider leveraging medical measurement devices designed to collect high-frequency measurements from several angles. For example, in cardiology, study teams can obtain high-frequency measurements for a variety of cardiac functions, including heart shape change, blood flow, and muscular constriction, through echo-imaging and other devices in order to evaluate disease progression.
Figure 2: Supercharging Studies with Big Data Measurement
Another opportunity for collecting high-frequency data involves mHealth devices. Figure 3 compares the outcomes of traditional data collection models with those of mHealth data collection models.
Figure 3: Traditional vs. mHealth Data Collection Models
Figure 3 suggests that, unlike traditional models that collect data sporadically at each study visit, the mHealth model continuously collects high-frequency data throughout a clinical trial. This model can significantly enhance the sensitivity of the data, making it possible to achieve statistical confidence more rapidly (reducing total visits) without enrolling more patients. For example, continuous, high-frequency data collection models can capture minute changes in antigen levels when an investigational medicinal product is administered, whereas testing for those changes through individual blood draws would require numerous study visits to observe similar outcomes.
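The statistical intuition here can be illustrated with a small simulation: averaging many readings shrinks the standard error of a patient's estimated endpoint value. The numbers are illustrative and assume independent measurement noise, which is a simplification; real physiological readings are correlated over time.

```python
import random
import statistics

random.seed(42)

def standard_error_of_mean(n_readings, noise_sd=5.0, true_value=100.0):
    """Monte Carlo estimate of the standard error of a patient's mean
    endpoint value when it is averaged over `n_readings` noisy readings."""
    means = []
    for _ in range(2000):
        readings = [random.gauss(true_value, noise_sd) for _ in range(n_readings)]
        means.append(statistics.fmean(readings))
    return statistics.stdev(means)

se_visits = standard_error_of_mean(4)    # four clinic-visit measurements
se_mhealth = standard_error_of_mean(400) # continuous mHealth sampling
print(se_visits, se_mhealth)             # roughly 2.5 vs 0.25
```

Under the independence assumption, the standard error falls as the square root of the number of readings, which is why continuous collection can substitute for additional patients or visits.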
While consumer mobile health devices, such as the Apple Watch and Garmin’s Vivofit, have not yet achieved medical-grade data collection capabilities, these technologies are catching up and will create opportunities for high-quality, high-frequency medical data collection.
In the meantime, study teams can identify and utilize medical devices that specialize in high-frequency data collection, along with big data technologies, to evaluate medical outcomes, all of which enhance data quality.
Difference Between Clinical Data and Big Data Measurements
When evaluating data collection methodologies, it is important to emphasize the differences between traditional clinical trial data measurement techniques and big data measurement technologies. Traditional data collection methodologies involve manual work (i.e., measuring, collecting, and recording data) and the use of numerous measurement devices to collect specific data points associated with study endpoints. In contrast, big data measurement technologies use a few devices that automate high-frequency data collection (e.g., biosensors capable of measuring several vital signs, or an echo-imaging device incorporating five different viewpoints).
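The automated approach can be sketched as a simple aggregation step: a multi-parameter biosensor emits a raw stream of readings, and software collapses that stream into per-patient, per-endpoint summaries for analysis. The patient IDs, endpoint names, and values below are hypothetical.

```python
from collections import defaultdict
from statistics import fmean

# Hypothetical raw stream of (patient_id, endpoint, value) readings,
# as a biosensor measuring several vitals might emit them.
stream = [
    ("P001", "heart_rate", 72), ("P001", "spo2", 98),
    ("P001", "heart_rate", 75), ("P002", "heart_rate", 64),
    ("P001", "spo2", 97),       ("P002", "heart_rate", 66),
]

def summarize(readings):
    """Collapse a high-frequency reading stream into per-patient,
    per-endpoint summary statistics suitable for endpoint analysis."""
    buckets = defaultdict(list)
    for patient, endpoint, value in readings:
        buckets[(patient, endpoint)].append(value)
    return {
        key: {"n": len(vals), "mean": fmean(vals),
              "min": min(vals), "max": max(vals)}
        for key, vals in buckets.items()
    }

summary = summarize(stream)
print(summary[("P001", "heart_rate")])
# {'n': 2, 'mean': 73.5, 'min': 72, 'max': 75}
```

The contrast with the traditional model is that no one transcribes individual readings by hand; the density of the stream, not the number of devices or site visits, drives the quality of the summary.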
One Quality Angle on mHealth and Big Data Use
Dr. Peter Schiemann, Partner at Widler & Schiemann, Ltd., elaborates on how regulators might respond to the aforementioned approach to clinical trials and offers a few insights on protocol optimization:
We believe that regulators would accept this approach if you involve them from the beginning and explain that you plan to achieve the same statistical significance with fewer patients by using mHealth and collecting higher-frequency data. In the end, it is the statistics that count. It is important to note that if you take this approach, you need to submit your statistical analysis plan with your IND application. It is also important to emphasize that collecting high-frequency data is not the same as collecting numerous data points in complex protocols with too many endpoints.
In order to create a study with optimal endpoints, it is of the utmost importance to de-complicate a protocol by asking the right questions. An effective way to create a protocol with the best value proposition is to ask a simple question: “What data do I need to collect to answer the one and only question this study is supposed to answer?” Many times, trials are overloaded with unnecessary and unfocused data collection; we commonly hear study teams say, “We might need this information in the future.” This approach is not helpful, since unnecessary and unfocused data collection adds more and more complexity to the trial and burden on the sites, making them prone to mistakes, and pushes study budgets to the limit.
However, once you’ve narrowed down your study’s focus to a maximum of two endpoints, you can then consider leveraging mHealth, and high frequency data collection technologies/devices to achieve your study’s data collection and statistical analysis directives.
Sources: Kenneth Getz, Tufts; Medidata