© 2023 MJH Life Sciences™ and Applied Clinical Trials Online. All rights reserved.
Applied Clinical Trials
Reductions in unused data will improve study performance, lower costs, and address ethical concerns.
Sponsor organizations of late have devoted considerable attention to two key clinical data-related areas: handling the challenge of missing data in clinical trials and isolating the small amount of clinical data that generates the majority of queries and data cleaning. Given the industry's focus on optimizing drug development budgets, one very compelling clinical data-related area that has not received much attention is the amount of unused data in trials.
Kenneth A. Getz
In discussions that I've had with colleagues at meetings and based on anecdotal reports, the incidence of unused clinical data appears substantially higher than expected. Sponsors estimate that 15% to as much as 30% of all clinical data collected is not used in NDA submissions. For an average drug, streamlining the collection of data to limit and even remove the incidence of unused data represents approximately $20 to $35 million in direct drug development cost savings.
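The arithmetic behind the savings estimate can be sketched with round, purely illustrative numbers (the column itself does not break out the per-program data-related cost; the $120M figure below is a hypothetical assumption chosen so the result lands near the quoted range):

```python
# Back-of-the-envelope sketch: savings from eliminating unused data scale
# with the unused fraction of the data-related direct cost of a program.
# The $120M data-related cost below is a hypothetical assumption, not a
# figure from the column.

def unused_data_savings(data_related_cost: float, unused_fraction: float) -> float:
    """Estimated direct savings from removing unused data elements."""
    return data_related_cost * unused_fraction

low = unused_data_savings(120e6, 0.15)   # 15% of data unused
high = unused_data_savings(120e6, 0.30)  # 30% of data unused
print(f"Estimated savings: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
```

Under that assumed cost base, the 15%–30% unused-data range yields savings on the order of the $20 to $35 million cited above.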
Although a highly promising opportunity, it is clearly easier said than done. At this time, few sponsors have identified the problem; even fewer have been able to isolate data elements ripe for removal.
Companies have been reluctant to tamper with protocol templates and to hold off on collecting any data that has even an outside chance of being requested by a regulatory agency or of generating insights for an investigational new drug. Still, unused data poses important operational and ethical issues that must be considered.
Regrettably, little has been written about why clinical trial data is collected but not used. My colleague Michael Kahn and I have identified the primary reasons based on our experience working closely with clinical research professionals: to glean scientific insight, to conduct data comparisons, out of habit, and for job security.
Clinical research scientists seek to collect data in order to obtain a complete drug profile and to glean insights about a drug. A rich and robust source of data provides context to explain unusual or unexpected outcomes. Clinical research scientists reason that the more data collected, the more fully and optimally the results can be interpreted. And research directors and scientists look to collect more data in order to derive insights that may result in a deeper understanding about, and new directions for, a product (e.g., targeted subpopulations, new indication areas). Context-setting data variables provide clinical validation and explanation for unusual results and help clinical research scientists keep the door open for identifying product opportunities.
Preclinical and early-stage clinical trials may provide some insight into known or suspected comparative relationships. But for statistical reasons, biostatisticians look to collect more data in anticipation of potential new interactions that may be observed.
Biostatisticians structure their statistical models to measure primary and secondary outcomes. They look to collect additional data elements to explain or adjust their original statistical models due to confounding variables or interactions that may be uncovered during the course of the study. Data comparisons aid in the discovery of new relationships that were not previously known or understood.
Like most development tasks and activities, standard operating procedures, templates, and internal habits guide protocol design planning, review, and approval. Some have argued that over time protocol design templates have expanded, requiring additional data elements without revising or streamlining existing ones. Others note that out of habit, research professionals like to tack on additional studies and even "pet projects" that may not be central to the original protocol. Clinical scientists view senior management delays in reviewing and approving protocol designs as invitations to collect more data.
Clinical research professionals recount numerous stories of former colleagues, most no longer employed by a sponsor, who failed to collect critical data that later studies showed to be relevant, or that a regulatory agency requested late in the development process, resulting in a costly delay or even a failed submission.
Job security is no doubt a major motivation behind research professionals' desire to collect more data. And clinical research teams typically believe that collecting an additional data point or two adds only a small cost relative to the total development cost of a program. Companies are unable to predict in advance which specific data will not be needed. As a result, they collect more data as an insurance policy.
In previous columns, I have discussed Tufts CSDD research documenting rising protocol complexity during the past decade and its negative impact on clinical trial performance. Protocol complexity is associated with a higher incidence of costly protocol amendments and a dramatic increase in the number of case report form pages from an average of 55 for protocols executed in the late 1990s and early 2000s to 180 pages for those executed in 2008 and 2009. Our research also shows that more complex protocols hurt patient willingness to participate and significantly reduce volunteer retention rates.
Numerous factors have contributed to growing protocol complexity and the increase in the amount of data collected. For example, the focus of investigational treatments has shifted to chronic diseases, where endpoints are more difficult and time-consuming to measure, and where more procedures may be required to demonstrate efficacy and quality of life benefit. As more is learned about disease mechanisms and surrogate markers, more needs to be done to test a drug and demonstrate its safety and efficacy.
Growing regulatory pressure to demonstrate safety has created more procedures in earlier phase protocols. And competition between drug developers has also intensified, leading sponsors to gather more data to differentiate products within markets.
A high reported incidence of unused data raises a number of critical ethical issues: Are study volunteers being exposed to unnecessary risk given that some data will not be used? Is volunteer willingness to participate in clinical trials predicated on inaccurate information about procedures that will be performed and about the expected clinical trial results? Are ethical review committees making decisions about the safety of clinical trials based on inaccurate information about data that will be collected and, ultimately, about the study risks and benefits?
Tufts CSDD research results and those published in scholarly research journals show that patient willingness to participate in trials is influenced strongly by protocol design. There are direct risks to patients associated with the collection of data from invasive or potentially harmful and dangerous procedures. And there are risks in data collected via psychiatric and lifestyle questionnaires, for example, that might jeopardize employment, insurance coverage eligibility, and social status.
Human subject protection professionals are charged with assessing participant risks versus participant and societal benefits. If unused clinical trial data poses risk that has no counterbalancing benefits, then ethical review committees have an obligation to detect and remove that risk or to reject that protocol. Yet meeting this mandate is often beyond the technical capabilities of most ethical review boards.
As a first critical step, sponsors need to internally assess the severity of unused data in their NDA submissions. Educating protocol designers is another critical step that needs to be taken. Requiring research professionals to justify the data that they want by tying it to incremental budget dollars that must be spent to obtain that data may go a long way toward streamlining protocol design, containing costs, and reducing demands on sites and patients.
Early and faster protocol design review by senior management, operating staff, investigative site, and IRB personnel may also help minimize unnecessary data. Early input from regulatory agencies would also help. However, sponsors report that although regulatory agencies are often clear about what data they don't want, they are much less clear about explaining what data they do want.
Data elements not directly tied to the study objective, the statistical plan, or specific exploratory analyses should be removed from the study. Several technology solutions offer a methodical and robust approach to identifying unlinked, and therefore potentially unused, data elements. These solutions include CDISC's BRIDG and the Trial Bank models, both available via the Internet.
It may be impossible to avoid having any unused data in trials. Still, substantial and potentially avoidable financial and operating costs, and ethical challenges posed by the high incidence of unused data, compel organizations to find solutions to this problem.
Kenneth A. Getz MBA, is a Senior Research Fellow at the Tufts CSDD and Chairman of CISCRP, both in Boston, MA, email: firstname.lastname@example.org