CER funding boosts agency informatics initiatives that promise to modernize research and review processes
To realize the potential of more personalized medicine that can target the patients most likely to benefit from a therapy, and identify those unlikely to respond, researchers and regulators need better information on subpopulation responses. Such data may come from the huge volume of clinical data submitted to the Food and Drug Administration, but much of it is hidden in paper records or in diverse electronic data formats that are hard to access and impossible to compare.
FDA has been struggling for decades to shift from an inefficient paper-based regulatory process to an all-electronic environment that can better assess study protocols and market applications, eventually linking up with a national electronic health records system. Many of these informatics projects now fall under the umbrella of the Regulatory Science Initiative championed by commissioner Margaret Hamburg. FDA's Strategic Priorities plan for 2011 to 2015 highlights scientific computing as key to advancing patient-centered outcomes research to better understand which interventions are most effective for specific patients.
FDA desperately needs a modern scientific computing system to deal with an overwhelming volume of data from a multitude of sources, advised Vicki Seyfert-Margolis, FDA Senior Advisor for Regulatory Science and Innovation, at the FDA/DIA Statistics Forum in April. Compounding the problem is an "informatics Tower of Babel," where each research community speaks its own "scientific dialect," making it impossible to do cross-study reviews. Electronic data capture, she emphasized, is vital for integrating pre-marketing studies with post-marketing safety data.
As FDA moves "into a world that is data rich," observed Robert O'Neill, Director of the Office of Biostatistics at the Center for Drug Evaluation and Research (CDER), the agency needs rules for how to look at data and consensus on how to deal with missing data, with cardiovascular risks, and with ever more meta-analyses that take an enormous amount of time to assess.
While sponsors now transmit about 70 percent of clinical data electronically to FDA, 30 percent of case report forms are still filed on paper, pointed out ShaAvhree Buckman, Director of CDER's Office of Translational Sciences, at the March DIA/FDA Computational Science meeting. In addition, more than one-third of new drug applications are not submitted in a fully electronic format, and most investigational new drug applications come in on paper.
Multiple IT Projects
To encourage e-submissions in formats that are searchable and comparable, FDA issued draft guidance last December that instructs sponsors and clinical investigators on how to capture, use, and archive electronic data to ensure that it is reliable, traceable, and of high integrity. Agency officials would like to require sponsors to submit study data electronically, as is now the case in Europe, and are building an infrastructure that will facilitate e-submissions and clarify standards. Because it takes years to issue new rules, the agency may encourage e-filings by offering faster and more predictable reviews of applications that meet standards; this may be spelled out in a revised Prescription Drug User Fee program, which is up for renewal next year.
FDA now is moving forward on several e-data initiatives due to an injection of about $20 million to support analysis of data for comparative effectiveness research (CER). The money comes from the Agency for Healthcare Research and Quality (AHRQ) as part of the $1.1 billion provided for CER by the American Recovery and Reinvestment Act of 2009. FDA received a tiny portion of those funds to tap its vast archives of study reports on drug effects and patient responses that could be valuable in answering questions about appropriate treatments for patients.
A key FDA CER initiative is to implement the Janus clinical trials data repository to store and manage preclinical and clinical data on biomedical products. FDA formed a partnership with the National Cancer Institute (NCI) several years ago to develop Janus as a way for research scientists and FDA reviewers to share information for development of cancer therapies and personalized medicine. Now Janus is moving into the implementation phase, with projects to support validation, loading, and management of standard clinical trial data. Janus will be hosted at NCI through 2013, but then may have to find a new home.
Because populating the Janus data warehouse requires datasets in a common language and format, the CER initiative will help "jump start" efforts by FDA to build a modern scientific research computing infrastructure that supports premarket review and postmarketing surveillance, Seyfert-Margolis explained. By leveraging the CER investment, the agency expects to enhance its capacity for managing adverse event reports, inventories of regulated plants and products, and a vast amount of research data on drugs and medical products.
A related project involves converting some of FDA's massive volume of legacy clinical trial data to a uniform electronic format and common language to facilitate meta-analysis across multiple studies. FDA centers for drugs, biologics, and medical devices have identified some 100 study data sets for conversion, focusing on AHRQ's 14 priority disease areas for CER research. Octagon Research Solutions and Texas-based ScenPro are managing the conversion of data to a standard format that can populate the data warehouse.
All these projects require standards for collecting and analyzing clinical trial information, something that FDA has been working on for 15 years with the Clinical Data Interchange Standards Consortium (CDISC). This collaboration has established the Study Data Tabulation Model (SDTM) for organizing and submitting study data, plus related standards for animal toxicology studies and for analysis datasets. FDA now is working with CDISC to transition to Health Level 7 (HL7) study data standards. This is being tested as part of the Janus project, with a goal for FDA to receive all study data in HL7v3 messages by 2013.
A CDER Data Standards Program Board is overseeing these and other standards projects and will weigh new regulatory requirements for e-submission of study data. The Board is also developing an inventory of data standard needs for regulatory operations, with an eye to avoiding development of divergent standards by outside organizations.
To evaluate how different data analysis methods and clinical trial design strategies may affect comparative analysis, FDA's Partnership in Applied Comparative Effectiveness Science (PACES) is launching several pilot studies. CER study 1, for example, will test an analytical framework for subgroup analysis by comparing angiotensin-converting enzyme inhibitors to other antihypertensive therapies for the prevention of cardiovascular mortality in older women. The broader objective is to assess the value of pre-specifying subgroups and analytic protocols in evaluating heterogeneity of treatment effect, conducting subgroup analysis, and detecting interactions.
A second CER project will assess the benefits and risks of thiazolidinediones compared to sulfonylureas and metformin in treating individuals with type 2 diabetes. In the process, the study will evaluate whether an analytical hierarchy process can support decisions involving both quantitative data and subjective input, and how well this approach can deal with uncertainty about long-term safety and support decisions.
A design-oriented study will develop statistical methods and software that enable investigators and regulators to determine the best trial designs and analyses for generating evidence about treatment effectiveness in different patient subpopulations. Issues to consider are whether the relevant subpopulation is known or unknown before the study starts, and whether those patients are few or many. One objective is to develop sequential clinical trial designs for interim analyses involving changes in sample size, in randomization probabilities, and in the subpopulations sampled. Similarly, another design-strategy study will develop models for randomized controlled trials that incorporate historical control data along with a concurrent control.
Future studies may assess methods for defining post-marketing safety signals, evaluate true responders for clinical trials, and examine methods for cross-trial comparisons. PACES is headed by Johns Hopkins University researchers, with support from the Center for Medical Technology Policy, The Lewin Group, and Buccaneer. In addition, because FDA will always have very limited resources to support such informatics and research, agency officials are looking for more collaborations in this area with research organizations and industry.
Jill Wechsler is the Washington Editor of Applied Clinical Trials, (301) 656-4634 email@example.com