Preparing Specialty Lab Data for FDA Submission in the New Regulatory Environment

Mar 12, 2018

Incorporating biomarker and specialty lab data, along with pharmacokinetic (PK), safety lab, and clinical data, into Food and Drug Administration (FDA) submissions is becoming more common, as these data provide robust insights into key clinical objectives, including a drug's pharmacological effects, safety, and effectiveness. However, this also presents a significant challenge for drug developers as they operationally prepare data that will comply with the new regulatory requirements for submission.

While in years past there was some flexibility in how data could be submitted to the FDA, any study that began on or after December 17, 2016 must now use the appropriate FDA-supported standards, formats, and terminologies specified in the FDA Data Standards Catalog for New Drug Application (NDA), Abbreviated NDA, and certain Biologic License Application (BLA) submissions. These specifications include the use of the Clinical Data Interchange Standards Consortium (CDISC) standards for the Study Data Tabulation Model (SDTM), Standard for Exchange of Nonclinical Data (SEND), Analysis Data Model (ADaM), and Define-XML, as well as CDISC Controlled Terminology.

The challenges and steps involved in transforming specialty lab data into CDISC-compliant datasets that conform to FDA data exchange standards become apparent when considering a phase 1/phase 2 dose escalation and expansion study of an immuno-oncology (IO) compound. In this type of trial, drug developers would be managing a variety of data, including safety lab data (hematology, chemistry), PK lab data, and specialty biomarker assays such as high-content flow cytometry, multiplex cytokine panels, and gene expression measured by NanoString.

Handling safety lab data and PK data has become routine in clinical trial operations, and both standards and technological solutions exist for managing these types of lab data. Managing specialty biomarker data for FDA submission, on the other hand, is still in its infancy, with gaps in both standards and technologies.

Managing Lab Data—Use of Technology and the SDTM Workflow

For context on the challenges of processing specialty lab data and transforming them into CDISC-compliant datasets, the typical workflows for local labs and PK data are illustrated in the chart below. The chart compares differences in data processing (eg, technology-driven vs manual) and in SDTM mapping (eg, a single, well-defined domain vs multiple, more complex domains). Of particular importance is the use of electronic data capture (EDC) technology and lab management tools in the more established local lab workflow. By extension, technology engineered for specialty lab data could have a similar impact on the industry's ability to deliver these data under the same regulations in an efficient, quality manner.

[Chart: typical workflows for local lab and PK data, comparing data processing and SDTM mapping]

Challenges to Handling Specialty Biomarker Data in SDTM Workflow

There are 3 main challenges to handling specialty biomarker data within the SDTM workflow:

Complexity in biomarker assays. Mapping raw data into SDTM typically requires extensive processing and an in-depth understanding of the biological assay. For example, NanoString technology outputs reporter code count (RCC) files that require sample-level checks (RNA integrity, field of view ratios, and binding densities), background correction, and normalization to obtain usable gene expression values.
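To make this processing concrete, the RCC-style QC and normalization steps can be sketched in a short script. This is a minimal sketch, not a validated pipeline: the thresholds (a field-of-view ratio of at least 0.75 and a binding density between 0.1 and 2.25) reflect commonly cited NanoString defaults, and all sample values, gene names, and the reference geometric mean below are invented for illustration.

```python
import statistics

def qc_sample(fov_counted, fov_attempted, binding_density):
    """Flag a sample that fails imaging or binding-density checks."""
    flags = []
    if fov_counted / fov_attempted < 0.75:
        flags.append("low FOV ratio")
    if not 0.1 <= binding_density <= 2.25:
        flags.append("binding density out of range")
    return flags

def normalize(counts, neg_controls, pos_controls, reference_pos_geomean):
    """Background-correct against negative-control probes (mean + 2 SD),
    then scale by the positive-control geometric mean."""
    background = statistics.mean(neg_controls) + 2 * statistics.stdev(neg_controls)
    scale = reference_pos_geomean / statistics.geometric_mean(pos_controls)
    return {gene: max(count - background, 0) * scale
            for gene, count in counts.items()}

# Invented example values for one sample
flags = qc_sample(fov_counted=540, fov_attempted=555, binding_density=1.1)
expr = normalize({"GENE_A": 850.0, "GENE_B": 40.0},
                 neg_controls=[10, 12, 14, 11],
                 pos_controls=[120.0, 480.0, 1900.0],
                 reference_pos_geomean=500.0)
```

In practice these checks would run per lane against the full RCC file, with thresholds taken from the assay's validated analysis plan rather than hard-coded defaults.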

Lack of structure in biomarker data upstream of SDTM mapping. Source biomarker data are generally delivered in disparate file formats with inconsistent structure across assays and datasets. This is further compounded by the lack of standards across labs, making it difficult to standardize downstream programming pipelines.
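As a minimal sketch of the harmonization problem, the snippet below renames source-specific columns from two hypothetical lab file layouts into one common structure. The column names, target schema, and default unit are all assumptions for illustration, not an established standard.

```python
# Per-source column maps: each lab delivers the same information
# under different names; the target schema is our own convention.
COLUMN_MAPS = {
    "flow_lab": {"SubjID": "subject_id", "Marker": "analyte",
                 "PctPos": "result", "Units": "unit"},
    "cytokine_lab": {"patient": "subject_id", "assay_target": "analyte",
                     "conc_pg_ml": "result"},
}

def harmonize(record, source):
    """Rename source-specific columns to the common schema and
    fill in defaults the source file leaves implicit."""
    mapping = COLUMN_MAPS[source]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    # The cytokine lab encodes units in the column name, not the data
    out.setdefault("unit", "pg/mL" if source == "cytokine_lab" else None)
    out["source_lab"] = source
    return out

row = harmonize({"patient": "1001", "assay_target": "IL-6", "conc_pg_ml": 3.2},
                "cytokine_lab")
```

The point of centralizing maps like these is that downstream SDTM programming can then target one schema instead of one pipeline per lab file.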

Meeting submission timelines. Delivery of submission-ready datasets for downstream use is a time-sensitive component of post-database lock activities. Simply adding complex, often "messy" biomarker data to the traditionally rigid, process-driven SDTM workflow, without considering new ways of handling these data, is a recipe for failure. Historically, specialty lab data have been out of scope of the rapid-turnaround delivery schedule post-database lock, but this is no longer the case.

Applying New Technologies to Managing Specialty Lab Data and SDTM Mapping

Biomarker data submitted to the FDA are subject to FDA data exchange standards, making it critical to organize these data effectively and efficiently as part of end-of-study activities. Biomarker data are also often used to support on-study decisions. Given this dual role, a robust end-to-end solution for managing biomarker data must address regulatory objectives, such as SDTM programming for analysis and submission, while also providing the flexibility to meet on-study needs, such as data visualization and reporting for safety review, data monitoring, and decisions on maximum tolerated dose.

Similar to how EDC technology helped revolutionize clinical data management, a technology-based solution for biomarker data management is now required to meet the needs of modern clinical trial operations. Technology alone, however, is not enough: success also depends on biomarker subject matter experts and associated processes that together provide a rigorous, agile biomarker data management system for clinical trials.

This new model harmonizes disparate sources of biomarker data and stores them in a centralized database for more effective on-study and downstream use. From that point, experts with knowledge of specialty labs and CDISC standards can map biomarker data to the correct SDTM domain. This approach, as illustrated in the chart below, produces downstream efficiencies through more effective variable mapping and the development of reusable macros and code. Beyond enabling timely delivery post-database lock, it also maximizes the use of biomarker data to inform decisions throughout the study by giving clinical trial professionals and sponsors centralized, on-demand access to biomarker data.
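A minimal sketch of the final mapping step, assuming a biomarker result lands in the SDTM LB (laboratory) domain: the variable names (STUDYID, DOMAIN, USUBJID, LBTESTCD, and so on) are standard LB variables, but the input record layout, study identifier, and test-code lookup below are invented for illustration and would in practice come from CDISC Controlled Terminology and the study's define documentation.

```python
def map_to_lb(record, studyid, seq):
    """Map one harmonized biomarker result into SDTM LB-style variables.
    The test-code lookup stands in for a controlled-terminology table."""
    testcd = {"IL-6": "IL6"}.get(record["analyte"],
                                 record["analyte"][:8].upper())
    return {
        "STUDYID": studyid,
        "DOMAIN": "LB",
        "USUBJID": f"{studyid}-{record['subject_id']}",  # study + subject
        "LBSEQ": seq,
        "LBCAT": "BIOMARKER",
        "LBTESTCD": testcd,
        "LBTEST": record["analyte"],
        "LBORRES": str(record["result"]),   # result as originally reported
        "LBORRESU": record.get("unit", ""),
    }

lb_row = map_to_lb({"subject_id": "1001", "analyte": "IL-6",
                    "result": 3.2, "unit": "pg/mL"},
                   studyid="ABC-101", seq=1)
```

Once the upstream data share one schema, a mapper like this becomes a reusable macro rather than per-study code, which is where the downstream efficiency comes from.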

With biomarkers an integral part of modern clinical trials, it is necessary to bring new approaches to the management and processing of biomarker data. Combining advanced technology with biomarker expertise provides flexibility, efficiency, and compliance. Addressing this fundamental gap in clinical trial operations and translational research, in parallel with traditional clinical data management, will result in immediate gains and mitigate risks to interim and final study deliverables.


Jared Kohler, PhD, Senior Vice President, Translational Informatics and Biometrics, Precision for Medicine.

Angela Quigley, MS, Manager, Biostatistics and SAS Programming, Precision for Medicine.

Tobias Guennel, PhD, Senior Director, Translational Informatics and Biometrics, Precision for Medicine.
