Now that the FDA is enforcing its Electronic Records, Electronic Signatures rule, many pharmaceutical companies are reconsidering their data systems.
Responding to enormous pressures to shorten time-to-market and reduce overall R&D costs, pharmaceutical companies have been making significant investments in technology. Reliance on technology to speed clinical trial data collection and analysis is growing. Increasing use of technology and the FDA's increased regulatory scrutiny make it critical to comply with 21 CFR 11 requirements when creating, using, and maintaining electronic records and signatures.
This article will specifically discuss the issues faced when addressing 21 CFR 11 for clinical trials data: that is, how to secure and manage clinical trial data in a manner that requires no changes to workflow and is unobtrusive to personnel working in clinical research, biostatistics, and regulatory affairs. The real-world case study analyzes how a mid-tier biopharmaceutical company implemented a standardized solution to secure its file-based data.
A compliance challenge
Before 21 CFR 11 became effective, clinical data was simply recorded and maintained in accordance with predicate rules, in this case good clinical practice (GCP). The process was generally paper-based, with forms for recording subject data. These forms would ultimately be archived and maintained as original records. As the use of technology grew, it became common to transcribe this data and store it electronically in the form of data files. While the data capture and subsequent retention process was usually done in a controlled manner, it would not meet the stringent requirements imposed by 21 CFR 11. Thus, the Electronic Records, Electronic Signatures regulation motivated many companies to revisit the way data is generated, collected, recorded, reviewed, approved, and stored.
As part of this analysis, companies have realized that they possess an amazingly complex clinical trials technology architecture spread across clinical sites, contract research organizations, clinical laboratories, and sponsors. Although many people assume that most data exists in secure, large-scale relational databases, such as electronic data capture, clinical data management, and adverse event systems, the fact remains that the majority of electronic subject data continues to be file-based. This stems from a variety of factors, such as the continued use of legacy systems that have an underlying file-based backbone, statistical analysis programs that rely on file-based queries and reports, paper-based CRFs that are subsequently scanned for ease of information transportability, and the need to transfer data between noncommon systems using a common transport vehicle: flat files.
The abundance of file-based data complicates the move toward compliance because files by their very nature are designed to be easily accessible to users and applications, a concept that runs counter to the 21 CFR 11 requirement of tight controls on the security and accessibility of data. Compounding this problem is the fact that electronic data files are used for different purposes in different ways by different groups throughout the clinical trial process. This requires a careful balance between the needs of the business and the requirements of the FDA.
Data in electronic format
An additional challenge in controlling data that exists in the form of a file is the wide variety of locations where these electronic records exist so that both humans and systems have access to them. For example, many companies import the bulk of their subject data into a clinical data management system (CDMS) from an electronic data capture (EDC) system, leveraging file-based extraction and manipulation tools. The files may be stored locally without version control. The biostatistics department may then use additional file-based analytical tools (SAS, Excel, or custom-developed) to process and analyze file-based subject data that has been extracted from the CDMS. These files are usually copied to unprotected working folders on either a local hard drive or a file server. Following this highly regulated analysis phase, report files must be managed in a manner consistent with the 21 CFR 11 regulation, together with the numerous other file-based reports that are part of an NDA submission. Each of these files requires the appropriate controls, including access security, version control, audit trails, and archiving.
As a result of the number of ways that these clinical data files are analyzed and manipulated, many firms began their quest for compliance by mistakenly looking for technology to solve individual problems. The solutions often either address specific parts of the regulation or focus on a specific phase of the data file's life. Because vendors of commercial software frequently address compliance for data that exists within, but not beyond, their applications, implementing point solutions may introduce additional regulatory challenges such as multiple copies of records, incomplete audit trail information, and security loopholes. At best, the result is a number of 21 CFR 11 point solutions with large gaps between them.
Those who address compliance throughout the entire life cycle of this critical data should reach the logical conclusion that the regulation requires them to eliminate, or at least minimize, all windows of opportunity for tampering with data. In fact, failing to address the continuous protection of data may be worse than failing to address the regulation at all, because it does not provide end-to-end compliance traceability and can lead to a false sense of compliance security.
A specialty pharmaceutical company asked us to help make its clinical trials data 21 CFR 11-compliant. In our assessment, we found its data handling to be typical of many organizations.
To ensure that the pharmaceutical company could leverage its existing software and processes and, at the same time, secure regulated file-based data for its entire life cycle in accordance with the regulation, we needed to implement universal and centralized storage to allow efficient access to all types of files.
Our analysis led us to implement a solution that consisted of a centralized database with the characteristics of a file server. That approach allowed us to address compliance by using a single repository throughout the entire life cycle of the record, and it enabled the company to retain its existing software, procedures, and data in its original format. That eliminated the need to contend with copies of files residing on local hard drives. The solution included the following features: server-based installation, centralized Oracle database, and zero footprint client computer access.
Server-based installation. All software components reside on a common application and database server; all file transactions use existing data transport protocols. Server-based installation also allows implementation and use without modifying or disrupting existing infrastructure and processes.
Centralized Oracle database. All files and metadata reside in an Oracle database. This enables optimized and enhanced search functions, and it provides functionality necessary to support the regulatory requirements: file versioning, audit trail capabilities, and complete life cycle protection, including archiving.
Zero footprint client computer access. No software is installed on any client computer, eliminating many traditional software installation concerns. It also streamlines the configuration of source applications, which reduces the need for revalidation.
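As a rough illustration of the repository features described above, and not the vendor's actual implementation, a versioned file store with an append-only audit trail can be modeled as two database tables. The sketch below uses Python with SQLite standing in for the Oracle back end; all table, column, and function names are hypothetical.

```python
import hashlib
import sqlite3
from datetime import datetime, timezone

# Hypothetical sketch of a versioned, audited file repository.
# SQLite stands in here for the centralized Oracle database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE file_version (
    path     TEXT NOT NULL,
    version  INTEGER NOT NULL,
    content  BLOB NOT NULL,
    sha256   TEXT NOT NULL,
    PRIMARY KEY (path, version)
);
CREATE TABLE audit_trail (      -- append-only: rows are never updated or deleted
    ts      TEXT NOT NULL,
    user    TEXT NOT NULL,
    action  TEXT NOT NULL,
    path    TEXT NOT NULL,
    version INTEGER NOT NULL
);
""")

def check_in(user: str, path: str, content: bytes) -> int:
    """Store a new version of a file and record the action in the audit trail."""
    cur = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM file_version WHERE path = ?",
        (path,))
    version = cur.fetchone()[0] + 1          # never overwrite a prior version
    digest = hashlib.sha256(content).hexdigest()
    conn.execute("INSERT INTO file_version VALUES (?, ?, ?, ?)",
                 (path, version, content, digest))
    conn.execute("INSERT INTO audit_trail VALUES (?, ?, 'CHECK_IN', ?, ?)",
                 (datetime.now(timezone.utc).isoformat(), user, path, version))
    conn.commit()
    return version

v1 = check_in("jsmith", "study01/ae_listing.sas7bdat", b"original data")
v2 = check_in("jsmith", "study01/ae_listing.sas7bdat", b"corrected data")
print(v1, v2)  # prints: 1 2
```

Because earlier versions are never overwritten, any prior state of a record remains retrievable alongside the audit trail entries that explain how it changed.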
To ensure the successful deployment of the solution, we used an inventory of existing data and software to build an implementation plan that provided an overall picture of what needed to be done and how to do it.
Although the scope and detail for any project will differ, the categories below are a representative sample of the steps to be performed during the planning and implementation process.
Document the requirements. As with any new process or technology, it is important to understand and document the expected outcome. Because the desired outcome of 21 CFR 11 compliance has been adequately defined for many years, previously developed requirements documentation needed minimal revisions. The requirements also addressed side benefits not related directly to compliance, such as minimizing changes to source applications and broad applicability of the solution.
Assess and purchase server hardware and software. The existing data was used to determine the disk capacity and processing requirements of the server hardware and Oracle software. This exercise was accomplished jointly by infrastructure and database personnel.
Install and validate the server-based software. After all required components were purchased, the hardware and software were installed and documented using predeveloped installation qualification/operational qualification (IQ/OQ) scripts. IQ/OQ scripts are often supplied by product vendors.
Train users. An important step in any implementation is to ensure that users understand how the software works. For this implementation, it was equally important to make sure that all users were sufficiently knowledgeable about the regulation.
Migrate existing data. The inventory process identified all files used to support the clinical trial process. Because the files originally resided across a collection of computers, and there were many duplicate files, it was neither advantageous nor necessary to migrate all of them. It was important to systematically identify the files to be migrated, where they would be stored, and when the transfer would take place. As always, it was important to document the results.
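One hedged sketch of the duplicate-identification step: grouping candidate files by a content hash makes it straightforward to migrate a single authoritative copy and document where the duplicates resided. The function names and directory layout below are illustrative, not taken from the actual project.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content fingerprint used to detect duplicate files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def plan_migration(roots):
    """Group every file under the inventoried roots by content.

    Returns {digest: [paths]}. One path per group is migrated to the
    repository; the remaining paths are recorded as duplicates in the
    migration documentation.
    """
    groups = defaultdict(list)
    for root in roots:
        for p in sorted(Path(root).rglob("*")):
            if p.is_file():
                groups[sha256_of(p)].append(p)
    return groups

# Example (hypothetical paths):
# plan = plan_migration(["/data/biostats", "/data/regaffairs"])
```

Hashing by content rather than comparing file names catches duplicates that were renamed as they were copied between departments.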
Configure source software applications to work with the server-based solution. Each application generating or using the data stored in the compliant repository had to be looked at to determine whether any configuration changes were warranted. The solution implemented typically required minimal configuration of the source applications. The most common change was the reassignment of the default folder paths for the creation or reading of files.
Test and document everything. An important component of the implementation was to develop and execute test plans and scripts and to document the results. Other areas documented were design, configuration, and summaries of activities and results.
Because clinical trial data must be maintained for many years, it was also critical that the solution allowed data to be archived to durable media such as CD, DVD, or network storage devices while maintaining 21 CFR 11 compliance. The archiving process selected for this pharmaceutical company secures the data in such a way that it can be restored using a file digest or unique fingerprint, ensuring that the restored file has not been tampered with while outside of the active file repository. Additionally, after a file has been archived, the associated metadata continues to be searchable, allowing the software to be used as a long-term data management solution that enables historical data to be restored and reanalyzed when required.
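The tamper check on restore can be as simple as recomputing the file's digest and comparing it with the fingerprint recorded at archive time. A minimal sketch follows; SHA-256 is an assumption here, as the article does not specify which digest algorithm the product uses.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Digest of the file's contents, as recorded in the
    repository metadata when the file was archived."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restored(path: Path, recorded_digest: str) -> bool:
    """A restored file is trustworthy only if its recomputed
    digest matches the fingerprint recorded at archive time."""
    return fingerprint(path) == recorded_digest
```

Any modification to the file while it sat on CD, DVD, or network storage changes the digest, so the comparison fails and the restore can be rejected.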
The role of the regulation
As the clinical trial process continues to migrate toward a completely integrated electronic model, careful consideration must be given to the role that 21 CFR 11 will play. Many companies are currently taking the first step by evaluating their current technology architecture, which is often mainly file-based. The key to demonstrating compliance for this type of regulated data lies in minimizing the windows of opportunity for modifying data when it is not under audit trail controls.
Maintain files under tight 21 CFR 11 control for their entire life cycle. Address security, audit trails, and archiving throughout the records retention period.
Implement a centralized file repository that addresses the requirements of the regulation without disrupting existing workflow.
Ensure that there is no impact to the overall process so that existing users and applications, such as analytical tools, can continue to access the files.