Now that the industry has gotten some experience with CDISC standards, teams are aiming to get more from their standardization efforts.
Clinical research teams have been implementing CDISC standards for quite a while. Health authority requirements for CDISC-compliant data (SDTM, ADaM, SEND) for preclinical and clinical studies were great motivators for adoption. As the Business Case for CDISC Standards: Summary notes, “standards implemented from the beginning can significantly improve processes in a single clinical study, thus saving time and cost.”1
Now that the industry has gotten some experience with the standards, some teams are aiming to get more from their standardization efforts. Sometimes this means process changes or role changes, and sometimes it means expanding the organization’s understanding of the value standards provide throughout the data lifecycle. Sometimes it requires all of the above.
To truly improve clinical data processes, three important activities must happen:

1. Employ CDISC standards in ongoing trials and when transforming legacy data.
2. Include programming and statistics team members in protocol and CRF reviews.
3. Apply the standards to individual trial data early and often through frequent interim data cuts.
As noted, most companies have bought into CDISC standards, are using them in ongoing trials, and are transforming legacy data to a compliant format. However, these same companies should consider the standards much earlier in the typical trial planning and design process to fully exploit their use.
As a protocol is being drafted, clinical operations professionals are traditionally focused on medical and scientific aspects of the document, as they should be. However, these team members are not typically trained on CDISC data standards, or how and when they are applied. Including clinical programming and stats team members in the draft reviews of the protocol and CRF can make a big difference. For example, clinical programmers can identify data collection errors that may impact trial messaging by comparing the draft CRF to the draft protocol. By addressing analysis and reporting issues early, the team can avoid later protocol amendments that require time and resources to track and manage. Early identification also gives the team more options when they try to fix any issues.
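The CRF-to-protocol comparison described above can be partly automated. The following is a minimal sketch, assuming the team can list the data items each protocol endpoint requires and the fields the draft CRF collects; every field name here is hypothetical, invented for illustration:

```python
# Illustrative sketch: cross-check draft CRF fields against the data items
# the draft protocol's endpoints require. All names are hypothetical.

def find_collection_gaps(protocol_items, crf_fields):
    """Return protocol-required data items missing from the draft CRF."""
    return sorted(set(protocol_items) - set(crf_fields))

# Hypothetical example: the primary endpoint needs a baseline severity
# score, but the draft CRF only collects post-treatment values.
protocol_items = {"SEVERITY_BASELINE", "SEVERITY_WEEK4", "DOSE_DATE"}
crf_fields = {"SEVERITY_WEEK4", "DOSE_DATE", "VISIT_DATE"}

gaps = find_collection_gaps(protocol_items, crf_fields)
print(gaps)  # -> ['SEVERITY_BASELINE']
```

A check like this is no substitute for a clinical programmer's review, but surfacing an obvious gap before the CRF is finalized is exactly the kind of early identification that avoids a later protocol amendment.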
This is true at the larger clinical program level as well. These same resources can add value early in clinical program planning. A program-wide view from a stats and programming perspective ensures a more strategic approach and allows the team to take advantage of cross-trial efficiencies.
Once programming and stats are involved with protocol development, it’s time to look at the timetable for clinical programming and the frequency of access to interim data cuts. The traditional approach is to start the programming of data output several weeks prior to database lock. This serial process involves a resource spike because timelines for providing CDISC-compliant output often coincide with a pending regulatory submission. This is a very condensed and chaotic timeframe when resources are in short supply. Any issues identified at this stage have the potential to jeopardize a submission date. There are also a limited number of options at this point if a team does identify a data quality issue.
Enabling the programming team to get a jump start with frequent data cuts allows them to produce iterative versions of trial output and address data integrity issues earlier in the process. This early view can even offer insight into hidden coding issues. For example, a side effect coded as an adverse event may, in fact, be a sign of efficacy. If a patient presents with red skin, it may be considered an adverse event. However, it could also be a sign that a previous blister or rash is improving and thus needs to be recategorized.
Iterative processing allows the research team to confirm data quality prior to moving to the next part of the trial. The “early and often” approach reduces risk while also reducing the huge bolus of work that is common in the traditional approach.
With this approach, programmers can build appropriate programs that address all aspects of the evolving data sets prior to database lock. After database lock, the final submission-ready output can be produced quickly and efficiently because most of the analysis data is already standardized and compliant. Shortening this critical timeframe will lead to quicker finalization of the clinical study report (CSR), the Integrated Summaries of Safety and Efficacy, and the label.
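The “early and often” approach lends itself to automated checks run against every interim data cut. The sketch below assumes cut data is available as simple records; the variable list is only an illustrative subset of SDTM AE variables, not a conformance rule set:

```python
# Minimal sketch of a compliance check run against each interim data cut.
# REQUIRED_AE_VARS is an illustrative subset of SDTM AE variables only.

REQUIRED_AE_VARS = ["STUDYID", "DOMAIN", "USUBJID", "AESEQ", "AETERM"]

def check_ae_cut(records):
    """Flag records in a data cut missing values for required AE variables."""
    issues = []
    for i, rec in enumerate(records):
        missing = [v for v in REQUIRED_AE_VARS if not rec.get(v)]
        if missing:
            issues.append((i, missing))
    return issues

cut = [
    {"STUDYID": "X01", "DOMAIN": "AE", "USUBJID": "X01-001",
     "AESEQ": 1, "AETERM": "ERYTHEMA"},
    {"STUDYID": "X01", "DOMAIN": "AE", "USUBJID": "X01-002",
     "AESEQ": 1, "AETERM": ""},          # missing verbatim term
]

print(check_ae_cut(cut))  # -> [(1, ['AETERM'])]
```

Running a check like this on every cut, rather than once before database lock, is what spreads the quality work out and shrinks the bolus at submission time.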
As research teams employ CDISC standards, include programming and stats in their protocol and CRF review process, and begin to apply the standards to individual trial data early and often, they can begin to realize efficiencies and data quality gains at the study, program, and portfolio level. These gains are greater when combined.
Efficiency is not the only benefit to this approach. The current trend toward more complex clinical trials can illustrate how these recommendations impact clinical research. Many of the benefits detailed above are multiplied in complex, multi-part trials. When a trial includes multiple dosing regimens, early data cuts can reveal safety issues that result in earlier changes to the protocol and better research.
A clinical trial is a large and complex process. Efficiencies in one area may lead to bottlenecks in another, shifting the work and the delays to a different part of the process. A more strategic approach offers greater flexibility and puts skilled programmers and statisticians in a position to prevent issues before they arise.
Process awareness in clinical research allows innovative teams to look upstream and downstream to apply skill sets and knowledge when and where they are most appropriate. This improvement, combined with data standards, ensures that the integrity of the protocol is reflected in the trial design and that all data collection tools are aligned with it. The standards themselves improve the data lifecycle, but an optimized process combined with the standards offers the greatest value.
Mike Willis is the CEO of TradeCraft Clinical Research and can be reached at email@example.com.