While steady progress has been made in recent years, CROs and sponsors still face decisions about which solutions best fit their needs.
This article will briefly review what’s meant by the concept of a digital clinical trial protocol and provide a framework for the wide range of approaches and software solutions in this space. It will also introduce a simple taxonomy for protocol design and authoring solutions to help sponsors venturing into this area.
As summarized by the International Council for Harmonisation's (ICH) M11 Guideline,1 the clinical protocol describes the processes and procedures directing the conduct and analysis of a clinical trial of medicinal product(s) in humans. From an operational and regulatory perspective, the protocol is the defining document at the center of every clinical trial.
There has been great progress over the past several years on adoption of the TransCelerate Common Protocol Template,2 with the majority of TransCelerate member pharma companies and many other sponsors aligning on a consistent, high-level structure for protocols. Most protocols, if not all, are represented in a document format, with the "source of truth" for a given protocol contained in either a Microsoft Word (.docx) or Adobe Acrobat (.pdf) file.
Acknowledging that many of the descriptive elements in the protocol reappear in other study documents, including the Statistical Analysis Plan and the Clinical Study Report, TransCelerate went one step further and created the eTemplate Suite (eTS) of document templates. These automate the reuse of several protocol elements in those downstream documents.
Aside from these limited capabilities, any reusable content, process, or system that needs to be set up according to study-specific specifications requires a user to refer back to the protocol document and effectively copy and paste that information into downstream work products.
The eTS showed the potential to automate downstream use of information derived from the study protocol. What if we could leverage other protocol information directly from a digital representation of the protocol, rather than having to extract that information via intermediary work products?
TransCelerate's Digital Data Flow (DDF) initiative partnered with the Clinical Data Interchange Standards Consortium (CDISC) to create a new Unified Study Definition Model (USDM) data standard that will help move the industry toward a common information model to represent protocol information.3,4 ICH will finalize M11 later this year, and CDISC is making every effort to ensure the next release of the USDM (R3) will be comprehensive enough to represent all or most of the final CeSHarP (Clinical, electronic, Structured, Harmonized Protocol) template that M11 sets out to define. Finally, the HL7-FHIR Vulcan Accelerator was initiated to lay the groundwork for connecting clinical research to clinical care. CDISC will partner with HL7 to build on the foundation set by the CeSHarP template, guideline, and technical specification. While CDISC's USDM will provide the content model, HL7-FHIR will provide the exchange standard to help this vision become reality.5
In 2020, TransCelerate sponsored a hackathon that informed its later approach on DDF. Starting at that time, and continuing through my work with several biopharma sponsors and software vendors, I have become aware of many efforts, software tools and platforms being developed to support a digital representation of the Protocol.
While many sponsors and vendors talk about achieving a "digital protocol," they are often referring to different types of applications meeting different objectives. Judging from the number of sponsors and software solutions addressing this challenge, the potential of a fully digital protocol is widely appreciated. It is a good time to revisit what's meant by the term, and to articulate the different use cases that such a solution could support.
When sponsors or vendors talk about a digital protocol, or a protocol design tool, or a protocol authoring tool, they are usually referring to one or more of three types of functionalities.
First is the ability to reuse protocol content elements, including shorter elements such as objectives, endpoints, eligibility criteria, and study procedures, as well as longer narrative components, either throughout the clinical trial lifecycle or across other studies or clinical programs.
Second is the ability to structure specific protocol elements and apply data standards, usually to support automated workflows, such as the automated configuration of clinical systems like electronic data capture (EDC) or laboratory management systems (LMS). This type of system is often connected with metadata repositories (MDRs) to enable robust connections with data standards (CDISC, LOINC, and others).
Finally, some systems provide metrics and predictive analytics to help protocol designers optimize design choices as they configure critical portions of the protocol, such as the schedule of assessments or the eligibility criteria. A user designing a study protocol will benefit from metrics related to patient or site burden, per-patient cost, or analytics that connect to real-world data sources and help predict patient recruitment or diversity, equity, and inclusion.
The idea of being able to reuse different components of content across documents isn’t particularly novel. In pharma, this is often done for promotional materials, for example.
In the clinical development area, the potential is considerable. Several elements that are defined in the study protocol reappear in other documents, often verbatim or with only a change in verb tense.6 In addition, many content sections also require translation either into foreign language versions or into lay language representations for patient-facing materials (such as the informed consent form) or patient registries.
In some cases, narrative sections may be further broken down into component parts to allow accurate reuse of repeated elements, such as study indication, design details or intervention names.
Like the data standards-driven approaches described below, these systems also imply a need for the different content components to be managed. If the idea is to have reusable content segments, some form of content governance is normally required to decide which segments get added to repositories for reuse.
The storage formats used by these systems range from relational and NoSQL databases to JSON, XML, and other formats. The choice of format depends on the size and complexity of the components that need to be managed, and on the need for interoperability and scalability. Finally, while some of these technical formats are "open" (e.g., DITA XML), others are proprietary or vendor-specific.
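To illustrate what a reusable content component might look like once it leaves the Word document, here is a minimal sketch of one protocol element stored as a structured, serializable record. The field names and values are purely hypothetical, not drawn from USDM or any vendor's model.

```python
import json

# Hypothetical reusable protocol content component: a study objective
# captured as structured data rather than free text in a document.
component = {
    "id": "OBJ-001",
    "type": "objective",
    "text": "To evaluate the efficacy of Drug X versus placebo.",
    "version": "2.0",
    "tags": ["primary", "efficacy"],
}

# Serializing to JSON makes the same component portable across the
# protocol, SAP, and CSR authoring systems without re-transcription.
serialized = json.dumps(component)
restored = json.loads(serialized)
assert restored == component
```

A content governance process would decide which such components enter the shared repository and how their versions are controlled.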
The traditional workflow to configure EDC and other systems with study-specific details often starts with a data manager or systems specialist going through an MS Word or PDF document to translate scheduled assessments into system requirements. This often involves connecting structured elements from the protocol, particularly from the schedule of assessments (SOA), with the relevant data standards. In the case of EDC, most data collection is configured according to CDISC's CDASH data standard, and then with the relevant data collection forms or database specifications.
Representing that SOA digitally as an “electronic SOA” (eSOA) from the start obviously saves that transcription or translation step. If the information model underlying the digital protocol tool already associates assessments and other structured protocol elements with corresponding standards and data collection forms, then several downstream steps can be automated. Enabling this workflow automation use case was the initial vision of TransCelerate’s Digital Data Flow initiative.
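The automation step described above can be sketched in a few lines: if each eSOA entry carries a reference to a standard data collection form, the EDC visit structure falls out mechanically. All identifiers here (form codes, visit names, the mapping itself) are hypothetical placeholders, not actual CDASH content.

```python
# Minimal sketch of eSOA-driven workflow automation, assuming each
# assessment has already been mapped to a standard form identifier.

esoa = [
    {"visit": "Screening", "assessment": "Vital Signs"},
    {"visit": "Screening", "assessment": "ECG"},
    {"visit": "Week 4", "assessment": "Vital Signs"},
]

# Hypothetical mapping of assessments to CDASH-aligned form codes.
form_map = {"Vital Signs": "VS", "ECG": "EG"}

def edc_visit_config(esoa, form_map):
    """Derive the forms required at each visit, in schedule order."""
    config = {}
    for row in esoa:
        config.setdefault(row["visit"], []).append(form_map[row["assessment"]])
    return config

print(edc_visit_config(esoa, form_map))
# {'Screening': ['VS', 'EG'], 'Week 4': ['VS']}
```

In a real system the mapping would live in a governed metadata repository rather than an inline dictionary, which is exactly why MDR connectivity matters for this use case.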
Robust governance of data standards and repositories of important protocol elements will support this approach and improve its efficiency over time.
Many analytics providers, CROs, and study sponsors are making excellent progress on metrics-informed study design and optimization. Whether it’s understanding patient and site burden, predictive feasibility and site selection, or measuring and understanding predictors of study inclusivity, the potential to use metrics to guide clinical scientists and clinical operations users is compelling.
While this type of capability is neither an authoring solution nor a designer per se, it can add considerable value to the process of building a digital version of a protocol. Some of these systems have an interface that allows users to configure an eSOA or eligibility criteria to model study elements and connect with their metrics engine. This implies that the component concepts (e.g., assessments) that are incorporated into a study design need to be connected to the metrics of interest, as part of the underlying information model. Once the design stage is completed, some version of the configured eSOA (e.g., as a JSON object) would then be incorporated in a protocol document created in another system.
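To make the connection between design choices and metrics concrete, here is a toy sketch of one burden calculation: each assessment in the information model carries a burden score (minutes of patient time, with invented values), and the designer sees the per-visit total as the eSOA is configured. Real analytics engines are far richer; this only illustrates the underlying linkage.

```python
# Hypothetical per-assessment patient-burden scores, in minutes.
burden_minutes = {"Vital Signs": 10, "ECG": 20, "MRI": 60}

# A configured eSOA, grouped by visit (illustrative names only).
schedule = {
    "Screening": ["Vital Signs", "ECG", "MRI"],
    "Week 4": ["Vital Signs"],
}

def visit_burden(schedule, burden_minutes):
    """Total patient burden per visit for a configured schedule."""
    return {visit: sum(burden_minutes[a] for a in assessments)
            for visit, assessments in schedule.items()}

print(visit_burden(schedule, burden_minutes))
# {'Screening': 90, 'Week 4': 10}
```

Swapping an MRI for a less burdensome assessment would immediately change the Screening total, which is the kind of feedback loop these design tools aim to provide.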
Of course, a few emerging products offer a balance of two or more of these three main functionalities. As this class of tools evolves, I expect to see more tools offering a solid balance of all three.
Most of the systems that are currently available are designed as transactional systems to design or author a study protocol (and downstream data collection systems or documents). From a record-keeping perspective, the digital protocol is still rendered as a document (i.e., .docx or .pdf) for storage in the trial master file (TMF) or regulatory information management (RIM) systems.
There is certainly much to gain by having the digital representation of the protocol as the “source of truth” for an ongoing clinical trial, and we hope to see this evolution soon. The protocol presents unique challenges in terms of version control and its longevity. A clinical trial can take years to complete and during that time, it will usually undergo many amendments, including country-specific amendments for multi-national trials. Because of different approval timelines across countries and sites, it is quite routine to have many amended versions of the protocol running concurrently. In addition, the requirements to validate a system of record rather than a more transactional design tool would be significantly greater. So far, these technical challenges, and the volume and complexity of the content in protocols, have deterred software providers from achieving a fully digital representation as the source of truth for the protocol.
Progress toward a digital protocol has been steady and accelerating in recent years. Thanks to the conversations started by TransCelerate, and built upon by ICH, regulatory agencies, and a number of competent software providers, excellent options have already emerged. The types of use cases that the available options address can be confusing without a framework to guide the selection process.
The most important first step before embarking on this journey is for a sponsor or CRO to understand what it wants to achieve from a digital protocol. Is it content reuse, workflow automation, or analytics-driven study design and optimization? Once you know which of these you would like to optimize for, at least at first, identifying and selecting candidate tools will be much more efficient.
Todd Georgieff has worked in drug development for over thirty years. He has extensive experience in clinical operations, therapeutic area leadership, and oversight and partnerships with service providers. He served as TransCelerate Program Lead for Roche and led projects on protocol digitalization and clinical trial matching. He is currently providing strategic consulting services to sponsors, industry consortiums, software companies, and venture capital firms.