Upfront planning drives process improvements that boost quality of the eTMF and the study overall
Benjamin Franklin is often credited with this wise warning: "If you fail to plan, you are planning to fail." When it comes to study startup (SSU), and site activation in particular, these words ring true, especially as the clinical trials sector embraces planning as key to boosting study quality. With the availability of workflow-based SSU tools, proactive planning is within reach for stakeholders who view this function as pivotal to improving quality, as measured by audit-readiness and the likelihood of passing regulatory audits.
Planning works by getting it right from the beginning, prior to study activation, and requires sponsors and CROs to identify upfront what is needed to reduce risk.
Too often, these issues are not hammered out upfront; problems brew but go unidentified until much later, after completed documents, artifacts, and metadata have already been released to the trial master file (TMF) or eTMF. This poses a significant challenge, one that shows up in performance metrics signaling bottlenecks or breakdowns in study execution. For example, one compliance-oriented performance metric recommends that regulatory quality assurance occur four weeks after site activation.1 On that timeframe, problems such as missing or incomplete documents may go unnoticed until the study is already well underway.
A better strategy is to employ processes that take an upfront approach to preventing or mitigating problems associated with document completion. Research suggests that workflow solutions that start at the beginning of the clinical trial (with development of the study package, for example) facilitate a quality assurance process that builds in risk avoidance 21 weeks earlier, well before study activation, allowing for much earlier insight.2
Using this workflow-based approach to SSU, study quality improves through greater adherence to timelines and, ultimately, through a higher percentage of artifacts flowing into the TMF that meet quality standards. The impact is substantial because the study activation portion of SSU generates an estimated 40% of all TMF artifacts in the study lifecycle.3
This article focuses on the growing link between planning and study quality, and on how building planning into SSU is fundamental to better study performance. Downstream, it enhances audit-readiness through greater accuracy of study documents and artifacts that are defined upfront and eventually flow into the eTMF.
Emphasis on quality
With research showing clinical trial cycle times that have long stagnated4 and a trend toward overhauling study performance,5 quality improvement is moving to center stage. As a starting point for this wide-ranging effort, quality has been defined as the ability to effectively and efficiently answer the key performance question(s) about the benefits and risks of a medical product or procedure while ensuring protection of human subjects.6
Numerous initiatives have emerged that place a strong emphasis on quality. Each initiative serves a different purpose, but generally, they focus heavily on process improvement. One example is the Metrics Champion Consortium (MCC), which has several existing and planned quality-oriented working groups. For instance, the Study Quality Trailblazer Team helps member organizations set an example for the rest of the industry by demonstrating that investing time and resources upfront can yield higher quality clinical study performance at a lower cost than fixing quality issues as a study unfolds.7 The Trailblazer Team recently released a white paper, which uses data from the Tufts Center for the Study of Drug Development (CSDD) to document that study quality is actually on the decline despite major advancements in technology over the past 20 years, often due to issues that are preventable.8 In addition, SSU has been cited as a major cause of long cycle times, which have stagnated for two decades.9
There is also the Clinical Trials Transformation Initiative (CTTI), which offers a Quality by Design (QbD) approach to clinical trials meant to identify errors that could jeopardize both patient safety and the ability to obtain reliable results and meaningful information about the trial. QbD defines quality in clinical trials as "the absence of errors that matter to decision making."10 Landray et al. report that applying QbD principles to clinical research involves stakeholders ensuring that a quality management plan has been developed alongside the protocol and prior to study activation.11 Moreover, critical indicators of trial quality should be assessed on an ongoing basis so corrective actions can be taken early.
Complementing industry-wide initiatives, this accent on quality is playing out through the growing volume of research on the subject. The Tufts CSDD has reported that site activation, in particular, is sorely in need of improvement. Its researchers measured cycle times for various portions of SSU and found that study activation, defined as the period from site initiation to first patient in, was the most inefficient, with a coefficient of variation of 1.4.12,13 The coefficient of variation, a measure of spread or variability relative to the mean, indicates high variability when it exceeds 1.0.14
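To make the statistic concrete, the coefficient of variation (CV) is simply the standard deviation divided by the mean. A minimal sketch, using hypothetical per-site activation cycle times (the real Tufts CSDD data are not published in this article):

```python
import statistics

def coefficient_of_variation(samples):
    """CV = standard deviation / mean.

    A CV above 1.0 means the spread across sites exceeds the average
    itself, i.e., high variability. Population standard deviation is
    used here; a sample standard deviation would give a slightly
    larger value.
    """
    return statistics.pstdev(samples) / statistics.mean(samples)

# Hypothetical study-activation cycle times in weeks, illustrating
# the wide site-to-site variability the article describes.
cycle_times = [2, 3, 40, 4, 60, 3, 28]
cv = coefficient_of_variation(cycle_times)
print(round(cv, 2))
```

Here a few very slow sites pull the standard deviation above the mean, pushing the CV past 1.0, the same signal of high variability the Tufts researchers observed for study activation.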
Wide variability in study activation could be due, at least in part, to a lack of standardized processes and little or no upfront planning. Research by Kleppinger and Ball highlights the need for a quality systems approach from the planning stages of a clinical trial.15 In particular, they state that standards have the most value and are most effective when implemented from the start. Moreover, they stress simplifying protocols and the number of desired outcomes, and suggest using validated instruments and definitions.
As the industry turns its attention to better planning, regulatory bodies are spearheading efforts to ensure study quality, most notably via the November 2016 release of the first new Good Clinical Practice (GCP) guideline in 20 years.16 Put forth by the International Council for Harmonisation (ICH), the guideline, known as ICH-GCP E6(R2), includes a section dedicated to risk-based quality management, and states that the sponsor should implement a system to manage quality throughout all stages of the trial process, including the beginning. This section addresses topics such as critical process and data identification, followed by sub-sections that describe risk factors, namely risk identification, risk evaluation, risk control, and more.
Planning with workflow-based tools
Across the industry, proactive planning for improved clinical trial quality is in the early stages, but with the availability of workflow-based tools for site selection and for guiding sponsors and CROs through SSU, process changes are starting to take root. Figure 1 shows a country-specific workflow.
Through workflows, it is possible to launch the planning process by structuring artifacts specific to study activation. This facilitates the exchange of data among systems, such as electronic data capture tools, the clinical trial management system, and databases of principal investigators. With this capability, all needed documents can be defined. This is a major first step: on average, more than 400 artifacts within those documents can be structured using a workflow-based tool in accordance with a company's standard operating procedures. From this group, an estimated 60 artifacts, representing only the final ones, will ultimately flow into the TMF or eTMF. One example is the completed clinical trial agreement (CTA), which is composed of numerous sub-artifacts, including contract language, indemnity, confidentiality agreement, data privacy agreement, and budgets.
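The artifact/sub-artifact relationship described above can be sketched as a simple data structure. This is a hypothetical illustration only, not any vendor's schema; the field names and the `Artifact` class are invented for clarity:

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """A study-startup artifact; names here are illustrative only."""
    name: str
    sub_artifacts: list = field(default_factory=list)
    flows_to_tmf: bool = False  # only final artifacts are released to the TMF/eTMF

# The completed CTA is a final, TMF-bound artifact; its sub-artifacts
# are tracked and quality-checked during startup but are not themselves
# filed in the TMF.
cta = Artifact(
    name="Clinical Trial Agreement",
    sub_artifacts=[
        Artifact("Contract language"),
        Artifact("Indemnity"),
        Artifact("Confidentiality agreement"),
        Artifact("Data privacy agreement"),
        Artifact("Budget"),
    ],
    flows_to_tmf=True,
)

# Of the six artifacts tracked here, only one flows into the TMF,
# mirroring the article's 400-tracked vs. ~60-filed ratio in miniature.
tmf_bound = [a for a in [cta, *cta.sub_artifacts] if a.flows_to_tmf]
print(len(tmf_bound))
```

The design point is that quality checks can attach to every tracked sub-artifact long before the single final document reaches the TMF.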
Importantly, artifacts and documents can be created 17 weeks before site activation, making it possible to ensure the quality of these artifacts and associated metadata downstream, facilitating audit readiness at the site level (Figure 2).2 Making this process change can yield significant improvements to study execution. Specifically, with the regulatory quality assurance process occurring four weeks after site activation, there is a 21-week lag between the development of the artifacts and documents and the regulatory quality assurance review. Upfront workflows eliminate this lag, providing stakeholders with insight months earlier.
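The 21-week figure follows directly from the two milestones the article cites, measured relative to site activation:

```python
# Weeks relative to site activation (activation = week 0).
# Both figures come from the article: artifacts can be created
# 17 weeks before activation, and conventional regulatory QA
# occurs 4 weeks after activation.
artifact_creation_week = -17
regulatory_qa_week = 4

# Without an upfront workflow, QA is the first quality check,
# so the gap between creation and review is:
lag_weeks = regulatory_qa_week - artifact_creation_week
print(lag_weeks)  # 21
```

Moving quality checks to the point of artifact creation collapses that gap to zero, which is the "insight months earlier" the article describes.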
This approach allows stakeholders to implement the necessary process changes to become best-in-class performers as evidenced by better cycle times and substantial improvements to TMF quality, i.e., fewer errors, and data that are more easily retrievable. Too often, this is not happening today, as many companies still rely on homegrown systems that lack the capacity for upfront planning, as well as data exchange among e-solutions, and audit readiness.
The TMF Performance Metrics Working Group of MCC is working to address these industry-wide shortfalls by creating output that will define basic performance metric sets followed by advanced sets. Going forward, the working group will provide its members with an array of performance-based features to gauge TMF quality. These include a TMF artifacts mapping tool, critical success factor and key performance questions, a TMF assessment schedule, and numerous other tools.
Linda Sullivan, MCC Co-founder and President, remarks, "As organizations implement IT solutions to support various clinical trial processes, you would expect their TMFs to improve, but a significant amount of upfront planning is needed. We suggest organizations follow this easy-to-remember T-M-F mnemonic. T refers to taking the time to establish expectations about quality as well as what should be in the TMF. M refers to measuring the TMF by comparing portions of the TMF against the expectations throughout the study, and F refers to fixing problems quickly to make for a more efficient process."
For a long time, Sullivan has been encouraging stakeholders to engage in risk assessment efforts from the start, instead of relegating them to a costly afterthought.17 Similarly, short videos about TMF quality, released by Pfizer, make this point.18 After interviews with regulators, internal stakeholders, and customers, the gathered intelligence was distilled down to three basic factors that define TMF quality: completeness, timeliness, and document quality, i.e., whether all documents are accurate, retrievable, and properly indexed. To achieve this level of quality on a massive scale, involving millions of documents, an all-hands-on-deck approach is needed from the beginning.
Quality and performance links
Performance metrics are gaining traction because of their ability to track how a clinical trial is unfolding. But because metrics measure performance after the fact, they need to be part of a proactive planning strategy to maximize their benefit. This involves defining the documents and artifacts that will ultimately improve study quality. A significant part of this effort entails devising workflows that will keep the study on track while also yielding complete and accurate documents and artifacts that will eventually flow into the TMF.
This focus on quality is a major predictor of study conduct, as it is key to reducing risk and helps sites position themselves for audit readiness. While building in quality can improve the TMF, the TMF houses only completed documents, so it cannot track a study in real time or near real time. To address this issue, planning for quality needs to start at the beginning. And for SSU, a complex part of the clinical trial process that generates an estimated 40% of TMF artifacts, planning can make a strong impact on improving study quality.
By Craig Morgan, Head of Marketing, goBalto