Building Quality into Study Startup



Upfront planning drives process improvements that boost quality of the eTMF and the study overall

Benjamin Franklin is often credited with this wise warning: "If you fail to plan, you are planning to fail." When it comes to study startup (SSU), and site activation in particular, these words ring true, especially as the clinical trials sector embraces planning as key to boosting study quality. With the availability of workflow-based SSU tools, proactive planning is within reach for stakeholders who view this function as pivotal to improving quality, as measured by audit readiness and the likelihood of passing regulatory audits.

Planning means getting it right from the beginning, prior to study activation, and requires sponsors and CROs to identify what is needed to reduce risk by determining:

  • Which countries will be used

  • Which sites/investigators/sub-investigators will be used

  • Which artifacts need to be identified and structured

  • What documents will be needed

Too often, these issues are not hammered out upfront; problems brew but are not identified until much later, after completed documents, artifacts, and metadata have already been released to the trial master file (TMF) or electronic TMF (eTMF). This poses a significant challenge, as measured by performance metrics that signal bottlenecks or breakdowns in study execution. For example, one compliance-oriented performance metric calls for regulatory quality assurance to occur four weeks after site activation.1 On that timeframe, problems such as missing or incomplete documents may go unnoticed until the study is already well underway.

A better strategy is to employ processes that take an upfront approach to preventing or mitigating problems with document completion. Research suggests that workflow solutions applied from the beginning of the clinical trial (starting, for example, with development of the study package) support a quality assurance process that builds in risk avoidance 21 weeks earlier, well before study activation, providing insight much sooner.2

With this workflow-based approach to SSU, study quality improves through greater adherence to timelines and, ultimately, through a higher percentage of artifacts flowing into the TMF that meet quality standards. The impact is substantial, given that the study activation portion of SSU generates an estimated 40% of all TMF artifacts in the study lifecycle.3

This article focuses on the growing link between planning and study quality, and on how building planning into SSU is fundamental to better study performance. Downstream, that planning also enhances audit readiness through greater accuracy of the study documents and artifacts that are defined upfront and eventually flow into the eTMF.

Emphasis on quality

With research showing that the timeframe for conducting clinical trials continues to stagnate4 and a trend toward overhauling study performance5, quality improvement is moving to center stage. As a starting point for this wide-ranging effort, quality has been defined as the ability to effectively and efficiently answer the key performance question(s) about the benefits and risks of a medical product or procedure while ensuring protection of human subjects.6

 

Numerous initiatives have emerged that place heavy emphasis on quality. Each initiative serves a different purpose, but generally they focus on process improvement. One example is the Metrics Champion Consortium (MCC), which has several existing and planned quality-oriented working groups. For instance, the Study Quality Trailblazer Team helps member organizations set an example for the rest of the industry by demonstrating that investing time and resources upfront can yield higher-quality clinical study performance at a lower cost than fixing quality issues as a study unfolds.7 The Trailblazer Team recently released a white paper, which uses data from the Tufts Center for the Study of Drug Development (CSDD) to document that study quality is actually on the decline despite major advancements in technology over the past 20 years, often due to issues that are preventable.8 In addition, SSU has been cited as a major cause of long cycle times, which have stagnated for two decades.9

There is also the Clinical Trials Transformation Initiative (CTTI), which offers a Quality by Design (QbD) approach to clinical trials meant to identify errors that could jeopardize both patient safety and the ability to obtain reliable results and meaningful information about the trial. QbD defines quality in clinical trials as "the absence of errors that matter to decision making."10 Landray et al. report that applying QbD principles to clinical research requires stakeholders to ensure that a quality management plan has been developed alongside the protocol and prior to study activation.11 Moreover, critical indicators of trial quality should be assessed on an ongoing basis so corrective actions can be taken early.

Complementing industry-wide initiatives, this emphasis on quality is also reflected in the growing volume of research on the subject. The Tufts CSDD has reported site activation, in particular, as being sorely in need of improvement. Researchers measured cycle times for various portions of SSU and found study activation, defined as the period from site initiation to first patient in, to be the most inefficient, with a coefficient of variation of 1.4.12,13 When the coefficient of variation, a measure of spread or variability relative to the mean, exceeds 1.0, this indicates high variability.14
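
To make the statistic concrete, the coefficient of variation is simply the standard deviation expressed as a multiple of the mean (this is the standard statistical definition, not a figure drawn from the cited studies), where σ is the standard deviation of the cycle times and μ is their mean:

$$\mathrm{CV} = \frac{\sigma}{\mu}$$

A CV of 1.4 therefore means the spread in study activation cycle times is 1.4 times the average cycle time, so two otherwise similar studies can have dramatically different activation timelines.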

Wide variability in study activation could be due, at least in part, to a lack of standardized processes and little or no upfront planning. Research by Kleppinger and Ball highlights the need for a quality systems approach from the planning stages of a clinical trial.15 In particular, they state that standards have the most value and are most effective when implemented from the start. Moreover, they stress simplifying protocols and limiting the number of desired outcomes, and they suggest using validated instruments and definitions.

As the industry turns its attention to better planning, regulatory bodies are spearheading efforts to ensure study quality, most notably via the November 2016 release of the first new Good Clinical Practice (GCP) guideline in twenty years.16 Put forth by the International Council for Harmonisation (ICH), the guideline, known as ICH-GCP E6(R2), includes a section dedicated to risk-based quality management and states that the sponsor should implement a system to manage quality throughout all stages of the trial process, including its earliest stages. That section addresses critical process and data identification, followed by sub-sections on risk identification, risk evaluation, risk control, and related topics.

 

Planning with workflow-based tools

Across the industry, proactive planning for improved clinical trial quality is in the early stages, but with the availability of workflow-based tools for site selection and for guiding sponsors and CROs through SSU, process changes are starting to take root. Figure 1 shows a country-specific workflow.

Through workflows, it is possible to launch the planning process by structuring the artifacts specific to study activation. This facilitates the exchange of data among systems, such as electronic data capture tools, the clinical trial management system, and databases of principal investigators. With this capability, all needed documents can be defined. This is a major first step: on average, more than 400 artifacts within those documents can be structured using a workflow-based tool in accordance with a company's standard operating procedures. From this group, an estimated 60 artifacts, representing only the final ones, will ultimately flow into the TMF or eTMF. The completed clinical trial agreement (CTA), for example, is composed of numerous sub-artifacts, including contract language, indemnity, confidentiality agreement, data privacy agreement, and budgets.
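
To illustrate the idea of structuring a final document from its sub-artifacts and tracking when it is ready to flow into the TMF/eTMF, here is a minimal sketch in Python. The class and field names are hypothetical and do not represent the data model of any particular SSU tool; the CTA example simply reuses the sub-artifacts named above.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative structures only; not the schema of any real SSU tool.

@dataclass
class Artifact:
    """A sub-artifact tracked during the SSU workflow (e.g., an indemnity clause)."""
    name: str
    completed: bool = False

@dataclass
class Document:
    """A final document that flows into the TMF/eTMF once its sub-artifacts are done."""
    name: str
    sub_artifacts: List[Artifact] = field(default_factory=list)

    def ready_for_tmf(self) -> bool:
        # Releasable to the TMF/eTMF only when every sub-artifact is complete.
        return bool(self.sub_artifacts) and all(a.completed for a in self.sub_artifacts)

# Example: a clinical trial agreement (CTA) assembled from the sub-artifacts
# named in the article.
cta = Document(
    name="Clinical Trial Agreement",
    sub_artifacts=[
        Artifact("Contract language"),
        Artifact("Indemnity"),
        Artifact("Confidentiality agreement"),
        Artifact("Data privacy agreement"),
        Artifact("Budget"),
    ],
)

print(cta.ready_for_tmf())  # False until all five sub-artifacts are completed
```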

Importantly, artifacts and documents can be created 17 weeks before site activation, making it possible to ensure the quality of these artifacts and associated metadata downstream, facilitating audit readiness at the site level (Figure 2).2 Making this process change can yield significant improvements to study execution. Specifically, because the regulatory quality assurance review typically occurs four weeks after site activation, there is a 21-week lag between development of the artifacts and documents and that review. This lag can be eliminated with upfront workflows, which provide stakeholders with insight months earlier.
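
The arithmetic behind the lag follows directly from the figures cited above, namely document creation 17 weeks before site activation and the quality assurance review four weeks after it:

$$17\ \text{weeks} + 4\ \text{weeks} = 21\ \text{weeks}$$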

This approach allows stakeholders to implement the process changes necessary to become best-in-class performers, as evidenced by better cycle times and substantial improvements to TMF quality, i.e., fewer errors and data that are more easily retrievable. Too often, this is not happening today, as many companies still rely on homegrown systems that lack the capacity for upfront planning, for data exchange among e-solutions, and for audit readiness.

The TMF Performance Metrics Working Group of the MCC is working to address these industry-wide shortfalls by defining basic performance metric sets, to be followed by advanced sets. Going forward, the working group will provide its members with an array of performance-based tools to gauge TMF quality, including a TMF artifacts mapping tool, critical success factors and key performance questions, a TMF assessment schedule, and numerous others.

 

Linda Sullivan, MCC Co-founder and President, remarks, “As organizations implement IT solutions to support various clinical trial processes, you would expect their TMFs to improve, but a significant amount of upfront planning is needed. We suggest organizations follow this easy-to-remember T-M-F mnemonic. T refers to taking the time to establish expectations about quality as well as what should be in the TMF. M refers to measuring the TMF by comparing portions of the TMF against the expectations throughout the study, and F refers to fixing problems quickly to make for a more efficient process.”
 
For a long time, Sullivan has been encouraging stakeholders to engage in risk assessment efforts from the start, instead of relegating risk assessment to a costly afterthought.17 Short videos about TMF quality released by Pfizer make a similar point.18 After interviews with regulators, internal stakeholders, and customers, the gathered intelligence was distilled down to three basic factors that define TMF quality: completeness; timeliness; and document quality, i.e., whether all documents are accurate, retrievable, and properly indexed. To achieve this level of quality on a massive scale, involving millions of documents, an all-hands-on-deck approach is needed from the beginning.

Quality and performance links

Performance metrics are gaining traction because of their ability to track how a clinical trial is unfolding. But because these metrics measure performance after the fact, they need to be part of a proactive planning strategy to maximize their benefit. This involves defining the documents and artifacts that will ultimately improve study quality. A significant part of this effort entails devising workflows that keep the study on track while also yielding complete and accurate documents and artifacts that will eventually flow into the TMF.

This focus on quality is a major predictor of study conduct, as it is key to reducing risk and helps sites position themselves for audit readiness. But while building in quality improves the TMF, the TMF houses only completed documents, meaning it cannot track a study in real time or near real time. To address this issue, planning for quality needs to start at the beginning. And for SSU, a complex part of the clinical trial process that generates an estimated 40% of TMF artifacts, planning can make a strong impact on improving study quality.

By Craig Morgan, Head of Marketing, goBalto

References

  1. Metrics Champion Consortium. TMF Metric Initiative. 2017.
  2. Study startup around the world: A preliminary view from goBalto. ChromoReport. March 2017. Available at: https://www.gobalto.com/chromoreport-mar2017. Accessed April 18, 2017.
  3. Trial Master File Reference Model. Available at: https://tmfrefmodel.com/2015/06/16/version-3-released/. Accessed April 19, 2017.
  4. Getz K. Assessing and addressing site identification and activation inefficiencies. Tufts Center for the Study of Drug Development. March 2016.
  5. Envisioning a transformed clinical trials enterprise in the United States: Establishing an agenda for 2020: Workshop Summary. Institute of Medicine. 2012. Available at: https://www.ncbi.nlm.nih.gov/books/NBK114671/#ch1.s6. Accessed April 19, 2017.
  6. Definition from October 2008 presentation on CTTI by Dr. Rachel Behrman, CTTI Co-chair and then Associate Commissioner for Clinical Programs, FDA. Available at: https://www.fda.gov/downloads/drugs/developmentapprovalprocess/smallbusinessassistance/ucm303954.pdf. Accessed April 26, 2017.
  7. Metrics Champion Consortium. MCC Study Quality Trailblazer Team. Available at: http://metricschampion.org/trailblazer-team/. Accessed April 20, 2017.
  8. MCC Risk and Quality Management Support Group. MCC Study Quality Trailblazer Team. 2016. Available at: http://metricschampion.org/trailblazer-team/. Accessed April 20, 2017.
  9. Getz K. Assessing and addressing site identification and activation inefficiencies. Tufts Center for the Study of Drug Development. March 2016.
  10. Clinical Trials Transformation Initiative. Quality by Design. Available at: https://www.ctti-clinicaltrials.org/projects/quality-design. Accessed April 20, 2017.
  11. Landray MJ, Grandinetti C, Kramer JM, Morrison B, et al. Expert Commentary Clinical Trials: Rethinking how we ensure quality. Drug Information Journal. 2012;46(6):657-60. Available at: http://journals.sagepub.com/doi/pdf/10.1177/0092861512464372. Accessed April 25, 2017.
  12. Lamberti MJ, Chakravarthy R, Getz KA. Assessing practices & inefficiencies with site selection, study start-up, and site activation. Applied Clinical Trials. August 5, 2016. Available at: http://www.appliedclinicaltrialsonline.com/assessing-practices-inefficiencies-site-selection-study-start-and-site-activation?pageID=3. Accessed April 14, 2017.
  13. A build vs. buy look at study activation. White Paper. goBalto. 2016. Available at: https://www.gobalto.com/whitepaper_build_vs_buy_look_at_study_activation. Accessed April 24, 2017.
  14. Wicklin R. What is the coefficient of variation? SAS. November 2014. Available at: http://blogs.sas.com/content/iml/2014/11/19/coefficient-of-variation.html. Accessed September 9, 2016.
  15. Kleppinger CF, Ball LK. Building quality in clinical trials with use of a quality systems approach. Clinical Infectious Diseases. 2010. Available at: https://academic.oup.com/cid/article-lookup/doi/10.1086/653058. Accessed April 25, 2017.
  16. Integrated addendum to ICH E6(R1): Guideline for Good Clinical Practice E6(R2). ICH Harmonised Guideline. 2016. Available at: http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E6/E6_R2__Step_4.pdf. Accessed April 24, 2017.
  17. Sullivan LB. Standardized metrics for better risk management: The right data at the right time. Applied Clinical Trials. August 31, 2016. Available at: http://www.appliedclinicaltrialsonline.com/standardized-metrics-better-risk-management-right-data-right-time. Accessed April 24, 2017.     
  18. Interview with Ivan Walrath, TMF Process Owner. Pfizer. 2016. Wingspan Technology. Available at: http://www.wingspan.com/project/interview-with-ivan-walrath/. Accessed April 25, 2017.