Clinical Study Audits: The Quality Management Approach

Applied Clinical Trials, August 1, 2022, Volume 31, Issue 7/8

Advancing risk-based auditing one step further by identifying the underlying process deficiencies to enable improved corrective and preventive actions.

Over the past decade, the auditing of clinical trials has evolved to a risk-based model. We took it one step further by implementing a quality management (QM) approach in our audit methodology. Our audits focus on areas of risk that have an impact on patient safety, ethics, and the reliability of study data, but, in addition, aim to identify the underlying process deficiencies. By focusing on which process is impacted and the deficiencies it leads to, we increased the efficiency of the corrective and preventive action (CAPA) process, which can then focus specifically on why that process breakdown occurred and how it can be mitigated. We assess the quality system at the investigator or vendor sites as well as the quality system at the sponsor. By applying the QM approach, we determine whether the process deficiency occurred in the planning stage, the execution, or the verification step. In this article, we illustrate the benefits of this approach. The methodology has led to clearer, more impactful audit findings and to more effective CAPAs, resulting in sustainable process-level improvements.

The evolution of clinical study auditing

The approach to a clinical study audit has evolved over recent years. While traditionally the audit frequency was driven by a fixed audit cycle and the audit scope would encompass all aspects of a process from start to end, regulators have strongly encouraged organizations to prioritize when and what to audit in a risk-based way.1

With the introduction of risk-based monitoring, much emphasis was placed on leveraging resources where they could bring the highest benefit to an organization. This concept applied not only to the conduct and monitoring of clinical studies, but also to audit and health authority inspection approaches. For instance, the planning of audits is now driven by risk assessments based on criteria covering operational, regulatory, and compliance risks. This risk computation helps prioritize what to audit and when, and concentrates effort on areas of highest or increased risk. The scope of the audit itself is likewise determined by an analysis of data, information, and signals. We want to direct the auditors' attention to the aspects of the study that impact patient safety, ethics, and data credibility, as well as to the areas that the internal analysis of data and documents shows to carry the highest risk. The risk-based approach brought great benefit to the organization, as it focuses attention on areas of increased risk.

Why is further change needed?

We wanted to take risk-based auditing a step further and apply risk-based process thinking to the way audit findings are identified and formulated as well. This enables us to go beyond reporting the non-conformance and point to the deficient process that leads to the deviation. By substantiating the process deficiency with the non-conformities it produces, we are in a much stronger position to help stakeholders understand why change and enhancements are needed.

Let us illustrate that with the following example:

For more than 10 years, protocol non-compliance and issues with source documentation at clinical investigator sites have been among the top three findings of FDA2 and of most companies' internal audit functions. An audit finding that a clinical study site deviated from the protocol often leads to restrictive actions with limited reach, such as re-training the site.

It is, therefore, much more relevant to dig deeper and assess which process is deficient (see Figure 1 below). It may be that the instructions in the protocol are unclear and lead to misinterpretation by clinical site personnel. It may be that the communication to the site staff and training failed. It may also be a site-related issue that prevented them from implementing the protocol as written. When auditors identify and report the underlying process gap, the CAPA for this audit finding will be targeted at the quality management system (QMS) level and have an impact across study sites and studies.

How to implement a quality management approach to audit

To implement quality-management thinking in the identification and formulation of audit findings, we applied the methodology developed by W. Edwards Deming.3 The Deming cycle is a continuous quality improvement model consisting of a logical sequence of four key stages: Plan, Do, Check, and Act (PDCA). We also use it to guide an auditor's assessment of a process and to identify where process improvement is needed.

We use the Deming cycle at the level of the investigator site, the study, and the project to assess the process at each of those levels (see Figure 2 below). At each level, we train our auditors to ask themselves questions to determine whether the identified deficiency stems from a failure in planning, execution, or oversight. The following are examples of the questions our auditors ask themselves to determine whether the QMS failure leading to protocol non-compliance occurred at the planning, execution, or control stage:

  • Assessment of the QMS at the investigator site:
    • Was there something that went wrong in the way this study was set up at the clinical study site? (Failure in the Plan stage)
    • Was the execution of the study at the clinical study site poor? (Failure in the Do phase)
    • Was there control/oversight by the principal investigator to identify what was going wrong? (Failure in the Check phase)
    • Were actions to correct this error initiated, and were they effective? (Failure in the Act phase)
  • Assessment of the QMS of the sponsor:
    • Were the tools and instructions that the sponsor provided sufficient to ensure successful execution? (Failure in the Plan stage)
    • Was the monitoring of the clinical study site by the sponsor effective? (Failure in the Do phase)
    • Were performance indicators in place to identify this risk? (Failure in the Check phase)
    • How effective are the controls put in place by the sponsor to prevent/mitigate the risk? (Failure in the Act phase)
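The question sets above can be sketched as a small checklist structure, so that an auditor's yes/no answers map a finding to the level and phase where the process failed. This is a hypothetical illustration, not part of any real audit tool; the names and structure are invented for this sketch.

```python
# Hypothetical PDCA audit checklist: maps each level (investigator site,
# sponsor) and phase (Plan, Do, Check, Act) to the question the auditor asks.
PDCA_QUESTIONS = {
    "investigator_site": {
        "Plan": "Did something go wrong in how the study was set up at the site?",
        "Do": "Was the execution of the study at the site poor?",
        "Check": "Did the principal investigator's oversight identify what was going wrong?",
        "Act": "Were corrective actions initiated, and were they effective?",
    },
    "sponsor": {
        "Plan": "Were the sponsor's tools and instructions sufficient for successful execution?",
        "Do": "Was the sponsor's monitoring of the site effective?",
        "Check": "Were performance indicators in place to identify this risk?",
        "Act": "Were the sponsor's controls effective at preventing or mitigating the risk?",
    },
}

def failed_phases(answers):
    """Given answers as {(level, phase): bool}, where True means the check
    failed, return the (level, phase) pairs flagged as deficient."""
    return [key for key, failed in answers.items() if failed]
```

An auditor's worksheet then reduces to recording, per level and phase, whether the answer indicates a failure; the flagged pairs become the process-level context of the finding.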

As part of the assessment, the auditor considers risk. Risk is defined by assessing what the consequences of this failure would be and how likely the failure is to recur. This question about the intrinsic risk of the failure is important for grading the audit finding (e.g., critical, major, minor), as we want stakeholders to focus remediation efforts on the areas of highest risk.
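The grading logic described above can be sketched as a consequence-times-likelihood score. The 1 to 3 scales and the thresholds below are invented for this illustration; real grading criteria are defined by each organization's QMS.

```python
def grade_finding(consequence: int, likelihood: int) -> str:
    """Grade an audit finding from the severity of the consequence (1-3)
    and the likelihood of recurrence (1-3).

    Thresholds are illustrative only: a combined score of 6 or more is
    graded critical, 3 to 5 major, and below 3 minor."""
    score = consequence * likelihood
    if score >= 6:
        return "critical"
    if score >= 3:
        return "major"
    return "minor"
```

For example, a failure with severe consequences that is very likely to recur (3 x 3) would be graded critical, while a low-consequence, unlikely failure (1 x 2) would be minor.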

Let’s demonstrate how this audit approach can be implemented with an example:

  • An auditor identifies significant discrepancies across patients between the source data in the patients' medical records and the data entered in the electronic case report forms (eCRFs). The first step is to look at which data points these discrepancies pertain to, whether they can be linked to one specific data type, and what the impact of the errors is on data completeness and reliability. In this case, the discrepancies noted related to medical history, adverse events (AEs), and concomitant medications. The majority, though, pertained to AEs, and in particular their relatedness and outcome (e.g., AEs listed as "continuing" in the eCRF but noted as "resolved" in the source). Across the discrepancies, the errors in AE reporting carry the highest risk, as they affect the collection of data used to determine the safety profile of the investigational medicinal product (IMP). This is the area the auditor will focus on. Subsequently, the auditor will look at possible failures in the quality system at the audited investigator site.
    • Plan: Was the study site team member doing the eCRF entries qualified to do so?
    • Plan: Did the site have a process to enter AE information and assess its progress?
    • Do: Were all sources considered when assessing the AEs, such as nurse notes, patient diaries, etc.?
    • Do: Were expectations for eCRF entry clearly provided in the sponsor's documents?
    • Check: Was there oversight on the site staff doing entries?

In addition, the auditor will assess if there were breakdowns in the sponsor quality system.

  • Plan: Was source data verification (SDV) set up and executed efficiently to identify similar issues?
  • Do: Did the monitor assess the site's process to capture, document, and report AEs, as well as its quality oversight?
  • Do: Were expectations for eCRF entry clearly worded in eCRF instructions?

This way of auditing allows us to identify which process would benefit from a thorough root-cause analysis during the CAPA process, driving process improvements at both the investigator and sponsor levels, with potential impact across the study.

What does it require to make this approach successful?

The QM approach to formulating audit findings can only reach its desired effect of more effective CAPAs and continuous improvement at the process level if the sponsor, partner organization, or other stakeholders have a mature quality culture. This implies that stakeholders are open to thinking end-to-end across departmental boundaries.

Global process owners (GPOs) are individuals who own an end-to-end process across functional silos and geographic and business-unit boundaries. Especially in a global operating model, this role can be instrumental in aligning the organization and defining expectations in procedural documents. The GPO also follows external trends and internal changes to the system and organization that may impact the process, and supports adapting internal processes and capabilities as needed. As such, the GPO is an excellent partner for assessing what changes may be needed when the deficiency affects the planning/design phase of the process or its control mechanisms for identifying failure.

Those who lead the functional teams would address audit findings that point to a deficiency in execution.

Key to success is also the understanding that this approach does not imply that auditors perform the root-cause analysis. At the beginning, this was the biggest challenge: teaching this approach to auditors and distinguishing the process-driven approach to audits from root-cause analysis. Investigator site staff and sponsor staff still need to investigate and evaluate why the failure happened. Was it an aspect related to people, such as qualifications, training, or availability; an absent or deficient process; or malfunctioning equipment or systems? There are always multiple causes, and Juran's 80/20 rule4 must be applied to identify the 20% of causes that inflict 80% of the errors.
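Juran's Pareto selection of the "vital few" causes can be sketched in a few lines: tally errors by cause, sort in descending order, and keep causes until their cumulative share reaches the threshold. The cause names and counts below are invented examples.

```python
def vital_few(error_counts: dict, threshold: float = 0.8) -> list:
    """Return the smallest set of causes that together account for at
    least `threshold` (default 80%) of all observed errors."""
    total = sum(error_counts.values())
    selected, cumulative = [], 0
    # Walk causes from most to least frequent, accumulating their share.
    for cause, count in sorted(error_counts.items(), key=lambda kv: -kv[1]):
        selected.append(cause)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return selected
```

With hypothetical counts of 50 training errors, 30 protocol-wording errors, 15 system errors, and 5 others, the first two causes already cover 80% of the errors, so remediation would concentrate there.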

Measure of success

This change in mindset takes time to implement. Since adopting this methodology, we have obtained more robust CAPAs because the written findings focus on where in the process the deficiency occurred and on its impact. There were fewer discussions about findings, and we gained better acceptance of them, as each focuses on a process rather than on an individual or a single example. The owner of the finding can immediately drill down during the root-cause analysis into why the error occurred and define the appropriate corrective and preventive action(s) at a process level, as we identify which process failed and the respective GPO to drive the CAPA process.

Meaningful results

This approach has proven very motivating to our auditors, who are recognized as experts in process thinking and promote end-to-end and risk-based thinking. It also results in more meaningful discussions with auditees, pointing to the risks and impact, educating all involved to think critically, and uncovering findings of higher impact that lead to improvements across sites, studies, and the portfolio. At the investigator level, as the actions implemented mostly improve a process, it has helped sites improve the way they conduct clinical research beyond the one study that was audited. This leads to fewer data queries and less rework, as sites evolve toward building quality into their processes.

Kristel Van de Voorde, Vice President, Clinical Trial System and Auditing QA, RDQ; Roberta Ferreira, Director, North and South America, Clinical Trial System and Auditing QA, RDQ; and Patricia Dewaele, Director, Europe and Asia, Clinical Trial System and Auditing QA, RDQ; all with Bristol Myers Squibb

References

  1. "Reporting of audit findings in line with their relative risk level," EMA Guideline on GVP – Pharmacovigilance audits; "risk-based monitoring ... focus on the important and likely risks to critical data and processes," FDA risk-based approach to monitoring; "risks might be acceptable if they have limited impact on subjects' safety and rights as well as data integrity and reliability," EMA reflection paper on risk-based quality management in clinical studies
  2. BIMO inspection metrics https://www.fda.gov/science-research/clinical-studys-and-human-subject-protection/bimo-inspection-metrics
  3. Deming cycle https://deming.org/explore/pdsa/
  4. Juran Pareto rule https://www.juran.com/blog/a-guide-to-the-pareto-principle-80-20-rule-pareto-analysis/