Investigator Site Audit Performance


Applied Clinical Trials

June 1, 2006

CQA benchmark survey finds companies use a similar method to conduct and report audits.

The benchmark activity detailed in this article demonstrates the degree of harmonization in the general performance and reporting of investigator site audits within the pharmaceutical industry. The vast majority of respondents confirmed that their audit teams operate according to globally standardized processes. Most companies classify audit findings by severity and categorize them with respect to their impact on specific processes. All companies expect responses to critical and major findings. Seventy-five percent of the responding companies conduct trend analyses, but the target audience and further use of the results vary among companies. The greatest variation occurred in the mechanisms for ensuring that audit findings feed into the overall process improvement cycle.

In summary, this benchmarking exercise provides assurance that the pharmaceutical industry has made major strides toward a common approach to investigator site audits. This benefits individual sites and studies as well as the overall process of clinical development.

CQA survey

In order to ascertain prevailing practices in the industry, a survey was conducted in the area of clinical quality assurance (CQA). CQA was defined as the corporate department or function with the main responsibility of assessing and supporting compliance of clinical development projects and systems with GCP standards. This specific survey was conducted in order to obtain benchmark data on the consistency of audit practices and the impact of audit findings originating from investigator site audits.

Survey method

A questionnaire was developed by a working group within the German Association of Research-Based Pharmaceutical Companies (VFA) Clinical Research/Quality Assurance Committee. This questionnaire was distributed in mid-February 2005 to 38 global companies in order to collect the benchmark data via the Internet. The creation and online submission of the survey and the capture of feedback were simplified by the use of Zoomerang online survey products. Feedback was received from 29 companies (14 European, 10 American, four Japanese, and one Australian), which corresponds to a response rate of 81%. All responses were valid and were used in the overall evaluation of the results. Global QA Heads of GCP or their delegates completed the questionnaire. The results represent a global perspective on the general performance and reporting of investigator site audits.

It should be noted that four of the companies originally contacted merged during the course of the survey. Of these four companies, two provided a consolidated response and two did not.

Results

The following results are based on feedback received from the 29 companies. The survey collated information on the following areas:

  • standards for investigator site audits

  • categorization of audit findings

  • classification of audit findings

  • reporting of audit findings

  • distribution of draft and final audit reports

  • responding to audit reports/findings

  • trend analysis of audit findings.

Standards for audits

More than 80% of the companies have a globally standardized audit process in place—i.e., all auditors worldwide are working according to the same audit SOP, audit standard, checklist/worksheets, tools, etc. Global documents and tools in place within the companies are shown in Figure 1.

Global Documents/Tools Within Companies

Categorization of findings

All participating companies, with the exception of one, assign audit findings to process-related categories, such as "informed consent," "study protocol compliance," or "safety reporting." Most companies use similar categories for the critical processes that are subject to audit (see Figure 2). All companies utilizing such categorizations use the categories "safety reporting" and "informed consent." Approximately 90% use the categories "IRB/IEC," "source documents," "drug supply handling," and "protocol compliance," and approximately 80% use the categories "essential documents," "study personnel and agreements," "monitoring," and "data accuracy/SDV." Each of the 14 categories provided in the survey is used by at least 66% of the companies. Additional categories not specifically listed in the questionnaire, such as "trial management," "training," and "computer validation," are also used by some companies.

Main Categories Used for Audit Findings

Classification of findings

Almost 80% of the companies evaluate the significance of each single audit finding and state this in the audit report. The most frequent classifications used are "critical," "major," and "minor."

In addition, one-fourth of the companies have a system or rule for evaluating the significance of each category, stating an overall evaluation of all findings related to that category (e.g., informed consent).

About 50% of the companies classify the overall outcome of site audits. Half of them use predefined classifications, while the other half provide narratives. If predefined classifications are used, they are similar to those used for single audit findings.

Reporting of findings

About 70% of the companies have a multistep reporting process implemented (e.g., draft/final reports); the other companies have a single-step reporting process in place.

For approximately 85% of the companies, the audit report contains an executive summary, which is normally distributed as part of the full audit report.

All companies, with the exception of one, apply QC measures to the audit report generation process, most commonly peer reviews conducted by either an auditor colleague or a line manager within QA.

Draft and final report

Around 90% of the companies make the draft and/or the final audit report available to the auditees; two companies provide neither the draft nor the final report to the auditees. About 30% of the companies do not provide management with the draft report, but 97% provide them with the final report. Only one company makes neither the draft nor the final audit report available to management.

Approximately 50% of the companies send the audit report to other affected functions (e.g., regulatory affairs, regional organization members, training, purchasing, etc.), but from the available information, no pattern could be identified.

Time frame for reporting

The defined time frame between the conduct of the audit and the release of the report varies between 5 and 40 working days (median of 15 working days, mean of 16.6 working days). Thirty-four percent of the companies must issue the report within two weeks, 59% within three weeks, and 76% within four weeks. Ten percent of the companies are allowed more than five weeks to issue the audit report. Two companies have no defined timeline for the availability of the audit report (see Figure 3).

Length of Time to Report Audits

Critical audit findings

Expedited reporting. Two-thirds of the companies report critical findings in an expedited manner. Some companies stated that critical findings have to be reported to QA senior management and/or appropriate clinical management either immediately (i.e., during the conduct of the audit) or within five working days by phone, email, or meeting.

Clinical study reports. About 60% of the companies report critical audit findings and/or corresponding corrective actions in their final clinical study report.

IRBs/IECs and agencies. Only 17% of the companies report critical audit findings and/or corresponding corrective actions directly to IRBs/IECs and agencies.

Recommendations/requests for corrective actions. Two-thirds of the audit reports contain recommendations or requests for corrective actions for either critical findings or major findings. About half of the audit reports contain recommendations or requests for corrective actions for minor findings. Fourteen percent of the companies do not give recommendations or requests of this kind at all in their audit reports.

Responding to reports/findings

All companies except one request responses to each of their audit reports; even this company expects responses in the case of critical or major findings. In terms of addressing individual findings, all companies expect critical and major findings to be individually addressed, and 83% of the companies also expect a response to minor findings.

Responsibility for replying. In about 60% of the companies, the study manager and the monitor are both responsible for replying to audit findings; in six of the companies, this responsibility lies with one of the two. Thus, in 80% of the companies at least one of these two individuals is responsible for replying.

In half of the companies, line management is involved in providing responses: For eight companies, in addition to the monitor or the study manager, line management is responsible for replying. In six companies, only line management is responsible for addressing audit findings.

CQA involvement in corrective action. In all companies, except one, CQA is informed about the planned corrective actions for findings requiring a response (i.e., all companies for all critical and major findings, 80% of the companies for minor findings too). In 90% of the companies the adequacy of the corrective actions is assessed by CQA. With regard to the completion of corrective actions, CQA is informed less frequently (70% for critical, 60% for major, and 40% for minor findings).

Three-quarters of the companies conduct checks/controls to determine whether corrective actions were performed. However, the individual responses revealed that such checks occur not via re-audit but via further routine audits, documentation review, or, in the case of critical findings, spot checks. A formal audit closure statement is issued by 55% of the companies.

Time frame for responding. The response times vary from 5 to 60 working days (median of 20 working days, mean of 21.9 working days). A response is to be given within two weeks for 12% of the companies, within three weeks for 28%, and within four weeks for 68%. Twenty-eight percent of the companies allow for more than five weeks to respond. Four companies have no defined timelines (see Figure 4).

Response Time to Audit Reports

Trend analysis of findings

Three-quarters of the companies perform systematic trend analyses of audit findings across multiple investigator site audits. The frequency of trending varies considerably from company to company: about one-third evaluate data quarterly, another third annually, and the remaining companies either on both schedules or on an ad hoc basis.

Types of trend analysis. Twenty-two companies reported conducting trend analysis. The following analyses were conducted: across all audits, by study, by project, by therapeutic area, by region, by country, by phase, and by study type (e.g., GP or hospital). About 80% perform analyses across all audits, and about 60% perform analyses for specific studies/projects. However, only one-third prepare an audit summary report per study that addresses the main audit findings for that study.

About 30% of the companies performing systematic trend analyses (6 out of 22) calculate an overall compliance rate across audits.

Although 25% of all companies (7 out of 28) have Quality Key Performance Indicators in place, only two of them use the overall compliance rates to calculate Quality Key Performance Indicator(s).

Of the companies that calculate trend analysis results (22 companies), about 90% use them to support process improvement, with senior management involved in most of these companies.

To put this into perspective, about 70% of all companies use trend analysis results to support process improvement.

Communication of trending results. Trending results are communicated by 90% of companies conducting trend analyses (n=22) to various functions outside of CQA. These functions are: management (18 companies), study teams (12 companies), project teams (10 companies), functions (e.g., PV) (16 companies), and investigators' meetings (4 companies).

Conclusion

The results of this survey show a uniform process for conducting investigator site audits and reporting audit findings:

  • Audits are typically performed according to global standards and procedures. A majority of companies also use global tools to support the audit process.

  • Audit findings are assigned to categories according to their contents. The categories used are similar across companies.

  • Audit findings are also assessed and classified according to significance, normally by indicating whether they are critical, major, or minor.

  • Audit reports undergo internal quality control before being issued, usually by conducting peer review within QA.

  • Reports are then issued in two steps: first as a draft or initial report to obtain responses and then as a final report.

  • Most companies define timelines for both issuing the audit report and responding to it, allowing more time for responses.

Regarding the consequences and utilization of audit findings in a broader, quality-management framework, the results are less consistent across companies.

  • While a majority of companies send the initial audit report to the direct auditees for responses, about half of them also involve line or functional management in the response process—including the implementation of corrective action. In a minority of companies, the responsibility of responding lies with management only.

  • Except for the fact that critical audit findings are reported internally within the company in an expedited manner, no consistent pattern appeared in the handling of critical audit findings (e.g., whether agencies and ethics committees are notified or whether such findings are included in the clinical study report). Our results cannot determine whether this is due to differing practices across companies or whether such findings are handled on a case-by-case basis.

  • Although a majority of companies perform some form of trend analysis of audit findings, the details of such analyses vary considerably as to how they are performed and to whom the results are presented. However, about 80% conduct trend analysis across all audits, and nearly all companies communicate trending results to senior management.

  • Finally, we obtained no uniform pattern regarding the use of audit findings in formal quality programs such as the implementation and tracking of key performance indicators. However, 70% of all companies stated that they use trending results for process improvement.

For this survey we obtained a remarkably high response rate of 81%, and nearly all responses could be analyzed without further queries. We identified a relatively uniform process of audit conduct and reporting. Despite some common trends, impact and use of audit findings showed more variability. This variation may reflect differences in the organization, standards, and operations of the companies.

Acknowledgement

The authors would like to express their thanks to all participating companies for their valuable input, which made this analysis and publication possible.

Ursula Streicher-Saied,* PhD, is head of global quality management for Bayer HealthCare AG, Germany, email: ursula.streicher-saied@bayerhealthcare.com. Heiner Gertzen, PhD, is head of clinical quality and compliance, Europe, for sanofi-aventis, France. Arthur Hecht, Dipl.-Ing, is head of clinical quality assurance for Boehringer Ingelheim Pharma GmbH & Co. KG, Germany. Bettina Nusser is senior manager of clinical quality management, MSD, Germany. Dorette Schrag-Floß is head of global clinical quality assurance, ZLB Behring GmbH, Germany. Per-Holger Sanden is head of clinical quality assurance, Merck KGaA, Darmstadt, Germany.

*To whom all correspondence should be sent.

