An extended survey was conducted on the topic of clinical quality assurance (CQA) benchmarking in pharmaceutical companies to gather new data. It had been some years since Barnett's articles had been published.1-3 For this purpose, CQA was defined as the corporate department or function that uses auditing, QA-consultation, and other QA activities to assess and support the compliance of clinical development projects with good clinical practice (GCP) standards.
As a survey instrument, a questionnaire was developed by a working group within the Verband Forschender Arzneimittelhersteller e.V. (German Association of Research-Based Pharmaceutical Companies) GCP/CQA committee. This included a pilot, feedback from the committee, and subsequent improvements.
This questionnaire was distributed in November 2000 to 43 global VFA member companies to collect CQA benchmarking information. Answers were received from 23 companies. From these, 19 were evaluated after extensive cross-checking and querying. The following results are based on the 19 respondents' answers.
Companies were categorized into three groups according to size, indicated by turnover, employees worldwide, and R&D budget, as shown in Table 1.
In order to gather information regarding CQA reporting lines, organizational charts were requested. The answers were assigned to four different reporting structures as shown in Table 2.
CQA departments in smaller companies report to higher hierarchical levels (that is, to the board) than in medium-sized or large companies, where the CQA function often reports into a global function covering GCP, GLP, and GMP. The relationship between these three "good practice" units is shown in Table 3.
Typically, CQA is either placed alone or together with both GLP and GMP-QA, but never only with GMP. There is no obvious relationship between good practice organization and company size. For three smaller companies, CQA is combined with GLP-QA, which may be due to a closer relationship between preclinical and clinical development activities and the relatively small size of the QA units.
Capacity is an important parameter of benchmarking. Figure 1 shows the average number of CQA staff, differentiated between CQA professionals and CQA administrative staff for large, medium, and small companies. Figure 2 shows the personnel capacity in relation to the R&D budget.
This relationship shows two main features. First, a certain critical mass of staff is necessary to establish basic CQA activities. This is seen from the cluster of companies in the lower-left corner of the graph.
Second, QA units become larger with increasing R&D budgets. However, this relationship does not seem to be linear: after a steep increase up to 10-15 persons, the slope becomes much flatter, as reflected by the logarithmic fit.
Figure 3 shows that global companies are represented with CQA units in all three ICH regions (the EU, the United States, and Japan), while this is not necessarily the case for smaller companies.
In order to describe current audit practices, information about the individual CQA audit strategies was collected for the main audit types and the different development phases, where Phase III was subdivided into pivotal (IIIp) and supportive (IIIs). Each response was classified into one of the following audit strategies:
All. All trials of a specific phase are audited.
Important. Only important trials are audited.
Sample. A sample of trials is audited with different sample sizes. (For this strategy, in contrast to the important strategy, the sampling criteria could not be determined in detail.)
System. No individual trials are audited; only the process or part of the process of clinical trials.
None. No trials of a specific development phase are audited.
Trial site audits. Except for pivotal Phase III trials, the distribution of audit strategies is quite similar for all phases of clinical development. The importance of the trial is the primary criterion for auditing. This is followed by a less-specific sampling strategy. Only a few companies audit all clinical trials.
More than 50% of respondents audit all pivotal Phase III trials. As pivotal Phase III trials are important by definition, we combined the numbers from the all and important categories, which shows that nearly 70% of companies audit all pivotal Phase III trials. Nevertheless, about 30% of CQA departments take different approaches (five use the sampling method, one uses the system method), even for pivotal Phase III trial audits. These results are summarized in Figure 4.
If, according to the listed strategies, a trial is selected for audit, the sample size is about 15%-20% of active sites for Phases II and IIIp. This is reduced to about 10%-15% for Phase IIIs and to 5%-10% for Phase IV.
Document audits. As shown in Table 4, only a few CQA units do not perform protocol or report audits. However, more than one-third do not perform CRF audits. The sample of audited documents is mostly determined by development phase or importance.
Supplier audits. For supplier audits (in particular, CROs and laboratories), the following was observed: Hardly any company audits all suppliers. The sample is determined by the importance of the study and the frequency of vendor use. Some companies indicated that they have a preferred partner concept in place.
System audits. Regarding system audit strategies, a clear majority of companies (84%) adopt a system audit approach for safety reporting. For the other areas, such as data management or study monitoring, about half of the companies favor a system audit approach beyond covering those areas in trial-related audits.
From our survey data, it cannot be excluded that some types of system audits, notably IT/validation audits or trial drug supply system audits, may in fact be conducted by other QA functions such as IT-QA or GMP-QA. With respect to study monitoring, some respondents indicated they consider trial site audits as a tool to effectively assess monitoring performance and therefore did not conduct system audits in addition. An overview of system audit strategies is provided in Figure 5.
Pre-inspection audits. For pre-inspection activities, companies show uniform behavior: Almost 90% of CQA units perform pre-inspection audits, support inspection preparation, and escort the inspectors.
A trial site audit consisting of an in-house and an on-site audit requires 72 hours on average, with a high standard deviation of 24 hours. This variation is also apparent in Figure 6, which shows results for the 19 individual companies. Outliers include company numbers 14 and 22. One reason for the additional time needed on-site may be due to the strategy of auditing routinely as a team of two auditors. Intense preparation may explain additional in-house time.
A remarkably low proportion of time for in-house activities was observed for company numbers 6, 13, and 20. This may be due to covering several sites within one combined in-house audit visit.
Despite the observed variation in the responses, similarities among companies are obvious. Figure 7 shows that 50% of the companies require between 53 and 83 hours, with a median of 68 hours for conducting a complete trial site audit.
Analyzing these times further, we looked at the components necessary for an audit: preparation, travelling, conduct, and postprocessing, separated for in-house and on-site audits. The results are shown in Figures 8a and 8b.
The conduct of an audit at the investigational site accounts for less than 25% of the overall time spent for an on-site audit. The main variability comes from differences in preparation and postprocessing times (ranging from 3-35 hours and 4-48 hours for on-site audits, and 0-16 hours and 1.5-24 hours for in-house audits).
Pre-inspection audits require about 14 hours more than trial site audits.
Figure 9 shows the average number of audits of the four main types per year for large, medium, and small companies. For all companies, trial site audits account for about half or more of their audits.
Smaller companies perform a higher proportion of document audits (45% vs. 26% for both the large and medium-sized groups). In contrast, they perform a lower proportion of system and supplier audits. Even large global companies, which show the highest proportion of system audits, conduct only about 6% of all their audits as system audits.
The proportion of supplier audits increases with company size, comparable to system audits.
As shown in Table 5, the process of audit reporting is very similar among companies. They use a multistep procedure with built-in QC steps (mostly peer review). The only exception is the expedited reporting of critical issues.
Involvement of CQA units in corrective action is less uniform with regard to providing recommendations and reviewing the corrective actions performed. The majority of CQA units evaluate and categorize audit findings; however, the analysis of findings across audits is done less frequently.
Twelve out of 19 companies outsource some audit activities. The proportion of outsourced activities is up to 25% for 10 of these 12 companies.
Most companies estimate that the additional in-house CQA capacity needed to handle and supervise outsourced audits is up to 30%. Details are shown in Table 6, including responses from three companies that did not outsource during the period of this survey.
All companies that outsource CQA activities do so for trial site audits. Half of them also contract out in-house audits. All other audit types are contracted out to a smaller extent.
Audits are mostly contracted out to independent auditors or to CROs specializing in auditing. For 75% of the companies, this is combined with a preferred auditor concept.
Almost 75% of the CQA units have their own budget for outsourcing audit activities.
Consultancy, standard operating procedures (SOPs), and training are covered by almost all CQA units. About half of the units are involved in IT validation activities, whereas QC activities are rarely performed by CQA. For details, see Table 7.
While CQA is normally involved in both SOP and training activities, it is rare that CQA is managing these activities. Details are shown in Table 8.
To summarize the benchmarking results, the degree of similarity in practices across companies for the areas covered by the survey was assessed and is shown in Table 9.
As the main purpose of this article is to present objective CQA benchmarking data, no further interpretation is attempted here. It is neither possible nor appropriate to define a "gold standard" for CQA activities. However, the areas with a high degree of similarity might be used as an indicator for common industry practice.
1. S.T. Barnett, F. Harwood, R.D. Zuercher, "Clinical Quality Assurance Units: Trends & Structures," Applied Clinical Trials, October 1994, 42-52.
2. S.T. Barnett, "Assessing Clinical Quality Assurance Units," Applied Clinical Trials, June 1997, 40-50.
3. S.T. Barnett and R.W. Croswell, "Structuring the Quality Assurance Function," Drug Information Journal, 32 (3) 629-637 (1998).
Heiner Gertzen, PhD, is head of GCP Quality Assurance Europe at the European Drug Development Center of Aventis Pharma, Paris, France, +33 1 5571 6107, email: email@example.com. Arthur Hecht, Dipl-Ing, is head of clinical quality assurance at Boehringer Ingelheim Pharma GmbH & Co KG. Eva B. Ansmann, MD, is an independent consultant for GCP-QA and CRO-selection with INCROSS-QC.