Clinical trial sponsors seek rapid subject enrollment and high data quality, expressed both as strict adherence to good clinical practice (GCP) requirements and as completeness and correctness of the data collected from investigative sites. However, the most informative sources of detailed information on data quality, such as site monitoring visit reports and sponsor and CRO audit reports, are maintained as strictly confidential documents and are not publicly disseminated. Therefore, a substantial proportion of the information on data quality in clinical research that is available to the general public is based on anecdotal reports rather than well-referenced and organized observations. The U.S. Food and Drug Administration found no evidence of poor GCP compliance during inspections in the emerging clinical research countries, including Eastern Europe and the former Soviet Union.1-2
Figure 1. Worldwide recruitment by region. The numbers in the segments indicate actual number of patients recruited in the region.
The query generation rate (QGR), the number of case report form (CRF) data queries generated per subject, is one index of overall quality and performance. Minimizing QGR should reduce data management workload, costs, and time to database lock.
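As a minimal illustration of the index defined above, the sketch below computes QGR per investigative site as queries generated divided by subjects enrolled. The site names and counts are entirely hypothetical and are used only to show the arithmetic.

```python
# Minimal sketch (hypothetical data): query generation rate (QGR) per site,
# defined as the number of CRF data queries generated per enrolled subject.
queries_per_site = {"site_101": 48, "site_102": 12, "site_103": 30}   # hypothetical counts
subjects_per_site = {"site_101": 4, "site_102": 3, "site_103": 10}    # hypothetical counts

def query_generation_rate(queries: int, subjects: int) -> float:
    """QGR = CRF data queries / enrolled subjects (undefined for zero subjects)."""
    if subjects == 0:
        raise ValueError("QGR is undefined for a site with no enrolled subjects")
    return queries / subjects

qgr = {site: query_generation_rate(queries_per_site[site], subjects_per_site[site])
       for site in queries_per_site}
# e.g. site_101 enrolled 4 subjects and generated 48 queries: QGR = 12.0 queries/subject
```

A lower QGR is taken here as the favorable direction, consistent with the goal of reducing data management workload and time to database lock.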
There has been a great deal of discussion about the comparative quality of data from countries/regions participating in multinational trials.3-5 However, there is little debate that the recruitment rate per country is a direct indicator of workload that could influence data quality either positively or negatively. In general, a higher regional recruitment rate increases recall and retention of conventions for CRF completion due to more frequent training. A lower recruitment rate may diminish quality due to less frequent training.
Applying similar logic, one would also expect higher quality data from sites coming on board later in the trial than from those that started earlier, since later sites miss the initial "trial and error" period during which most of the potential ambiguities in protocol wording and CRF instructions are resolved.
The aim of this study was to evaluate query generation rate as an indicator of data quality using data from a recently completed Phase 3 GCP trial to identify major factors influencing quality. Recruitment rate, duration of participation in the trial, English language fluency, and the region where the data came from were all considered as covariates in this study.
We assessed the number of queries on a per investigator basis from a recently completed Phase 3 GCP trial performed by 118 investigators from 15 countries. We grouped the trial sites into three regions:
- Group WST (45 sites): the United States, Canada, Australia, and New Zealand
- Group EU (25 sites): Ireland, Greece, Italy, Spain, the United Kingdom, and Israel
- Group EE (48 sites): the Czech Republic, Georgia, Poland, Romania, and Russia.
All WST countries, the United Kingdom, and Ireland were considered as countries with English as a native/official language.
Of the 824 subjects recruited, 520 (63%) were enrolled in EE, 85 (10%) in EU, and 219 (27%) in WST (Figure 1).
Our analysis was not designed to provide solid statistical comparisons, but was rather an illustration of trends and a description of factors that could presumably influence quality. Numerical values are presented as means ± standard deviations.
Duration of participation in the study and recruitment rate.
Mean duration of participation in the study was longest for WST investigators (18±5 months), while EU and EE investigators participated in the trial for shorter periods (12±7 and 11±6 months, respectively). Recruitment rate in EE (1.3±1.6 subjects/month) was markedly higher than in other regions (0.4±0.4 and 0.3±0.5 subjects/month for EU and WST, respectively). Twenty-six sites (22%) enrolled only one subject each during the course of their participation in the study. Of those, two were from the EE region, 12 from EU, and 12 from WST.
Query generation rate. QGR was plotted against the recruitment rate and stratified by region and English language nativity (Figure 2). While a review of the general data shows an inverse relationship between recruitment rate and QGR (a higher recruitment rate is associated with somewhat lower QGR, Figure 2A), no such correlation could be noted in the EE sites, which showed low QGR regardless of the actual recruitment rate (4±1 queries/subject vs. 12±5 queries/subject for WST and 8±2 queries/subject for the EU sites, Figure 2B). Similar observations apply to the comparison of native English-speaking versus non-native English-speaking investigators (Figure 2C).
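The stratified comparison described above can be sketched as follows: per-investigator QGR values are grouped by region and summarized as mean ± standard deviation. The investigator records below are illustrative values only, not data from the trial.

```python
# Hedged sketch (hypothetical data): grouping per-investigator QGR by region
# and summarizing each group as mean and sample standard deviation.
import statistics

investigators = [  # (region, QGR in queries/subject) -- illustrative values only
    ("EE", 3.5), ("EE", 4.2), ("EE", 4.8),
    ("EU", 7.0), ("EU", 9.1),
    ("WST", 10.4), ("WST", 15.2), ("WST", 11.9),
]

def summarize_by_region(records):
    """Return {region: (mean QGR, sample standard deviation of QGR)}."""
    by_region = {}
    for region, qgr in records:
        by_region.setdefault(region, []).append(qgr)
    return {region: (statistics.mean(values), statistics.stdev(values))
            for region, values in by_region.items()}

summary = summarize_by_region(investigators)
```

The same grouping, applied per country or per English language nativity, yields the other strata shown in Figure 2.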
Figure 2. Relationship between the query generation (QGR) and the subject recruitment rates shown for all investigators (A) as well as split by region (B) and by English language nativity (C). Each filled circle corresponds to one investigator. Extremely low recruiters (average recruitment rate per month approaching zero) generally showed poor data quality expressed in QGR (number of queries per subject).
Plotting QGR against the length of the investigator's recruitment period showed no clear correlation between when sites joined the trial and QGR (Figure 3).
Our study was a descriptive attempt aimed at identifying factors affecting data quality. To the best of our knowledge, no similar analysis of QGR as a combined indicator of quality and performance has been published. However, observations of fewer queries per CRF in Eastern Europe have been published.6
Our study showed that clear differences in data quality exist between regions, revealed in an unbiased comparison using a simple index, QGR. Our study does not provide enough information to make statements concerning all reasons for that finding. Moreover, while an inverse correlation between recruitment rate and QGR was observed in the study, this was not the case for Eastern European investigators, who recruited well and, on average, showed a low rate of mistakes. The quality of the EE group's performance showed little correlation with workload as expressed by subject recruitment rate. One may associate this with cultural differences that have been extensively discussed elsewhere.7-8
We believe, however, that one myth of clinical research in a multinational arena has been debunked as a result of our analysis. Previous theories have suggested that the language barrier existing outside the English speaking regions, in Eastern Europe in particular, negatively influences the quality and integrity of data collected in multinational studies. Our findings support the opposite hypothesis, which itself raises interesting questions.
Figure 3. Correlation between the query generation rate (QGR) and the duration of the recruitment period (months). A shorter recruitment period indicates that a site joined the trial at a later stage.
Why should such a marked difference in QGR exist between native English investigators and others, whose knowledge of the English language may vary from excellent to very elementary? Among possible (mainly speculative) explanations is that native fluency in English may invite nuanced interpretations of what the CRF is requesting for certain data fields, resulting in site data recording errors and, ultimately, more queries.
We analyzed the data from only one trial, limiting our ability to generalize our findings. We also understand that the majority of sites that contributed data to the trial were previously unfamiliar with the sponsor's CRFs. Therefore, they may provide a purer index of expected data quality than if we were to compare the QGR data obtained from a series of similar trials using the same sites and the same or similar CRFs across studies. Nonetheless, performing QGR or similar comparisons on a large database from recently performed or currently running multinational Phase 3 mega-trials would substantially improve our understanding of the factors affecting data quality, and could potentially affect data management plans and investigator selection.
We invite all those interested in the topic to join us in further discussion and cooperation.
1. P. Platonov, "Clinical Trials in Russia and Eastern Europe: Recruitment and Quality," International Journal of Clinical Pharmacology and Therapeutics, 41 (7) 277-280 (2003).
3. R. Coker and M. McKee, "Ethical Approval for Health Research in Central and Eastern Europe: An International Survey," Clinical Medicine, 1 (3) 197-199 (2001).
4. E. Friederichs, S.G. Spitzer, R. Bach, "Efficacy and Quality in Clinical Trials: Requirements to the Investigator Site," Arzneimittelforschung, 44 (2) 182-184 (1994).
5. N.J. Dent, "Is ICH Exportable Outside the European Union?" Quality Assurance, 8 (1) 19-31 (2000).
6. D. Babic and I. Kucerova, "Benchmarking Clinical Trials Practices in Central and Eastern Europe," Applied Clinical Trials, May 2003, 56-58.
7. J. Demeter, "Selecting Sites and Investigators. An Approach for Central and Eastern Europe," Applied Clinical Trials, March 2002, 56-66.
8. S. Varshavsky, "Discover Russia for Conducting Clinical Research," Applied Clinical Trials, March 2002, 74-80.
The authors would like to acknowledge and express their gratitude to Dr. Erik Ruuth (Aventis Pharma, France). Dr. Ruuth's energy and enthusiasm inspired many productive discussions, which ultimately led to the initiation of this evaluation.