Ensuring Quality in Clinical Research with Centralized Statistical Monitoring, On-Site Monitoring, and Clinical Data Management


Organizations implementing RBM continue to struggle with a number of questions regarding the relative contributions to quality of on-site monitoring, centralized statistical monitoring, and clinical data management reviews—and what role each activity should play.

The clinical research industry is actively adopting risk-based monitoring (RBM) and risk-based quality management (RBQM) as primary methods for managing quality. There is growing evidence supporting both the importance of centralized statistical monitoring (CSM) and the relatively low value of Source Data Verification (SDV) and Source Data Review (SDR). One compelling piece of evidence is a quantitative analysis conducted by TransCelerate, published in 2014 in DIA's journal Therapeutic Innovation & Regulatory Science, which revealed that, on average, SDV impacts only one percent of site-entered case report form (CRF) data.1 Nevertheless, organizations implementing RBM continue to struggle with a number of questions regarding the relative contributions to quality of on-site monitoring, centralized statistical monitoring, and clinical data management reviews—and what role each activity should play.

This article reports on the outcome of a pertinent use-case, which presented a unique opportunity to gain further insight and clarity regarding these questions. The use-case involves a global clinical trial for which an issue was uncovered near completion of the study execution phase. In particular, nine investigative sites were found to have had a gap in on-site monitoring. To address the situation, the study sponsor conducted thorough monitoring of all nine sites, including 100% SDV/SDR of all subject records. This corrective monitoring activity resulted in the identification of a number of data discrepancies across the nine sites, including source-to-electronic case report form (eCRF) transcription errors and missing patient log information such as adverse events (AEs), concomitant medications (ConMeds), and medical histories (MedHx).

Following study completion, the sponsor team, in collaboration with CluePoints, identified an opportunity to retrospectively evaluate the relative contributions of CSM, on-site monitoring, and clinical data management reviews to clinical research quality. This article describes the approach taken for this evaluation, presents the results, and discusses the implications for optimal quality management.

Corrective monitoring exercise

The corrective site monitoring exercise was conducted on a global study evaluating patients with a chronic disease. Over 1,000 patients were enrolled in the study across more than 90 participating sites. The study was managed under a functional outsourcing model, in which site monitoring and data management services were provided by a CRO partner. RBM/RBQM methods were not leveraged for quality management: there was no centralized statistical monitoring or related risk assessment, and the site monitoring plan called for regular visits to each site with 100% SDV/SDR coverage.

It was discovered near completion of the study that nine sites located in the same country had not received on-site monitoring per the plan following site initiation. One of the corrective actions taken was to send monitors to each of the nine sites to perform a thorough SDV/SDR of all relevant patient source records. All findings were documented, and corrections were made as appropriate to patient eCRF data. The findings from this corrective monitoring effort are summarized in Table 1 below and fall into four distinct categories:

  1. Missing AEs—adverse events that were evident in patient source records but not recorded in the eCRFs.
  2. Missing ConMeds—concomitant medications that were evident in patient source records but not recorded in the eCRFs.
  3. Missing MedHx—medical histories that were evident in patient source records but not recorded in the eCRFs.
  4. Data Corrections—data entered into the eCRF that needed to be corrected to match corresponding data in the patient source records.

Table 1: Corrective monitoring findings—summary

A total of 131 discrepancies requiring corrective action were identified across the four categories. The most common discrepancy by volume was missing ConMeds, with a total of 46, compared to missing AEs with a total of 33. The percentage of total reported AEs that were recovered (11.0%), however, was quite similar to the percentage of recovered ConMeds (10.9%). Table 2 presents a summary of these results.

Table 2: Percentage of AEs and ConMeds found via SDV
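
For clarity on how these recovery rates are computed: the denominator is the final total of events reported at the nine sites after corrective monitoring. Those totals are not stated explicitly above, so the figures below are inferred from the reported counts and percentages.

$$
\frac{33\ \text{recovered AEs}}{\approx 300\ \text{total AEs}} \approx 11.0\%,
\qquad
\frac{46\ \text{recovered ConMeds}}{\approx 422\ \text{total ConMeds}} \approx 10.9\%
$$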

This result provides important confirmatory evidence for the same metric reported by TransCelerate in the 2014 Therapeutic Innovation & Regulatory Science article.1 The TransCelerate analysis—performed retrospectively on 1168 completed studies—applied a less direct method (temporal association) to identify AEs that were likely to have been reported into the sponsor eCRF only following relevant on-site SDV activity. The results suggested that the percentage of total reported AEs discovered via on-site SDV fell into a range between 7.5% and 11.8%.

While the sample size for the corrective SDV exercise is relatively modest (nine sites, 95 subjects), it provides a more direct observation of the percentage of total AEs recovered specifically by on-site SDV. The measured rate of 11.0% is congruent with the TransCelerate range of 7.5% to 11.8%. Additionally, the measured rate of 10.9% of ConMeds found via SDV (a metric not included in the TransCelerate analysis) provides at least a strong indication that the expected frequency of ConMeds found by SDV should be in a similar range to that of AEs.
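
For readers curious how a temporal-association analysis of this kind can be implemented, the sketch below flags AEs whose eCRF entry date falls shortly after an on-site monitoring visit at the same site. This is a simplified reconstruction, not TransCelerate's actual method or code; the data layout, column names, and seven-day window are all assumptions made for the example.

```python
import pandas as pd

# Hypothetical extracts: one row per AE eCRF entry, one row per on-site
# monitoring visit. Column names and layout are assumed for the example.
aes = pd.DataFrame({
    "site_id":    ["S01", "S01", "S02"],
    "entry_date": pd.to_datetime(["2023-03-10", "2023-05-02", "2023-04-15"]),
})
visits = pd.DataFrame({
    "site_id":    ["S01", "S02"],
    "visit_date": pd.to_datetime(["2023-03-08", "2023-01-20"]),
})

WINDOW_DAYS = 7  # assumed window for "entered shortly after a visit"

def flag_post_visit_entries(aes: pd.DataFrame, visits: pd.DataFrame) -> pd.Series:
    """Mark AEs entered into the eCRF within WINDOW_DAYS after any
    monitoring visit at the same site (a proxy for 'found via SDV')."""
    merged = aes.reset_index().merge(visits, on="site_id", how="left")
    lag = (merged["entry_date"] - merged["visit_date"]).dt.days
    merged["hit"] = (lag >= 0) & (lag <= WINDOW_DAYS)
    # An AE may match several visits at its site; any match counts.
    return merged.groupby("index")["hit"].any()

aes["post_visit"] = flag_post_visit_entries(aes, visits)
print(f"{aes['post_visit'].mean():.1%} of AEs entered within "
      f"{WINDOW_DAYS} days after a monitoring visit")
```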

Centralized statistical monitoring exercise

Following completion of the study, the sponsor team and CluePoints collaborated to evaluate the impact of CSM in identifying risks across the study. A statistical engine referred to as SMART lies at the heart of the CluePoints platform. Originally designed to help detect potential fraud, SMART has been expanded over a period of 20 years to cover a more comprehensive set of quality-related issues, which are summarized in Table 3 below.

Table 3: CluePoints SMART engine—types of risk identified

This engine was executed against all of the eCRF data in the target study, across all sites and enrolled patients. The core output of the engine is a matrix of statistical scores (p-values), which enables identification of highly unusual data patterns at individual sites. On average, about five tests were applied per variable per site, meaning that many thousands of test results (p-values) were generated for the target study. An overall score is computed for each site as a weighted average of all of the site's p-values, enabling rapid identification of the most at-risk sites in the study.
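
The exact SMART computation is proprietary and not described here, but the general shape of the scoring described above can be sketched as follows. The matrix dimensions, the weights, and the -log10 transform (used so that smaller p-values contribute more) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical p-value matrix: rows = sites, columns = individual
# statistical tests (roughly five tests per variable in this study).
n_sites, n_tests = 90, 500
pvals = rng.uniform(size=(n_sites, n_tests))
pvals[41, :50] = rng.uniform(0.0001, 0.01, size=50)  # plant one atypical site

# Assumed per-test weights (e.g., reflecting test reliability); the real
# SMART weighting is proprietary, so this only shows the general shape.
weights = rng.uniform(0.5, 1.5, size=n_tests)

# Overall site score: weighted average over the site's p-values, on a
# -log10 scale so that very small p-values (unusual patterns) dominate.
scores = (-np.log10(pvals) * weights).sum(axis=1) / weights.sum()

ranked = np.argsort(scores)[::-1]  # sites ordered from most to least at-risk
print("Most at-risk sites:", ranked[:5])  # the planted site 41 ranks first
```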

An analyst reviewed the results and created formal “risk signals” to report all identified risks back to the study team. Each risk signal is a grouping of one or more unusual test results for a site that are related to each other—typically by data domain, variable, and/or test type. A total of 78 risk signals were created for this study, associated with 20 sites. Table 4 presents a sampling of the risks identified at six of the 20 sites, selected because they provide a representative view of the types of risks identified across all 20 sites.
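
To make the notion of a risk signal concrete, here is a purely illustrative sketch of such a grouping as a data structure; the field names and values are hypothetical, not the CluePoints schema.

```python
from dataclasses import dataclass, field

@dataclass
class RiskSignal:
    """One reportable risk: a group of related unusual test results
    observed for a single site."""
    site_id: str
    data_domain: str                      # e.g., "AE", "ConMeds", "Vitals"
    description: str
    test_results: list = field(default_factory=list)  # (variable, test, p-value)

# Hypothetical example; the test name and p-value are illustrative only.
signal = RiskSignal(
    site_id="6-H",
    data_domain="AE",
    description="Possible under-reporting of adverse events",
)
signal.test_results.append(("AE count per patient", "rate comparison", 0.004))
```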

Table 4: Sample of CSM findings

A variety of risks were identified across the 20 sites. Examples include high rates of non-compliance in completing patient assessments, a lack of expected variation in patient measurements and scores, and under-reporting of AEs and ConMeds. Interestingly, only one of the sites with risk signals in the CSM review—site 6-H—was among the nine sites that required corrective monitoring, and there were no common findings between corrective monitoring and CSM for this site: the corrective monitoring did not identify any of the risks uncovered by CSM, and CSM did not identify any of the data discrepancies discovered during corrective monitoring. It is also clear that the full on-site monitoring performed during the study was not effective at identifying the risks uncovered by CSM at these 20 sites.

The sponsor team conducted a thorough, cross-functional team review of all 78 risk signals generated from the CSM analysis. Table 5 below presents a representative sampling of individual CSM risk signals along with a summary of the sponsor team assessment for each signal.

Table 5: Sample CSM risk signals and sponsor team assessment

Fifty-three of the 78 risk signals—or 68% overall—were considered risks that would have warranted follow-up, either directly with the relevant site (42) or through further investigation (11). Additionally, the sponsor data management team identified an opportunity to enhance their data reviews (e.g., by adding programmed edit checks or other data-trending reviews) to support the detection and/or follow-up of 20 of the 78 risk signals. A summary of these assessment results is presented in Table 6.

Table 6: Summary of sponsor team assessment of CSM risk signals

Discussion

The use of CSM following completion of this study reveals some significant and related observations. First, the sponsor team concluded that CSM was effective at identifying significant data quality risks that were considered worthy of follow-up with the relevant sites. It is also clear that the traditional quality management activities performed during this study, including regular on-site monitoring and 100% SDV, were not effective at identifying the important risks found by CSM. This is not an unexpected result, since CSM methods statistically compare each site's data patterns to the trend across all other sites in the study. Individual site monitors simply do not have access to this contextual information, and so cannot recognize a given site's unusual data patterns. The same is true of traditional data management reviews and checks, which reinforces a key shortcoming in traditional quality management practices, one that can be addressed only by centralized monitoring that includes well-designed CSM methods.

The sample CSM findings presented in Table 5 help to reinforce this point. For example, site 9-B was found to have an unusually high rate of unresolved AEs (20 of 26, compared to just 33% across the study). An astute site monitor or data manager might conceivably have noticed this suspect pattern during their traditional reviews, but this is unlikely, since they do not generally have the context needed to make such assessments. In particular, they would have had no way of knowing that the trend across all sites and patients in the study was a much lower rate of unresolved AEs. Statistical methods also make it possible to quantify exactly how unlikely a given data pattern is, something that humans on their own are not adept at doing. The CSM statistical test that assessed the proportion of unresolved AEs at site 9-B yielded a p-value of 0.01, meaning that a proportion this extreme would be expected to occur by random chance no more than 1% of the time.
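
To give a feel for the kind of calculation involved, the sketch below runs a naive one-sided binomial test on the site 9-B numbers. Note that the reported p-value of 0.01 evidently comes from a test that also accounts for between-site variability, since the naive test, which treats the 33% study-wide rate as a fixed truth, yields a much smaller p-value.

```python
from scipy.stats import binomtest

# Site 9-B: 20 of 26 AEs unresolved, vs. roughly 33% unresolved study-wide.
result = binomtest(k=20, n=26, p=0.33, alternative="greater")
print(f"Naive one-sided binomial p-value: {result.pvalue:.1e}")

# This comes out far below 0.01 because it treats 33% as a fixed truth.
# A test that also models between-site variability (as a CSM engine must)
# is more conservative, consistent with the reported p-value of 0.01.
```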

A more sobering conclusion of this review is the prospect that our industry has been failing to identify and address a significant volume of important quality-related issues prior to the adoption of CSM methods. A review of marketing applications submitted to the FDA between 2000 and 2012 supports this conclusion, revealing that up to one-third (32%) of first-cycle review failures (up to 16% of submissions overall) were attributable to quality issues.2 Considering the immense investment of time, effort, and money needed to take new investigational products through clinical development, this is indeed a startling statistic and convincing evidence that traditional approaches to quality management have not been sufficient.

The fact that traditional site monitoring and data management reviews, along with other quality management activities, have not been sufficient to ensure requisite quality in many past studies, does not necessarily imply that these reviews are completely without merit. Instead, the results of this use-case support the conclusion that data management reviews and a modest, targeted level of on-site monitoring still play meaningful, complementary roles to CSM in the delivery of a high-quality study.

It is true, on the other hand, that the corrective monitoring of nine sites, which applied 100% SDV/SDR, was effective in identifying a number of data discrepancies that were not identified by CSM. The most significant outcome of the corrective monitoring effort was the discovery of a number of patient log events in source records that had not been transcribed into the patient eCRFs. As previously discussed, 11.0% of the final count of AEs reported at the nine sites were recovered through the corrective monitoring effort. The CSM tests did not identify any risk associated with AE or ConMed under-reporting at these nine sites, simply because the reporting rates at these sites prior to the corrective monitoring effort were already within a statistically expected range when compared to the reporting rates across all other sites in the study. More specifically, the variability in log event reporting across all sites in the study was large enough that an under-reporting of up to 30% would often keep such sites within the expected range.
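
A toy simulation makes this last point concrete. Under assumed, illustrative parameters (site-level AE rates drawn with a realistic amount of between-site spread), a site reporting 30% fewer events than it should can easily remain within two standard deviations of the study-wide mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed study: 90 sites whose true AEs-per-patient rate varies between
# sites with a coefficient of variation of ~40% (illustrative only).
true_rates = rng.gamma(shape=6.25, scale=0.48, size=90)  # mean 3.0, CV 0.4
observed = true_rates.copy()
observed[0] *= 0.70  # site 0 fails to report 30% of its AEs

z = (observed[0] - observed.mean()) / observed.std()
print(f"z-score of the under-reporting site: {z:.2f}")

# With ~40% between-site spread, a 30% shortfall typically lands well
# inside +/-2 standard deviations, i.e., statistically unremarkable.
```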

One might be tempted to conclude from this outcome that full on-site SDV/SDR should be retained at least for the purpose of recovering missed patient log events including AEs. However, the previously mentioned TransCelerate analysis predicted a similar range of AEs recovered by SDV, which indicates that there was nothing unusual or “out-of-bounds” for these nine sites with respect to log data transcription. The TransCelerate article proposed that only a modest level of SDV be retained in general, with a relatively higher focus of that SDV effort on AE reporting. The current use-case provides no evidence that would challenge these conclusions. Table 7 below provides some additional supportive evidence, specifically that none of the AEs recovered via corrective monitoring at the nine sites were of a serious or severe nature. Such an observation is understandable, since one would expect that investigative site staff are generally much more attentive to the management and reporting of significant AEs experienced by their patients than those that are mild (or otherwise insignificant) in nature. This implies a relatively lower overall risk related to the prospect of a modest level of un-recovered AEs.

Table 7: AE counts by seriousness and severity

It should additionally be noted that CSM methods are an important and effective tool for detecting potential AE (and other log event) under-reporting. Table 5 presents an example of this, where CSM identified AE and ConMed under-reporting at site 1-B, a risk that was not identified by the on-site monitoring reviews performed at this site. Similar findings emerged at several more of the 20 sites flagged as at-risk by CSM.

In addition to CSM methods, several data management reviews are also effective at identifying missing patient log events, and can further reduce reliance on the much more expensive practice of exhaustive direct source inspection (a sketch automating one of these checks follows the list):

  1. Review of abnormal lab test results to ensure that relevant AEs have been reported.
  2. Review of each patient’s AEs and ConMeds to ensure that expected ConMeds are present for each AE, and vice-versa.
  3. SAE reconciliation reviews, to ensure that all reported serious AEs are properly captured in the eCRF record.
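
As an illustration of how the second review in this list can be automated, the sketch below lists patients who logged an AE but no concomitant medication starting within a plausible treatment window. The table layout, column names, and 30-day window are assumptions for the example, not a standard.

```python
import pandas as pd

# Hypothetical eCRF extracts: one row per logged AE / ConMed.
aes = pd.DataFrame({
    "patient_id": ["P001", "P002", "P002"],
    "ae_term":    ["Headache", "Nausea", "Rash"],
    "onset_date": pd.to_datetime(["2023-02-01", "2023-03-05", "2023-06-10"]),
})
conmeds = pd.DataFrame({
    "patient_id": ["P001"],
    "drug":       ["Paracetamol"],
    "start_date": pd.to_datetime(["2023-02-01"]),
})

WINDOW_DAYS = 30  # assumed window linking an AE to a treating medication

# For each AE, look for any ConMed started within the window after onset.
merged = aes.reset_index().merge(conmeds, on="patient_id", how="left")
lag = (merged["start_date"] - merged["onset_date"]).dt.days
merged["treated"] = lag.between(0, WINDOW_DAYS)
flags = merged.groupby("index")["treated"].any()

# AEs with no plausibly related ConMed: candidates for a data query.
print(aes.loc[~flags.values, ["patient_id", "ae_term"]])
```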

Final thoughts

CSM is, at least in its current form, primarily focused on identifying operational study risk in the form of atypical patterns of data at specific sites (or patients, countries, etc.). The focus of data management reviews and on-site SDV, on the other hand, is on the identification and resolution of discrepancies pertaining to individual patient data records or values, not primarily on the identification of statistically unusual data trends or patterns. CSM therefore operates at a higher, more contextual level, which is effective at identifying site and study issues that are likely to be more significant and impactful than those identified by transactional, record-by-record inspection processes. Well-placed data management reviews and automated EDC queries still play a value-added role in the overall data quality management process, especially as they represent relatively low-cost activities with a meaningful return on investment. A much more modest level of on-site monitoring also continues to be of value, especially if it removes reliance on SDV as a primary monitoring activity.

Steve Young is the CSO and Marthe Masschelein is the Manager, Data Analysts, both at CluePoints.

References

  1. “Evaluating Source Data Verification as a Quality Control Measure in Clinical Research.” Therapeutic Innovation & Regulatory Science. 2014;48(6):671-680.
  2. http://jama.jamanetwork.com/article.aspx?articleid=1817795