Central RBM Supports Reduced Cost, Higher Data Quality

Article

Applied Clinical Trials

May 1, 2020
Volume 29
Issue 5

Implementing centralized risk-based monitoring can help meet strict GCP requirements for study conduct, oversight, and recording.

Good clinical practice (GCP) focuses on data quality and integrity. Clinical trial sponsors must demonstrate strict oversight of studies to ensure proper conduct, safety of study subjects, and accuracy and completeness of clinical data.

Centralized risk-based monitoring (RBM) of clinical trials greatly enhances this process. Traditionally, oversight of a clinical study has relied on on-site data monitoring, performed by or on behalf of the sponsor, with monitors visiting each study site at defined intervals to perform 100% source data verification (SDV). This approach is labor-intensive, costly, and inefficient.

Electronic CRFs and technology enhance trial oversight

Increasing use of electronic case report forms (eCRFs) has opened the door to alternatives that offer more efficiency and cost advantages than the 100% SDV approach. The status of each eCRF (entered, not entered, or completed, and whether it carries open queries) provides valuable, immediate information to the statistician. A dashboard is a useful tool: it visualizes these study progress metrics and offers insight into the study's data at a glance. The centralized RBM alternative improves monitoring cost-effectiveness without compromising quality or integrity. It identifies the trial areas at greatest risk and implements targeted measures and controls to manage trial quality. Additionally, RBM helps improve clinical trial design, conduct, oversight, recording, and reporting, while ensuring human subject protection and the reliability of trial results.
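As a rough illustration of the kind of status summary a dashboard might draw on, the sketch below aggregates a hypothetical eCRF export by site. The table layout and column names (site_id, form_status, open_queries) are invented for the example; any real EDC export will differ.

```python
# Minimal sketch: summarizing eCRF status per site as a dashboard feed.
# The data frame and its column names are hypothetical examples.
import pandas as pd

ecrf = pd.DataFrame({
    "site_id":      [101, 101, 102, 102, 103],
    "form_status":  ["completed", "entered", "not entered",
                     "completed", "entered"],
    "open_queries": [0, 2, 0, 1, 4],
})

# Count of forms in each status per site, plus total open queries.
summary = pd.crosstab(ecrf["site_id"], ecrf["form_status"])
summary["open_queries"] = ecrf.groupby("site_id")["open_queries"].sum()
print(summary)
```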

With ongoing RBM, cumulative data can be examined at the subject and site levels, flagging potential errors that must be queried or systematic process errors at a site (e.g., measurements that look too low or too high compared with other sites). The data monitoring team can then take remedial action; for example, a finding could trigger an on-site monitoring visit or further site training. Review of query rates by site, subject, or form can reveal possible data quality issues. Quality tolerance limits (QTLs) can be set and monitored to focus resources on vulnerable areas and to guide the level of action required. Centralized monitoring is discussed and encouraged in FDA guidance1 and a European Medicines Agency (EMA) reflection paper.2
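As one hedged sketch of such a centralized check, the code below flags sites whose mean for a measurement sits far from the other sites' means, using a robust z-score. The simulated blood-pressure data and the 2.5 cutoff are illustrative choices, not a validated monitoring rule.

```python
# Minimal sketch: flag sites whose mean measurement looks too high or too
# low relative to the other sites. Simulated data; cutoff is illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "site_id": np.repeat([101, 102, 103, 104], 25),
    "sbp": np.concatenate(
        [rng.normal(m, 10, 25) for m in (120, 122, 119, 145)]
    ),  # site 104 runs suspiciously high
})

site_means = data.groupby("site_id")["sbp"].mean()
# Robust z-score of each site mean against the median of site means.
mad = (site_means - site_means.median()).abs().median()
robust_z = (site_means - site_means.median()) / (1.4826 * mad)
print(robust_z[robust_z.abs() > 2.5])  # candidates for follow-up action
```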

Key risk indicators detect potential issues 

Key risk indicators (KRIs) are critical data points, study variables, or operational data that can be measured throughout a study to detect potential compliance issues before they become a problem. Operational data can highlight site-level concerns but may have limited direct impact on subject safety and data integrity. KRIs can be visualized in a dashboard format for ease of monitoring. Dashboards automate the integration of the KRI datasets and help users see outputs at a glance, spot trends, and compare metrics side by side. An example of a KRI is the duration of open queries (see Figure 1).
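A minimal sketch of how this KRI might be computed from a query log is shown below; the table layout and the arbitrary "as of" date are assumptions for illustration.

```python
# Minimal sketch: "duration of open queries" KRI per site, assuming a
# hypothetical query log with opened/closed timestamps.
import pandas as pd

queries = pd.DataFrame({
    "site_id": [101, 101, 102, 103, 103],
    "opened": pd.to_datetime(["2020-01-05", "2020-01-20", "2020-01-07",
                              "2020-01-02", "2020-01-15"]),
    "closed": pd.to_datetime(["2020-01-12", None, "2020-02-25",
                              "2020-01-06", None]),
})

as_of = pd.Timestamp("2020-03-01")  # still-open queries age to this date
queries["days_open"] = (queries["closed"].fillna(as_of)
                        - queries["opened"]).dt.days

kri = queries.groupby("site_id").agg(
    median_days_open=("days_open", "median"),
    still_open=("closed", lambda s: s.isna().sum()),
)
print(kri)  # per-site values ready for a dashboard
```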

[Figure 1. Example KRI dashboard: duration of open queries]

Quality tolerance limits can trigger investigations

A QTL is a level, point, or value associated with a trial variable; a deviation beyond it should trigger an investigation to determine whether a systematic issue (i.e., a trend) has occurred. QTLs are essential to safeguarding the integrity of a trial, including its key endpoints and patient safety. Whereas KRIs typically prompt risk mitigation actions at the site level, QTLs are monitored at the trial level and are predefined before the trial commences, based on a review of historical data from similar trials and, where possible, statistical methods and modeling.
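As a simple hedged sketch of predefining a QTL from historical data, the code below sets the limit at the 95th percentile of deviation rates seen in similar past trials. Both the historical values and the percentile rule are illustrative assumptions, not a recommended method.

```python
# Minimal sketch: predefine a trial-level QTL for the protocol deviation
# rate from historical rates in similar trials (values are invented).
import numpy as np

historical_rates = np.array([0.04, 0.06, 0.05, 0.07, 0.05,
                             0.08, 0.06, 0.05, 0.04, 0.07])
qtl = np.percentile(historical_rates, 95)  # illustrative percentile rule
print(f"QTL (deviations per subject): {qtl:.3f}")

observed_rate = 0.09  # current trial-level rate from cumulative data
if observed_rate > qtl:
    print("QTL breached: investigate for a possible systematic issue")
```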

A QTL for protocol deviations can be created and tested using simulated data. Unusually high levels of deviations may indicate an issue at a site, while unusually low levels may indicate underreporting; a QTL should identify both. Investigating in real time increases the chance of determining root causes. For example, deviations arising from missed protocol-required assessments may be due to inadequate resources, overlooked training needs, or equipment failure. Early identification of the root cause enables corrective actions or procedures to be put in place and the subsequent impact on trial quality to be mitigated.
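One way to test such a rule on simulated data, sketched below under assumed numbers, is to compare each site's deviation count with the pooled rate using two-sided Poisson tail probabilities, so that both excess deviations and suspiciously few deviations are flagged.

```python
# Minimal sketch: simulated protocol deviations, flagging sites that are
# unusually HIGH (possible site issue) or LOW (possible underreporting).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
subjects = np.array([40, 35, 50, 45, 38])      # enrolled per site
counts = rng.poisson(0.15 * subjects)          # simulated deviation counts
counts[2] = 1                                  # site 3: suspiciously low
counts[4] = 18                                 # site 5: suspiciously high

expected = counts.sum() / subjects.sum() * subjects  # pooled-rate expectation
p_low = stats.poisson.cdf(counts, expected)          # too few deviations?
p_high = stats.poisson.sf(counts - 1, expected)      # too many deviations?
for site, (lo, hi) in enumerate(zip(p_low, p_high), start=1):
    if lo < 0.025:
        print(f"site {site}: possible underreporting (p={lo:.3f})")
    elif hi < 0.025:
        print(f"site {site}: excess deviations (p={hi:.3f})")
```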

Easy visualization of the data against the QTL is key to successful RBM implementation. The example in Figure 2 below plots observed values against calculated limits so that breaches can be identified and seen easily in a dashboard format.

[Figure 2. Example QTL plotted against calculated limits]

Statistical methods can help identify patterns

Centralized monitoring provides access to cumulative data across sites, and statistical methods help find unusual or implausible patterns in those data that may indicate, for example, potential manipulation. Adverse event (AE) rates are one such pattern: if one site has a comparatively low rate of AEs, this might indicate under-reporting, or difficulty classifying AEs based on symptoms, and should be flagged for further investigation. Other patterns that can be used to identify potential data issues include the following (one such check is sketched after the list):

Lack of variability: If a site or subject shows much less variability in a measurement than other subjects/sites, this may indicate the data is not real and trigger further investigation.

Digit preference: Data invented by people tend to show preferences for certain digits, such as round numbers. The data can be examined to see whether any digits occur at higher rates than expected.

Inliers: Clusters of values very close to the mean may indicate fabricated data. 
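As a hedged sketch of the digit-preference check mentioned above, the code below compares observed terminal digits against a uniform distribution using a chi-square goodness-of-fit test; the simulated "genuine" and "suspect" series are invented for illustration.

```python
# Minimal sketch: digit-preference check via chi-square goodness of fit on
# terminal (last) digits. Both digit series are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
genuine = rng.integers(0, 10, size=200)   # roughly uniform last digits
suspect = rng.choice(10, size=200,        # heavy preference for 0 and 5
                     p=[0.30, 0.05, 0.05, 0.05, 0.05,
                        0.30, 0.05, 0.05, 0.05, 0.05])

for label, digits in [("genuine", genuine), ("suspect", suspect)]:
    observed = np.bincount(digits, minlength=10)
    chi2, p = stats.chisquare(observed)   # uniform expectation by default
    print(f"{label}: chi2={chi2:.1f}, p={p:.4f}")  # small p => preference
```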

Centralized RBM: A successful clinical trial

In summary, centralized RBM helps streamline trials while alleviating labor-intensive and costly SDV. At the same time, it improves data quality by guiding and prioritizing site visits and by setting and monitoring QTLs using statistical methods. All of this supports the GCP mission of strict oversight and of improved, more efficient approaches to clinical trial design, conduct, oversight, recording, and reporting, while ensuring participant safety and the accuracy and completeness of clinical study data.

 

Sheelagh Aird is Head, Clinical Data Operations, PHASTAR

 

References

1. FDA Guidance for Industry: Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring, Draft Guidance. http://www.fda.gov/downloads/Drugs/Guidances/UCM269919.pdf

2. EMA Reflection Paper on Risk-based Quality Management in Clinical Trials. https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-risk-based-quality-management-clinical-trials_en.pdf
