Paradigm Shift

Article

Applied Clinical Trials, August 1, 2013
Volume 22, Issue 8

Risk-based monitoring ushers in a move away from the traditional experimental design.

Clinical trials follow the premise of scientific discovery: a hypothesis holds that a product will perform to an expected result, and an experiment is run to test it. Results from the experiment refine the hypothesis, leading to further theory-driven experiments that confirm the product performs as expected. Human trials require dual hypotheses: not only must the product work, it must also be safe.

Does the product work?

Most of us were taught early in school to control the parameters of a science experiment: the beakers must be sterile, the water distilled, and so on, so that the environment is controlled and the results are predictable and repeatable. Similarly, the clinical trial process seeks control of all inputs. Monitoring processes concentrating on 100% source data verification were born from these practices. However, it is slowly being recognized that predictable study results require verifying that the critical values are correct, not that every variable is correct.

Is the product safe?

If the experiment is controlled and its values verified, does this ensure product safety? Or does it merely confirm that the risks we postulated in the study design were appropriate, while saying nothing about the true safety profile? If monitoring safety does not affect the outcomes of product effectiveness, then vigilance over incoming information becomes a higher priority.

Risk-based monitoring and program optimization

A clinical trial experiment is defined by its protocol and the supporting plans that execute the various aspects of its design. Risk-based monitoring (RBM) is a paradigm shift from traditional experimental design and execution in that it brings focus. RBM is a collection of techniques for focusing on what is important to the study: ensuring critical data are correct and uncovering missing or hidden data to provide the fullest possible safety profile of the product. One way to achieve this is through quality by design: begin with the protocol to state the initial questions that must be answered, then pursue the answers through the execution of the trial.

The risks for a clinical trial can be broken down into four parts:

  • The product does not align with the intended population

  • Data capture is incomplete or incorrect

  • Data signals and patterns during the study go undetected

  • Action is not taken

The first item depends on the science of the product. As trial designs allow more fluidity in responding to the information collected through new technologies (biomarkers and statistical techniques), we expect trial optimization to change dramatically over time. The other items are within our control using current technologies. Data capture systems now support edit checks that send queries back to the sites to verify data correctness. As data are collected, patterns emerge in what the experiment is telling us. These patterns can be both biological and operational. Reviewing a particular lab test, with its values on scatter plots against the normal range, yields one series of data patterns to respond to. A single high hematocrit value in a blood pressure study may mean nothing, whereas a pattern of high or low values should give pause. Similarly, operational patterns can reveal potential risks. A site that consistently enters data months after patient visits may indicate risk and a need for additional monitoring. In both examples, risks can be predetermined so that reports drive attention to the abnormalities. For a successful clinical trial, one must continuously look at risks and adjust the monitoring plans and communications accordingly.
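
The following is a minimal sketch, in Python, of how such predetermined checks might be encoded. The normal range, the three-values-in-a-row pattern rule, and the 30-day entry-lag threshold are illustrative assumptions, not values from this article.

    from datetime import date

    # Assumed normal hematocrit range, for illustration only.
    HEMATOCRIT_RANGE = (0.36, 0.50)

    def flag_lab_pattern(values, normal_range, min_consecutive=3):
        """True if min_consecutive or more successive values fall outside
        the normal range: a pattern, not a single outlier."""
        lo, hi = normal_range
        run = 0
        for v in values:
            run = run + 1 if (v < lo or v > hi) else 0
            if run >= min_consecutive:
                return True
        return False

    def flag_late_entry(visit_dates, entry_dates, max_lag_days=30):
        """True if a site's median visit-to-entry lag exceeds max_lag_days."""
        lags = sorted((e - v).days for v, e in zip(visit_dates, entry_dates))
        return lags[len(lags) // 2] > max_lag_days

    # A single high value does not trip the flag; a sustained run does.
    print(flag_lab_pattern([0.44, 0.55, 0.43], HEMATOCRIT_RANGE))        # False
    print(flag_lab_pattern([0.53, 0.55, 0.56, 0.44], HEMATOCRIT_RANGE))  # True

    # A site entering data six weeks or more after visits is flagged.
    print(flag_late_entry([date(2013, 1, 10)] * 3,
                          [date(2013, 3, 1), date(2013, 3, 5),
                           date(2013, 2, 20)]))                          # True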

There are many factors to consider when operationalizing the reduction of study data collection risk. Most of them can be navigated with planning and process. Well-defined data monitoring roles belong high on this list, particularly in the distinction between source data verification and source data review. Verification is the quality control of transcription from the source data to the electronic data capture system, while review is a process that should be performed by individuals with a medical background. The first asks what is missing during transcription; the second asks what is missing based on supplementary data. A patient taking migraine medication within an hour of dosing study medication should perhaps have an adverse event recorded for the migraine as well. This medical review can be performed off-site, in a central location, by either clinical data management or clinical monitoring, depending on the qualifications of the personnel. What is important is that it be a data-driven review. Communication plans and processes must be in place to ensure such information is brought forward and given focus.
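
As a concrete illustration, the following sketch cross-checks concomitant medications against recorded adverse events, flagging the migraine case described above for a site query. The record layout, field names, and one-hour window are hypothetical, chosen only to show the shape of a data-driven review.

    from datetime import datetime, timedelta

    # Hypothetical study records; field names are invented for this sketch.
    DOSE_TIME = datetime(2013, 8, 1, 9, 0)
    conmeds = [
        {"subject": "001", "drug": "sumatriptan", "indication": "migraine",
         "taken": datetime(2013, 8, 1, 8, 30)},
    ]
    adverse_events = []  # no migraine AE recorded -> candidate for a query

    def missing_ae_queries(conmeds, adverse_events, dose_time,
                           window=timedelta(hours=1)):
        """List conmeds taken within `window` of dosing whose indication
        has no matching adverse event for the same subject."""
        reported = {(ae["subject"], ae["term"].lower()) for ae in adverse_events}
        return [m for m in conmeds
                if abs(m["taken"] - dose_time) <= window
                and (m["subject"], m["indication"].lower()) not in reported]

    for m in missing_ae_queries(conmeds, adverse_events, DOSE_TIME):
        print(f"Query site: subject {m['subject']} took {m['drug']} for "
              f"{m['indication']} near dosing, but no matching AE is recorded.")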

A second factor should be continuous risk assessment via critical variables, safety trends, and site performance. Here, statistics can assist. Reviewing trends and compound derived variables allows statisticians to predict the future behavior of data from recent past information. These predictions can range from complex Bayesian models to the simple continuous-variable patterns used in baseball statistics. Standard data formatting assists tremendously in the operational implementation of such predictions.
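
On the simple end of that spectrum, a sketch like the one below fits a least-squares slope to a site's recent query rates and projects the next value; the sample rates and the 0.15 flag threshold are invented for the example.

    # A minimal trend projection: ordinary least-squares slope over the
    # recent points, extrapolated one step ahead. Illustrative only.

    def fit_slope(ys):
        """Least-squares slope of ys against the index 0..n-1."""
        n = len(ys)
        mx, my = (n - 1) / 2, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
        den = sum((x - mx) ** 2 for x in range(n))
        return num / den

    def predict_next(ys):
        """Project the next point from the recent linear trend."""
        return ys[-1] + fit_slope(ys)

    # Monthly query rates per page for one site (invented data): the
    # rising trend projects past the threshold before it is reached.
    rates = [0.05, 0.07, 0.10, 0.13]
    if predict_next(rates) > 0.15:
        print("Flag site for additional monitoring attention")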

The use of data standards should be the third factor in study optimization. Incorporating CDISC data standards into processing allows quicker reuse of complex code and portability across trials, not just for analyses but for operational review as well. Building reusable, dynamic reports to assist a medical data reviewer searching for patterns requires a solid foundation in the data definition. The tools that signal when additional monitoring is necessary need to be easy to use and reusable, so that time spent on training is minimized and data searching is optimized. As the promise of electronic health records becomes more of a reality, standards will drive the ability to detect signals even more quickly.
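
For instance, a report written once against CDISC SDTM variable names can run unchanged on any study that follows the standard. The sketch below assumes lab data shaped like the SDTM LB domain (USUBJID, LBTESTCD, LBSTRESN, LBSTNRLO, LBSTNRHI); the sample rows are invented.

    # Invented lab records shaped like the CDISC SDTM LB domain, where
    # LBSTRESN is the standardized numeric result and LBSTNRLO/LBSTNRHI
    # are the reference-range limits in standard units.
    lb = [
        {"USUBJID": "STUDY1-001", "LBTESTCD": "HCT",
         "LBSTRESN": 0.55, "LBSTNRLO": 0.36, "LBSTNRHI": 0.50},
        {"USUBJID": "STUDY1-002", "LBTESTCD": "HCT",
         "LBSTRESN": 0.44, "LBSTNRLO": 0.36, "LBSTNRHI": 0.50},
    ]

    def out_of_range(records):
        """Yield rows whose standardized result falls outside its
        reference range; the same code serves any study using SDTM names."""
        for row in records:
            if not row["LBSTNRLO"] <= row["LBSTRESN"] <= row["LBSTNRHI"]:
                yield row

    for row in out_of_range(lb):
        print(f"{row['USUBJID']}: {row['LBTESTCD']}={row['LBSTRESN']} "
              "outside reference range")

Because the variable names come from the standard rather than from any one study, a report like this needs no rework or retraining from one trial to the next.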

Risk-based monitoring and continuous program optimization hold the promise of expedited findings that bring new products to patients. Through medical data review, risk assessment, and analytical methodologies, a trial can reduce the risks that may occur while potentially shortening trial timelines and costs.

Mark Penniston is Senior Vice President and General Manager, Clinical Analytics, at Theorem Clinical Research, 1016 West Ninth Avenue, King of Prussia, PA, e-mail: [email protected].
