Risk Assessment and Mitigation

April 1, 2012
Charles A. Knirsch, Phillip B. Chappell, Jose Alvir, Demissie Alemayehu

Applied Clinical Trials, Volume 21, Issue 4

A quantitative approach to enhancing risk assessment and mitigation in drug development.

Regulatory authorities routinely conduct inspections to ensure compliance with good clinical practice (GCP) in the conduct of clinical trials sponsored by pharmaceutical companies. The objectives of such inspections include the protection of the rights and safety of study subjects; the assurance of the quality and integrity of data; and the assessment of adherence to protocols, relevant regulatory guidelines and standard operating procedures for clinical trials. The inspections normally involve a careful evaluation of relevant documents, facilities, records, and other resources related to the trial under consideration, and may be conducted at the investigational site, at the sponsor's facilities, or at other appropriate establishments.

According to a recent FDA report,1 the most common compliance inspection deficiencies by clinical investigators include failure to follow the investigational plan, protocol deviations, inadequate record keeping, inadequate accountability for the investigational product, and inadequate subject protection, including informed consent issues. The corresponding deficiencies for sponsors and monitors include inadequate monitoring, failure to bring investigators into compliance, and inadequate accountability for the investigational product.

Regulatory agencies have several options at their disposal to address deficiencies. In particular, at the FDA the actions may include untitled or warning letters, invocation of the Application Integrity Policy, refusal to accept site or study data, denial of NDA/BLA/PMA, sharing information with Office of Criminal Investigations for pursuit of prosecution, and initiation of site disqualification procedures.2

In view of the significant impact of noncompliance on public health, medical science, and the long-term viability of companies, many sponsors have put in place corrective and preventive action plans to enhance GCP in the conduct and reporting of their clinical trials. Generally, these plans involve identifying potential risk factors for noncompliance based on input from internal and external stakeholders, and dedicating resources to proactively monitor those factors. Such approaches are generally resource intensive, and often fail to address the core issues in an optimal and effective manner.

Previous studies have discussed various aspects of risk assessment in clinical trials, including comprehensive lists of factors that require careful attention.3,4,5 Most of these studies, however, do not give adequate guidance for a structured approach to risk analysis and mitigation. A recent study by Brosteanu et al.6 proposed a framework for systematic, risk-adapted quality management in noncommercial trials.

In this article, we discuss the value of a quantitative approach to risk assessment and mitigation that uses statistical techniques to select risk factors with high predictive value for studies or sites that may be prone to noncompliance. We argue that prospectively planned, data-driven, model-based approaches can help optimize resource utilization with maximal impact. While the modeling exercise introduces objectivity into the search for important risk factors, it should not by itself be viewed as the sole determinant of an optimal set. A judicious approach integrates the model findings with subject-matter expertise, taking into account relevance and operational feasibility.

It is noted that quantitative methods have previously been proposed in the detection of fraud in clinical trials.7,8 To our knowledge, our approach has not been reported in the literature as a viable tool for streamlining risk assessment initiatives in clinical trials.


Methodological requirements

Identification and definition of risk factors. The first step in enhancing risk mitigation through a quantitative exercise is a careful identification and definition of factors with potential predictive value for noncompliance. Such factors may be inherent in the study design, or may emerge during the conduct of the study. Design-level risk factors include features specified in the protocol, such as sample size, number of procedures, number of study visits, inclusion/exclusion criteria, length of study, drug administration, and study population. Several risk factors are often defined to characterize vulnerable patient populations, and may be functions of age, gender, ability to consent, disease state, and other socio-economic attributes.9,10 Asset characteristics, such as the mechanism of action of the drug, the drug class, and phase of development, as well as drug supply chain issues, should also be defined as potential risk factors. In addition, the list should include operational factors, such as monitoring plans, use of vendors, and site-related parameters. Table 1 gives a list of potential risk factors that may be considered for use in a quantitative risk mitigation program.
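As a concrete illustration, the factors described above can be organized as one record per site in the analysis data set. The field names and values below are hypothetical, chosen only to show the structure, not taken from any actual trial:

```python
from dataclasses import dataclass

@dataclass
class SiteRiskRecord:
    """One row of the analysis data set: a site's pre-specified
    risk factors plus the compliance inspection outcome."""
    # Design-level factors (from the protocol)
    n_procedures: int
    n_visits: int
    study_length_weeks: int
    # Population factor (e.g., limited ability to consent)
    vulnerable_population: bool
    # Operational factors
    monitoring_visits_per_year: int
    uses_vendor: bool
    # Dependent variable: any compliance inspection deficiency found
    deficiency_found: bool

# One hypothetical site record
site = SiteRiskRecord(12, 8, 52, True, 4, False, deficiency_found=True)
```

Collecting the same pre-specified fields for every inspected site makes the univariate and multivariate analyses described below straightforward to run.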

Analytical procedures. Depending on data availability, the statistical procedures that can be used may range from simple descriptive statistics to complex model selection techniques. The unit of analysis may be a study or a site within a study. The dependent variable can be binary or count, where the outcome of interest is a compliance inspection deficiency (i.e., noncompliance with the main objectives of good clinical practice).11 A preliminary analysis involves an assessment of the existence of association between the dependent variable and the individual risk factors, using standard parametric or nonparametric tests. As in any model selection exercise, the purpose of the univariate analysis is to reduce the number of risk factors for use in a multivariate model.12
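For binary risk factors, the univariate screen can be as simple as a Pearson chi-square test of each factor against the inspection outcome. The sketch below uses illustrative counts, not real inspection data, and retains factors whose statistic exceeds the 5% critical value for one degree of freedom:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table of
       factor present/absent by deficiency yes/no:
       a = present & deficiency, b = present & no deficiency,
       c = absent  & deficiency, d = absent  & no deficiency."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Screen each candidate binary risk factor against the outcome;
# counts below are illustrative.
tables = {
    "uses_vendor":    (18, 22, 12, 48),
    "vulnerable_pop": (10, 30, 20, 40),
}
CRITICAL = 3.84  # 5% critical value, chi-square with 1 df
retained = [f for f, t in tables.items() if chi_square_2x2(*t) > CRITICAL]
```

Only the factors surviving this screen would be carried forward into the multivariate model, reducing the risk of overfitting when inspected sites are few.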

For the multivariate analysis, alternative statistical procedures may be utilized, depending on the nature of the dependent variable. For binary response, a stepwise logistic regression may be implemented, while for count data (e.g., number of audit findings), other generalized linear models, including Poisson regression, Poisson-gamma, or zero-inflated Poisson regression analysis may be conducted. When multiple sites are inspected for some of the studies, generalized estimating equations procedures that take into account the correlation structure should be considered. Other data mining tools, including classification and regression trees,13,14 random forests,15 or naïve Bayes classifiers,16 may be applied to explore further the relevance of the various risk factors in mitigating noncompliance risk.
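In practice the logistic regression would be fit with standard statistical software; the minimal sketch below, on illustrative data, only shows the mechanics of the model: each site's covariates yield a predicted probability of an inspection deficiency.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(deficiency) = sigmoid(b0 + b.x) by batch gradient
    descent (a stand-in for standard statistical software)."""
    n = len(X)
    p = len(X[0])
    w = [0.0] * (p + 1)            # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Model-based probability of a deficiency for one site."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative covariates: [uses_vendor, scaled monitoring intensity]
X = [[1, 0.2], [1, 0.1], [0, 0.9], [0, 0.8], [1, 0.3], [0, 0.7]]
y = [1, 1, 0, 0, 1, 0]             # 1 = deficiency found
w = fit_logistic(X, y)
```

Sites can then be ranked by predict(w, x) so that monitoring resources are directed first to those with the highest modeled risk.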

Data collection. A key step in the quantitative exercise is the collection of reliable data for the statistical analysis. The data may be obtained retrospectively from studies and sites that have undergone compliance inspection, or collected prospectively from ongoing and future studies. The data obtained retrospectively may not always be complete, since the relevant information may be difficult to obtain or not readily available. While the prospective approach is likely to yield more complete and reliable data, those data may not be available for immediate use.

Implementation strategies

The implementation of a quantitative approach may be accomplished in two phases. The first phase may involve identification of relevant risk factors by a team consisting of clinicians, statisticians, pharmacometricians, and other functional lines, and the collection of data on the pre-specified risk factors from appropriate studies that underwent internal inspection, or other suitable databases. In the second phase, univariate and multivariate analyses may be performed to evaluate the relative predictive values of the risk factors identified.

Typically, the data collected retrospectively may be inadequate for definitive identification of the relevant risk factors. There may be too many missing values on several of the covariates to conduct effective multivariate analyses. In addition, certain operational risk factors may not be recoverable retrospectively. However, the findings of the first phase may help plan a prospective data collection strategy in the second phase. This involves collecting data on the relevant risk factors from ongoing studies as an integral part of the development program. A more thorough statistical analysis can then be executed to validate the first-phase findings and to identify a parsimonious set of risk factors for risk mitigation.
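One simple check of the phase-one model on prospectively collected phase-two data is the concordance (c) statistic: the probability that a site with a deficiency receives a higher model-based risk score than one without. The scores and outcomes below are illustrative:

```python
def concordance(scores, outcomes):
    """Fraction of (deficient, compliant) site pairs in which the
    deficient site received the higher risk score (the c-statistic);
    tied scores count as half a concordant pair."""
    pos = [s for s, o in zip(scores, outcomes) if o == 1]
    neg = [s for s, o in zip(scores, outcomes) if o == 0]
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

# Risk scores from the phase-one model applied to prospectively
# collected sites, with the observed inspection outcomes.
scores   = [0.81, 0.64, 0.70, 0.22, 0.35, 0.15]
outcomes = [1,    1,    0,    0,    1,    0]
c = concordance(scores, outcomes)
```

A c-statistic near 0.5 would suggest the retrospectively identified risk factors do not discriminate on new data and should be revisited before being used to direct monitoring resources.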


Discussion and conclusion

In this article we highlighted the role of data-driven approaches to enhance the effectiveness of corrective and preventive plans to ensure compliance with GCP requirements. A recommended approach is one that involves use of retrospective data to define and streamline risk factors and a scheme that involves prospective data collection for model validation. Although a retrospective exercise in data collection may appear attractive, particularly in terms of saving time and resources, it may not always permit the gathering of information for all variables of interest. While certain design-related variables may be retrieved from clinical trial protocols, study reports, and other relevant documents, certain operational risk factors may be difficult or impossible to recover for completed studies. For an effective modeling exercise, it would therefore be advisable to put in place a plan that permits the prospective collection of data on pre-specified variables as an integral part of the study design, conduct, and reporting.

Once relevant risk factors are identified, an effective risk mitigation strategy would involve instituting measures to focus resources on impacted studies. For factors that are associated with operational aspects of the conduct of the study, monitoring visits should be aimed at remedial measures in close collaboration with the study site personnel. Factors that are inherent in the design of studies could also be managed similarly, but in addition should be used to implement proactive measures for new studies. A recent draft guidance offers a detailed discussion of risk-based monitoring strategies and plans.17

It is emphasized that while the proposed approach may serve as a tool to optimize resource utilization through systematic use of available data, it should not be viewed as a substitute for excellence in the design, conduct, and reporting of clinical trials.

The quantitative approach can also be extended to the related issue of minimizing major and minor protocol deviations in randomized clinical trials (RCTs). Protocol deviations and violations, even when they do not have direct and substantive effects on the safety of study participants, can have implications for study outcomes and the interpretation of results. Although regulatory guidelines have been issued for handling them in the analysis and reporting of the data,18 such retrospective measures are generally not fully satisfactory. It is therefore advisable to mitigate the problem prospectively within the framework discussed in this article.

Demissie Alemayehu* is Executive Director (e-mail: alemad@pfizer.com), Jose Alvir is Senior Director, Phillip B. Chappell is Executive Director, and Charles A. Knirsch is Vice President, all at Pfizer Inc., 235 East 42nd Street (Bldg. 219-8-57), New York, NY.

*To whom all correspondence should be addressed.


References

1. J. P. Salewski, "FDA Expectations of Clinical Trials and Investigators," slideshow, http://bit.ly/o3whhI.

2. Food and Drug Administration, Inspections, Compliance, Enforcement, and Criminal Investigations, (FDA, Rockville, MD, 2012), http://www.fda.gov/ICECI/Inspections/IOM/default.htm.

3. C. Baigent, F. E. Harrell, M. Buyse, et al. "Ensuring Trial Validity by Data Quality Assurance and Diversification of Monitoring Methods," Clin Trials 5 (1) 49–55 (2008).

4. P. H. Bertoye, S. Courcier-Duplantier, and N. Best, "Adaptation of the Application of Good Clinical Practice Depending on the Features of Specific Research Projects," Therapie, 61 (4) 271–277, 279–285 (2006).

5. C. Ohmann, O. Brosteanu, B. Pfistner, et al., "Systematic Review About Data Quality and Protocol Compliance in Clinical Trials," GMS Med Inform Biom Epidemiol, 4 (1) (2008).

6. O. Brosteanu, P. Houben, K. Ihrig, et al., "Risk Analysis and Risk Adapted On-Site Monitoring in Noncommercial Clinical Trials," Clinical Trials, 6 (6), 585-596 (2009).

7. S. Al-Marzouki, S. Evans, T. Marshall, I. Roberts, "Are These Data Real? Statistical Methods for the Detection of Data Fabrication in Clinical Trials," BMJ, 331, 267-270 (2005).

8. M. Buyse, S. L. George, S. Evans, et al., "The Role of Biostatistics in the Prevention, Detection, and Treatment of Fraud in Clinical Trials," Stat Med, 18 (24) 3435-3451 (1999).

9. C. H. Coleman, "Vulnerability as a Regulatory Category in Human Subject Research," J. Law Med Ethics, 37 (1) 12-18 (2009).

10. A. S. Iltis. "Introduction: Vulnerability in Biomedical Research," J Law Med Ethics, 37 (1) 6-11 (2009).

11. International Conference on Harmonization,"Efficacy Guidelines," http://www.ich.org/products/guidelines/efficacy/article/efficacy-guidelines.html.

12. David W. Hosmer and Stanley Lemeshow, Applied Logistic Regression, 2nd Ed. (John Wiley and Sons, New York, 2000).

13. Leo Breiman, Jerome Friedman, Charles J. Stone, and R.A. Olshen, Classification and Regression Trees, (Wadsworth & Brooks/Cole Advanced Books & Software, Monterey, CA, 1984).

14. Trevor Hastie, Robert Tibshirani, and Jerome Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, (Springer Verlag, New York, 2001).

15. A. Liaw and M. Wiener, "Classification and Regression by randomForest," R News, 2 (3), 18-22 (2002), http://cran.r-project.org/doc/Rnews/Rnews_2002-3.pdf.

16. P. Domingos and M. Pazzani, "On the Optimality of the Simple Bayesian Classifier Under Zero-One Loss," Machine Learning, 29, 103–137 (1997).

17. Food and Drug Administration, Draft Guidance for Industry: Oversight of Clinical Investigations - A Risk-Based Approach to Monitoring, (FDA, Rockville, MD, 2011).

18. European Medicines Agency, "ICH Topic E 9 Statistical Principles for Clinical Trials, Note for Guidance on Statistical Principles for Clinical Trials," (1998), http://www.emea.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500002928.pdf.
