A Centralized Monitoring Approach Using Excel for the Quality Management of Clinical Trials

Article

Applied Clinical Trials

Pilot study evaluates the success of building quality into clinical trials using a plan-do-check-act (PDCA) approach to quality management.

Current regulatory and economic incentives are prompting clinical research organizations to integrate risk management strategies into clinical operations and to adopt technological enablers for the efficient monitoring of clinical trials. Presented herein is a method developed to plan (identify factors critical to quality and their relative importance, develop risk mitigation strategies, and set adapted monitoring intensity), do (calculate risk metrics and compare them against set thresholds), check (evaluate the progress of metrics against their respective thresholds and identify risk signals to be acted upon), and act (verify what matters and perform appropriate mitigative actions) to build quality into clinical trials. A Microsoft Office-based solution was developed and refined during a Phase III pivotal trial involving 558 subjects treated over three months at 38 sites located in the U.S. and Canada.

 

Introduction

Drug discovery today faces difficulties as R&D investments are being restricted and the clinical trial enterprise confronts an ever-longer and more costly drug development process.1,2 Approximately two-thirds of total drug R&D costs are associated with the clinical phases of development, of which 70% are allocated to Phase II/III activities.3 The traditional process of sponsor oversight of clinical trials is being challenged as research suggests that certain monitoring practices are not optimal at the clinical phase of development and represent non-value-added activities that unnecessarily increase the costs of clinical studies.4,5 Keeping in mind that the financial burden of drug development is ultimately transferred to patients and society, performing clinical research more efficiently while ensuring data integrity is a pressing objective.

Monitoring, specifically, refers to the act of overseeing the progress of a clinical trial to ensure that it is conducted, recorded, and reported in accordance with the protocol, standard operating procedures (SOPs), good clinical practice (GCP), and the applicable regulatory requirements.6 Monitoring can represent up to one-third of trial costs.7,8 Interestingly, it has recently been recognized that electronic case report form (eCRF) data are actually “not that dirty” and that 100% source data verification (SDV), which consumes a large amount of monitoring time, has a negligible impact on data quality and subject safety.9 Moreover, it has been recognized that data reach a point at which they can be considered “good enough” and at which more verification will not affect a study’s statistical conclusions.10 Amid the industry’s gradual adoption of eSource technology, which further reduces the need to perform SDV, the focus of monitoring is thus shifting away from exhaustive evaluation of data cleanliness and toward building quality into trials.

The adoption of risk management practices to monitor what matters and perform site visits when it matters could reduce overall costs by more than 20% in large, Phase III trials, and save the industry billions of dollars per year without compromising data quality. Accordingly, there is currently a movement within the industry, driven by health authorities, to adopt risk-based/intelligent monitoring practices supported by centralized monitoring to exert oversight over trials and improve resource utilization.11,12,13,14

Building quality into clinical trials and realizing continuous improvement can be achieved with a plan-do-check-act (PDCA) approach to quality management.15,16,17 In the context of clinical research, the different steps of the PDCA framework include various activities that can be considered under the methods of quality by design (QbD), centralized monitoring and risk-based monitoring (RBM), as shown in Table 1.

Table 1: Methods and activities of a PDCA approach to clinical trial quality management

 

Quality by design

The PLAN step can be achieved through QbD, which refers to the development of a process to identify, assess, control, communicate, and review risk to quality.18 Quality results from the ability to effectively and efficiently answer the questions about the benefits and risks of a medical product or procedure while ensuring protection of human subjects.19 Quality can also be defined as the absence of errors that matter, and in the context of clinical research, what matters is subject safety, data quality, and trial integrity.20 As such, the goal of QbD is to generate data that are “fit for purpose” (i.e., fit to efficiently support conclusions and interpretations equivalent to those derived from error-free data, and acquired in a manner consistent with the rights, safety, and welfare of the trial participants).21


A risk assessment performed prior to the start of a study is an indispensable instrument in the implementation of QbD since it allows identification of the key risk indicators (KRIs), determination of their relative importance during the different phases of a study, and development of the strategies designed to eliminate or mitigate risks. Risk assessment should focus on data and processes critical to subject safety, data quality, and trial integrity and the following questions should be addressed to orient the activities which will be performed in the subsequent steps of the PDCA process:22

  • What factors represent a significant risk to quality in the context of subject safety, data quality, and trial integrity?

  • What proactive steps can be taken to avoid quality problems?

  • What ongoing checks can be performed to detect problems?

  • What type of signal will trigger corrective actions?

  • What steps can be taken to ensure that corrective and preventive actions are focused, sustainable, and efficient?

Different approaches may be used to assess a given protocol’s risks. One method is covered in a position paper published by the TransCelerate BioPharma Initiative, which has also developed a risk assessment categorization tool (RACT) to facilitate risk assessment.23 Similarly, the Clinical Trial Transformation Initiative (CTTI) has produced a non-exhaustive list of critical to quality (CTQ) factors that should be considered in a risk assessment exercise.24 The principles of risk management and the overview of the process are outlined in ICH Q9, which also provides references to various tools that can be used for risk management, including ISO 31010 standards.

Trial risk should be viewed holistically, simultaneously considering several factors that could affect quality. Presented in Appendix 1 below are quality risk indicators inherent to most protocols that can be monitored from the data collected. These include enrollment rate, screen failures/enrollment, withdrawal/enrollment, out-of-range visit rate, missed doses rate, missing endpoint data, overdue data entry, time to data entry, query rate, time to query resolution, error rate, deviation rate, and adverse event rate.

Appendix 1: Monitoring rationales of key risk indicators

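To illustrate how such indicators can be derived, the following is a minimal Python sketch computing two of them per site from a flat subject-level export. The column names and values are hypothetical, since the actual layout depends on the EDC/CTMS export format:

```python
import pandas as pd

# Hypothetical flat export of subject-level data from EDC/CTMS;
# column names are illustrative, not taken from the published model.
subjects = pd.DataFrame({
    "site":        ["01", "01", "02", "02", "02"],
    "screened":    [1, 1, 1, 1, 1],
    "enrolled":    [1, 0, 1, 1, 1],
    "queries":     [2, 0, 5, 1, 0],
    "data_points": [120, 40, 130, 125, 110],
})

per_site = subjects.groupby("site").sum()
# Screen failures/enrollment: subjects screened but not enrolled,
# relative to subjects enrolled at the site.
per_site["screen_fail_per_enrolled"] = (
    (per_site["screened"] - per_site["enrolled"]) / per_site["enrolled"]
)
# Query rate: queries raised per data point entered.
per_site["query_rate"] = per_site["queries"] / per_site["data_points"]
print(per_site[["screen_fail_per_enrolled", "query_rate"]])
```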

 

Additional trial-specific quality factors related to safety and efficacy endpoints should also be considered. To generate a comprehensive measure of risk to quality, the quantitative factors listed above should be evaluated in conjunction with qualitative factors such as investigator involvement, trial master file (TMF) maintenance, source documents quality, and resource sufficiency. This quality information is typically not available in databases but can be captured by monitors after on-site visits in a short site appreciation survey such as the one presented in Appendix 2.

Appendix 2: Site appreciation survey


Centralized monitoring

The DO and CHECK steps of the PDCA framework can be efficiently accomplished through a centralized monitoring process. Centralized monitoring refers to the analysis of CTQ data against quality targets and the communication of risk signals to orient clinical operations and data management activities, including monitoring visits, source data review (SDR), medical review, and risk mitigative actions. Centralized monitoring implies the ongoing collection and analysis of information with technological enablers such as electronic data capture (EDC) and clinical trial management systems (CTMS); the risk index (RI) model presented below describes one method for computing risk metrics from data collected with such systems.

 

Risk-based monitoring

According to regulations, sponsors are responsible for the oversight of clinical investigations and must use effective monitoring practices to ensure the protection of human subjects and the quality of data. Sponsors are advised to consider factors such as the objective, design, complexity, size, and endpoints of a trial in determining the extent and nature of monitoring for a given trial.25,26 Monitoring visit frequency should be based on workload indicators such as the enrollment rate, the number of adverse events (AEs) reported, and the amount of data requiring verification.27 Using a combination of SDR and SDV, monitoring efforts should focus on processes and data related to critical (primary and secondary objectives) study endpoints, informed consent, eligibility criteria, protocol-required safety assessments, subject withdrawals, study blind, and test article administration and accountability.

SDR involves the review of documentation and processes to assess investigator oversight in different areas (e.g., protocol compliance, informed consent, study blind, delegation of responsibilities, test article administration and accountability, unreported events, adherence to GCP and ALCOA principles, etc.). SDR is necessary to evaluate areas that do not have an associated data field available for remote review.28 The process of SDR can yield a qualitative impression of sites, captured in a short site appreciation survey, such as the one presented in Appendix 2, completed by reviewers at the end of all visits.

SDV is the process by which data within the CRF or other data collection systems are compared to the original source of information to confirm that the data have been transcribed accurately. While regulators require that SDV be performed, there is no specification regarding the amount of data required. Regulations state that a representative number of records should be reviewed and that statistically controlled sampling is an acceptable method for selecting the data to be verified.29 The level of SDV is thus left to the sponsor to decide, although the amount chosen should be relevant and targeted.30 The level of required SDV should be based on site experience and quality risk metrics, especially Query and Error rates, and be periodically re-evaluated. It is important to realize that SDV is only one of many tools available for building quality into trials and that, although it can limit the amount of entry errors, it cannot efficiently mitigate all types of risk. For example, if a site experiences an abnormal Deviation Rate or Screen Failures/Enrollment ratio, providing additional training on the protocol and reviewing the enrollment process would be appropriate actions, whereas increasing SDV would have no significant risk-mitigation effect.

The last step of the PDCA framework, ACT, can be achieved using an RBM strategy that relies on risk metrics to orient mitigative actions, to verify what matters, and to perform site visits when it matters. This method stands in contrast to the traditional approach of verifying 100% of the data collected and performing site visits at fixed intervals. RBM emphasizes targeted SDV, namely thorough verification of critical data related to subject safety, primary endpoints, and specific risks, in conjunction with random verification of non-critical data. As such, SDV may be reduced to as little as 10-25% of the data collected. RBM also aims to schedule monitoring visits based on risk signals and workload in order to optimize monitoring efforts. RBM, therefore, requires the information generated from centralized monitoring to perform more efficient monitoring without compromising quality.

 

The risk index model

The RI model has been developed to assist central monitors in the calculation of site-specific KRI metrics and the generation of associated risk signals (i.e., triggers) according to user-defined thresholds. The model uses periodic datasets exported from EDC or CTMS systems as inputs for the computation of the RI, which represents a compound measure of all KRI metrics and provides a holistic view of site-specific risk. The RI is calculated as follows for a given study phase:

RI = H × Σ (T × R), summed over all KRIs

Where:

H: Handicap multiplier attributed to sites with an elevated perceived risk, such as sites with low experience or known quality issues; H = 2 for such sites and 1 otherwise.

T: Trigger value assigned to a given site-specific KRI metric that falls outside set limits.

  • Outside Limit = 1

  • Outside Critical Limit = 10

R: Relative risk rank assigned to a given KRI during the different study phases.

  • Low Risk = 0.5

  • Medium Risk = 1

  • High Risk = 5

Hence, the RI is calculated from the KRI metrics that fall outside normal limits and takes into consideration each KRI’s relative importance as well as the site’s perceived risk. During the course of the study, the RI provides a comprehensive impression of a site’s risk, and its variations serve to indicate improvement or worsening of site risk. Individual site KRIs that fall outside their set limits constitute risk signals that prompt analysis, and mitigative actions are performed accordingly.
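For illustration, the following is a minimal Python sketch of this calculation; the function and value names are illustrative and not taken from the RI calculator itself:

```python
# Sketch of the RI computation: each KRI contributes
# trigger_value * relative_risk_rank, and the site total is
# doubled for sites with High perceived risk.

RANK_WEIGHT = {"N/E": 0.0, "L": 0.5, "M": 1.0, "H": 5.0}

def risk_index(kri_results, high_perceived_risk=False):
    """kri_results: list of (trigger_value, rank) pairs, where
    trigger_value is 0 (within limits), 1 (outside a limit), or
    10 (outside a critical limit), and rank is N/E, L, M, or H."""
    handicap = 2 if high_perceived_risk else 1
    return handicap * sum(t * RANK_WEIGHT[r] for t, r in kri_results)

# A medium-rank KRI beyond a critical limit alone reaches the
# risk tolerance threshold of 10 (10 x 1.0 = 10).
print(risk_index([(10, "M"), (1, "L"), (0, "H")]))       # 10.5
# For a High perceived risk site, a high-rank KRI outside a
# normal limit also reaches the threshold (2 x 1 x 5 = 10).
print(risk_index([(1, "H")], high_perceived_risk=True))  # 10.0
```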

An Excel application, herein referred to as the RI calculator, has been developed to provide data analytics and visualization of KRIs in a dashboard that guides the implementation of actions according to this model.


Key risk indicators

KRIs are selected based on their detectability, their probability of occurrence, and their potential to impact quality. Risk factors which are chosen for centralized monitoring should have associated data periodically available from EDC and/or CTMS. The rationale for their selection and the actions to be considered by data management and/or clinical operations in cases where values fall outside set limits should be established through the process of risk assessment. A list of KRIs common to most protocols and the typical actions chosen in response to triggers are indicated in the Appendix 1 sections illustrated earlier.

Trigger values. The RI is specifically calculated from KRI trigger values. A trigger value of 1 is assigned to a site’s KRI metric value that falls outside the set lower (LL) or upper (UL) limits. A trigger value of 10 is assigned to a site’s KRI metric value that falls outside the set lower critical (LCL) or upper critical (UCL) limits. Thus, a given KRI’s trigger value times its relative risk rank yields its contribution to the RI.
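A small sketch of how a trigger value could be assigned from a metric value and its four limits; the limit values shown are placeholders, to be set during risk assessment:

```python
def trigger_value(value, ll, ul, lcl, ucl):
    """Map a site's KRI metric value to its trigger value: 10 if
    outside the critical limits (LCL/UCL), 1 if outside the normal
    limits (LL/UL), 0 otherwise."""
    if value < lcl or value > ucl:
        return 10
    if value < ll or value > ul:
        return 1
    return 0

# e.g., a query rate with LL=0.01, UL=0.05, LCL=0.0, UCL=0.10
print(trigger_value(0.07, 0.01, 0.05, 0.0, 0.10))  # 1 (above UL)
print(trigger_value(0.12, 0.01, 0.05, 0.0, 0.10))  # 10 (above UCL)
```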

KRI relative risk rank. KRI relative risk ranks are based on their relative probability of occurrence and their potential impact on data quality, subject safety, or trial integrity. Table 2 below illustrates a numeric system that can be used to rank KRIs by rating their importance as low, medium, or high at the different phases of study start-up, execution, and close-out. Indeed, the relative risk rank of KRIs is subject to change as sites progress through the different phases of a trial and the risk proximity varies (i.e., different risks’ probability of occurrence may vary as a function of experience, data acquisition rate, or treatment exposure, and the number of options available to limit their impact may decrease as a study nears the end).

Table 2: KRI’s risk-ranking system

Table 3 below shows the typical output of a KRI relative risk ranking at different phases of a clinical trial. For the RI calculation, the weight values of 0, 0.5, 1, and 5 are respectively assigned to the Non-Existent (N/E), Low (L), Medium (M), and High (H) relative ranks of KRIs. Accordingly, as demonstrated in Table 4 below, an RI of 10 or more will result if a KRI of medium or high importance falls beyond a critical limit. An RI increase of 10 points thus represents the risk tolerance threshold above which particular attention must be paid.

Table 3: Typical KRI relative risk ranks at different phases of a clinical trial

Table 4: Impact of trigger values on RI variation according to KRI relative risk rank

 

Perceived risk. To account for factors such as limited experience/expertise, geography, outstanding deviations, or a constantly elevated RI, sites may be considered as having a High perceived risk, in which case the RI is multiplied by 2. Consequently, for a High perceived risk site, an RI increase of 10 or more is observed if a KRI in the High rank falls outside a normal limit or a risk indicator in the Low or Medium rank falls outside a critical limit, as illustrated in Table 5 below. This handicap can be removed in consideration of a favorable review of the site’s RI and monitoring reports.

Table 5: Impact of trigger values on RI variation according to KRI relative risk rank for sites with High perceived risk

 

Limits and outliers analysis. Initial KRI limits set for a particular study should consider quality targets and, if available, data from previous studies. Studies that had similar objectives, used comparable subject populations, and were of the same length make ideal comparators. Limits should be re-evaluated periodically with regard to the overall distribution across sites.


Different statistics can be used when evaluating limits and outliers. Namely, KRI metric means (μ) and standard deviations (σ), as well as the 95.4% and 99.7% confidence intervals, corresponding to μ ± 2σ and μ ± 3σ, respectively, can be taken into consideration when evaluating limits. Moreover, the cumulative probability that an observed value will be less than or equal to a given KRI metric value can be taken into consideration when determining if a site is an outlier. Appendix 3 below illustrates the calculated metrics along with these statistics as they are displayed in the RI calculator. In order to detect relevant risk signals with significant statistical power, the first outlier analysis should be performed after a sufficient number of data points have been collected. Nevertheless, early analysis can still serve to detect outliers even though statistical power may be low.
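As a sketch of these statistics, assuming the metric values are approximately normally distributed (an assumption; skewed metrics may call for a different reference distribution), with illustrative values and thresholds:

```python
import numpy as np
from scipy import stats

# Illustrative site-level values for one KRI metric (e.g., error rate).
values = np.array([0.02, 0.03, 0.025, 0.04, 0.03, 0.09, 0.028, 0.035])

m, s = values.mean(), values.std(ddof=1)
print(f"mean={m:.3f}, sd={s:.3f}")
print(f"95.4% interval: [{m - 2*s:.3f}, {m + 2*s:.3f}]")  # mu +/- 2 sigma
print(f"99.7% interval: [{m - 3*s:.3f}, {m + 3*s:.3f}]")  # mu +/- 3 sigma

# Cumulative probability of each observed value under the normal
# approximation; values in the far tails are flagged for review.
for v in values:
    p = stats.norm.cdf(v, loc=m, scale=s)
    flag = "  <-- possible outlier" if p > 0.977 or p < 0.023 else ""
    print(f"value={v:.3f}, cumulative p={p:.3f}{flag}")
```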

Appendix 3: Metrics worksheet example


 

The analysis of KRI metric dispersion permits evaluation of whether data anomalies encountered at different sites are random or arise systematically from inherent issues. Statistical analysis serves to identify signals, but a statistical signal does not constitute actual proof of risk. Outlying behavior may be indicative of risk but may also be caused by issues not actually reflective of risk (e.g., a high number of missed doses calculated because one subject forgot to return his drug diary). Therefore, the ultimate decision to perform mitigative actions must be made on a case-by-case basis after analysis by relevant stakeholders.

KRI history. Metric histories permit monitoring of risk over the short and long term. Study- and site-specific trends can indicate whether a given risk is improving or worsening as a result of the actions performed, thereby allowing measurement of their impact. Risk indicator histories thus support the traceability of decisions made during the course of the trial. For example, if we highlight Site 68 as depicted in Figure 1, we see that the site experienced a high error rate on Aug 01 and that the action performed in response lowered the error rate. Nevertheless, the site experienced a high error rate again five weeks later, and the action performed in response did not seem to resolve the issue, as the error rate increased again on the following risk analysis. In-depth analysis and issue escalation should be considered in such a case. Figure 1 below also illustrates how limits were changed after collecting two months of data and observing that the initially set limits would not effectively identify outliers given the actual distribution of site error rates.

Figure 1: Error rate history chart highlighting Site 68

RI history can be evaluated to make decisions regarding perceived risk. Figure 2 below shows that through the early phase of the trial, Site 68’s risk index varied only slightly, never by more than 10 points, and, therefore, did not warrant any particular attention. On Aug 01, Site 68 experienced an elevated risk index, and mitigative actions apparently resolved the issue as the index declined for two consecutive reviews. Later, starting on Aug 28, the risk index increased for three subsequent reviews. Such a scenario may support setting the perceived risk to HIGH if no reasonable justification for the RI increase is available, since there appears to be a degree of uncertainty about quality control at that site. Conversely, if the RI does not increase significantly for a number of consecutive reviews and no outstanding issue exists for a specific site, one may consider setting the perceived risk to NORMAL and thus abate the risk signal.

Figure 2: RI history highlighting Site 68
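This review rule lends itself to a simple heuristic check. The following is a minimal Python sketch; the window length and the rule itself are simplifications, and the actual decision remains a case-by-case judgment by the clinical operations team:

```python
def suggest_perceived_risk(ri_history, current="NORMAL", n=3):
    """Suggest a perceived risk setting from a site's RI history:
    HIGH if the RI increased over n consecutive reviews, back to
    NORMAL if it did not increase over the same window (and, per
    the text above, only if no outstanding issues remain)."""
    recent = ri_history[-(n + 1):]
    if len(recent) < n + 1:
        return current  # not enough reviews yet
    diffs = [b - a for a, b in zip(recent, recent[1:])]
    if all(d > 0 for d in diffs):
        return "HIGH"
    if current == "HIGH" and all(d <= 0 for d in diffs):
        return "NORMAL"
    return current

print(suggest_perceived_risk([4, 5, 12, 18, 25]))        # HIGH
print(suggest_perceived_risk([25, 20, 15, 12], "HIGH"))  # NORMAL
```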

One must take into consideration more than the data when “perceiving” risk and the ultimate perception should relate to the necessity of amplifying a given site’s risk signal as determined by the sponsor’s clinical operations team. In fact, in order to perceive real risk and perform relevant mitigative actions, it is important to consider qualitative information that goes beyond calculated metrics such as comments provided in a monitor’s site appreciation survey, email communications with the sites, and observations made during the in-depth analysis of KRIs falling outside limits.


Site status. As mentioned earlier, site-specific workload and risk should be considered for efficient scheduling of monitoring visits. Appendix 4 shows an example of a Site Status Table containing such information. The number of subjects enrolled since the last monitoring visit, the number of AEs recorded since the last monitoring visit, and the number of pages that require SDV should be considered when estimating workloads.

Site-specific query and error rates are especially important KRIs to consider when determining required SDV. The approach used in the model presented herein consists of performing 100% SDV of all data points that are critical and prone to error, and random SDV of all other data. 100% SDV is performed for the first two subjects enrolled at a site in order to monitor enough data to support statistical analysis of the site’s risk and to generate confidence in the site’s competence before reducing SDV. The required random SDV percentage is initially set to 25% and is subsequently adjusted based on risk metrics, especially error and query rates. It may be set to a minimum of 10% for sites that are consistently performing well. Conversely, for sites that do not meet error tolerance limits, it may be elected to maintain the required SDV percentage at 25%, to increase it to 50%, or to require 100% SDV for an additional subject.
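A minimal Python sketch of this adjustment logic; the inputs and step sizes are illustrative simplifications of the rules described above:

```python
def required_random_sdv(subjects_fully_verified, rates_ok, ri_stable,
                        current_pct=25):
    """Suggest a site's required random SDV percentage. The first
    two subjects per site get 100% SDV; thereafter the percentage
    starts at 25% and is tuned on risk metrics (thresholds and
    step sizes here are illustrative)."""
    if subjects_fully_verified < 2:
        return 100  # full verification for the first two subjects
    if rates_ok and ri_stable:
        return max(10, current_pct - 15)  # reward consistent quality
    if not rates_ok:
        return min(50, current_pct + 25)  # tighten verification
    return current_pct

print(required_random_sdv(1, True, True))    # 100
print(required_random_sdv(5, True, True))    # 10
print(required_random_sdv(5, False, False))  # 50
```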

 

Risk review & reporting

FDA’s risk-based monitoring guidance stipulates that monitoring findings should be evaluated to determine whether additional actions (e.g., training of clinical investigator and site staff, clarification of protocol requirements, etc.) are necessary to ensure human subject protection and data quality across sites.31 European Medicines Agency (EMA) guidance likewise states that it is an essential part of the risk-based quality management system that review take place as additional information becomes available.32 As regulators are specifically looking for evidence of actions taken to manage identified risks, centralized monitoring reports should be produced periodically to document the analysis of outliers and specify what actions, if any, were performed to mitigate risks, in order to demonstrate oversight of participating investigators. Periodic reporting serves to communicate findings and to follow up on issues with pertinent parties, including data management, clinical operations, and QA.

Figure 3 represents the process of action implementation and documentation instituted in conjunction with the RI model. It implies the use of a dedicated central quality monitor responsible for the calculation of risk signals and their initial evaluation. Risk signals are reported to and discussed with clinical operations. QA is involved if major quality issues are identified. Every signal evaluation, follow-up, and action is documented in risk monitoring reports. Reporting frequency may vary from weekly to monthly depending on the data acquisition rate, being typically higher during the study start-up period and lower during the close-out period.

Figure 3: Process of action implementation and documentation

The following sections are included in reports in order to provide a comprehensive overview.


Dashboard. The Dashboard section is an output from the Excel-based RI calculator and gives an overall view of the current RIs and KRIs for all the sites along with their distribution and respective triggers.

 

Study phase adjustment. This section serves to document site-specific study phase changes. For example, statements such as these are included in this section:

  • Site 05 - Study Phase is set to “Execution” as two subjects completed a Baseline Visit.

  • Site 27 - Study Phase is set to “Close-Out” as all subjects have completed a Termination Visit.


Limits. This section indicates KRI limit values along with their distribution statistics μ ± 2σ and μ ± 3σ, computed in the RI calculator, to assist in their periodic re-evaluation. It also serves to document changes made to the limit values along with the rationale for the changes.

Outlier analysis and actions. This section documents the root cause analysis and includes qualitative information for every KRI value that falls outside the respective limits (e.g., the nature of queries for an elevated query rate, the nature of AEs for an elevated AE rate, the fact that a coordinator was on vacation for an elevated time to data entry). It also indicates the actions taken and follow-up on issues raised in previous reports. This reporting format demonstrates control over risk signals, which is of special interest to regulators. Supporting documents, including email communications with relevant stakeholders, should also be kept to illustrate the full chain of events.

RI History. This section documents the variations of the site-specific RI, computed in the RI calculator, over a given number of reviews.

Perceived risk adjustment. This section specifically documents site-specific perceived risk adjustment. It may contain statements such as:

  • Site 05 - Perceived Risk is set to “High” as a new site coordinator recently started.

  • Site 06 - Perceived Risk is set to “Normal” as the RI has not increased for three consecutive reviews and no outstanding issue is noted.

Required SDV adjustment. This section indicates site-specific required SDV % adjustment supported by analysis of the RI history. It may contain statements such as:

  • Site 04 - Random SDV is increased from 10 to 25% as abnormal Query and Error rates have been observed.

  • Site 36 - Random SDV is lowered from 25 to 10% as the RI has not increased for three consecutive reviews, Query and Error rates are within the normal limits and no outstanding issue is noted.


Conclusion

The pilot study using the RI model has shown a high level of control over quality attributable to the early detection and early communication of issues. High enrollers were closely followed. Issues related to data entry errors were quickly detected and resolved with supplemental site training before requiring extensive corrective actions. Abnormal AE rates led to additional medical reviews to ensure that the safety of subjects was not compromised and that under/over-reporting was not at play. Some performance issues led to the modification of processes at the sites and, in some cases, to the scheduling of audits and the initiation of CAPAs. Documentation of site quality metrics further provided measures on which to base site selection for subsequent studies.

Periodic risk analysis and reports kept monitors informed about the status of their sites and provided them with information concerning sites’ needs that was not readily available from other systems. Calculated risk signals matched monitoring report findings, which confirmed their validity. Importantly, risk signals were often available ahead of significant monitoring report findings. Monitors’ verification workload was also reduced by the requirement to perform only partial SDV in a targeted, risk-based manner.

An Excel workbook used in conjunction with the RI model was shown to constitute a controlled, reliable, flexible, and low-cost tool suitable to support a centralized monitoring approach for the implementation of QbD. Since the MS Office suite is already an integral part of most companies’ infrastructure, it offers the flexibility needed to integrate QbD with a sponsor’s pipeline, processes, and systems.

As quality management shifts from on-site monitoring to centralized monitoring, the new clinical trial monitoring paradigm includes a role for data management in computing KRIs, analyzing them on an ongoing basis, and reporting them to the appropriate stakeholders. The RI model represents a method to guide a risk-based/intelligent monitoring approach using simple statistics as relevant site-specific quality metrics. The present approach efficiently supports the objective of building quality into trials in a cost-conscious manner.

 

Adam Beauregard is Clinical Data Manager; Lyne Lavoie is Director, Data Management; and Fernand Labrie is Founder and CEO; all with EndoCeutics Inc.

 

References

1.  Munos B. Lessons from 60 years of pharmaceutical innovation. Nature Reviews Drug Discovery 8. 2009; 959-968.

2.  Goodman M. Pharmaceutical financial performance. Nature Reviews Drug Discovery 8 (12). 2009; 927–928.

3.  Desai PB, Anderson C, Sietsema WK. A comparison of the quality of data, assessed using query rates, from clinical trials conducted across developed versus emerging global regions. Drug Information Journal 46 (4). 2012; 455-63.

4.  Clinical Trials Transformation Initiative. Quality Objectives of Monitoring Workstream 2 Final Report Project: Effective and Efficient Monitoring as a Component of Quality in the Conduct of Clinical Trials. 2009.

5.  Paul SM, Mytelka DS, Dunwiddie CT, et al. How to improve R&D productivity: the pharmaceutical industry’s grand challenge. Nature Reviews Drug Discovery 9, 2010; 203-214.

6.  ICH guideline for Good Clinical Practice E6(R1). 1996.

7.  Getz KA. Low hanging fruit in the fight against inefficiency. Applied Clinical Trials (20). 2011; 30–32.

8.  Eisenstein EL, Lemons PW, Tardiff BE, et al. Reducing the costs of Phase III cardiovascular clinical trials. American Heart Journal 3 (149). 2005; 482-488. http://www.ahjonline.com/article/S0002-8703(04)00591-5/abstract 

9.  Sheetz N, Wilson B, Benedict J, et al. Evaluating source data verification as a quality control measure in clinical trials. Therapeutic Innovation & Regulatory Science (48). 2014; 671-680.

10.  Tudur Smith C, Stocken DD, Dunn J, et al. The Value of Source Data Verification in a Cancer Clinical Trial. PLoS ONE. 7(12). 2012.

11.  Transcelerate BioPharma. Position Paper: Risk-Based Monitoring Methodology. 2013.

12.  FDA. Guidance for industry oversight of clinical investigations – a risk-based approach to monitoring. 2013.

13.  European Medicines Agency. Reflection paper on risk based quality management in clinical trials. 2013.

14.  MRC/DH/MHRA Joint Project. Risk-adapted approaches to the management of clinical trials of investigational medicinal products. 2011.

15.  ISO 9001:2008. Quality Management System.

16.  FDA. Oversight of Clinical Investigations: A Risk-Based Approach to Monitoring (Draft Guidance) presentation. Oct 2011.

17.  Alsumidaie M. Conquering RBM: A Story on Medtronic's Risk Based Management Methodology. Applied Clinical Trials. Oct 14, 2014. http://www.appliedclinicaltrialsonline.com/conquering-rbm-story-medtronics-risk-based-management-methodology

18.  ICH guideline on quality risk management (Q9). 2014.

19.  Toth-Allen J. Building Quality into Clinical Trials – An FDA Perspective. May 14 2012 Presentation.

20.  CTTI. Quality Objectives of Monitoring Workstream 2 Final Report. Project: Effective and Efficient Monitoring as a Component of Quality in the Conduct of Clinical Trials. 2009.

21.  Assuring Data Quality and Validity in Clinical Trials for Regulatory Decision Making: IOM Workshop Report. 1999.

22.  CTTI. Workshop on Quality Risk Management: Understanding What Matters. Jan 29-30, 2014. http://ctti-clinicaltrials.org/files/QbD_QRM/QRM-Workshop-Agenda-FINAL.pdf.

23.  Transcelerate BioPharma. Position Paper: Risk-Based Monitoring Methodology. 2013. 

24.  CTTI Quality by Design Workshops Project. Critical to Quality (CTQ) Factors. 07-Jan-2012. http://ctti-clinicaltrials.org/files/documents/QRMworkshop-PrinciplesDoc.pdf

25.  ICH guideline for Good Clinical Practice E6(R1). 1996.

26.  ISO 14155:2011, Clinical investigation of medical devices for human subjects – Good clinical practice, sections 5.7 and 6.3.

27.  Cooley S, Srinivasan B. Triggered Monitoring. Moving beyond a standard practice to a risk based approach that can save sponsors time and money. Applied Clinical Trials. Aug 1, 2010.

28.  Transcelerate BioPharma. Position Paper: Risk-Based Monitoring Methodology. 2013.

29.  ICH guideline for Good Clinical Practice E6(R1). 1996.

30.  Transcelerate BioPharma. Position Paper: Risk-Based Monitoring Methodology. 2013. 

31.  FDA. Guidance for industry oversight of clinical investigations – a risk-based approach to monitoring. 2013.

32.  European Medicines Agency. Reflection paper on risk based quality management in clinical trials. 2013.
