CRA Skills Lacking in Critical Areas


Applied Clinical Trials

A global CRO’s data gathered using an objective monitoring simulation administered by CRA Assessments, LLC reveals that CRAs are consistently underperforming regardless of the level of experience or training.

Clinical Research Associates (CRAs) play a crucial role in clinical trials, including ensuring site regulatory and protocol compliance, data quality, and data integrity. CRA experience is generally accepted as a key determinant of monitoring quality, along with the extent of generalized CRA training to support monitoring readiness. However, a global CRO’s data, gathered using an objective monitoring simulation administered by CRA Assessments, LLC (CRAA), reveals that CRAs are consistently underperforming, especially in areas critical to maintaining site compliance, data integrity, and quality, regardless of their level of experience or training. CRAA believes its objective monitoring simulation is well suited to evaluating the monitoring competency of CRAs.

In the traditional, routinely used method of assessing CRA monitoring competency, CRAs are accompanied to different clinical research sites and assessed by different evaluators (with varying experience levels), who judge CRA performance using varying processes and identify different sets of issues at the respective sites. Additionally, no meaningful compilation of outcomes or tangible measure of improvement from assessment to assessment is obtained. This variability and lack of empirical data, coupled with the cost and scheduling limitations inherent in planning on-site visits, makes it difficult to accurately and efficiently assess CRA skills-based monitoring competency across an organization.

CRAA’s method for evaluating CRAs uses its web-based monitoring simulation, designed to replicate the CRA’s experience at an investigative site. Within the simulation, CRAs are asked to complete monitoring activities as they normally would at an actual clinical research site. The simulation contains issues (or “Findings”) embedded in the site documents and study drug that span eight skills-based monitoring competency “Domains.” The CRAs are tasked with creating monitoring notes within the simulation system to capture the Findings identified during review of the simulation documents and to provide guidance to site personnel regarding follow-up actions. Once the CRA completes the simulation, the monitoring notes are scored objectively by two independent scorers (former CRAs and/or auditors) to determine which Findings within the simulation were correctly identified; any discrepancies are adjudicated by a third scorer. The results are then reported to the respective clients via an Admin Portal.
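As an illustration only (a hypothetical sketch, not CRAA’s actual scoring system), the two-scorer-plus-adjudication rule described above can be expressed in a few lines of Python:

```python
# Hypothetical sketch of the adjudication rule described above: two independent
# scorers mark each Finding as identified or missed; a third scorer breaks ties.
def finding_identified(scorer_a, scorer_b, adjudicator=None):
    """Return whether a Finding counts as correctly identified by the CRA."""
    if scorer_a == scorer_b:
        return scorer_a              # the two scorers agree; accept their call
    if adjudicator is None:
        raise ValueError("Scorer discrepancy requires third-scorer adjudication")
    return adjudicator               # the third scorer's decision is final

# Example: scorers disagree, and the adjudicator rules the Finding was identified
print(finding_identified(True, False, adjudicator=True))  # True
```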

Methodology

CRAA’s data sample consisted of assessments conducted on 579 unique CRAs from a global CRO, located across four global regions: (a) North America, (b) Europe, Middle East, and Africa (EMEA), (c) Latin America, and (d) Asia-Pacific. Additional collected data included CRA title, years of experience, number of monitoring notes, time to complete the CRAA simulation, and percentage scores for assessments in the following Domains: (1) ICF Process, (2) IRB/IEC Reporting, (3) Protocol Requirement, (4) IRB/IEC Submission/Approval, (5) Source Documentation, (6) Source to EDC/EDC, (7) Potential Fraud, and (8) Delegation of Authority.

Samples were selected so that the data most appropriately reflected global trends from the simulation, and Python-based data science algorithms were used to select the data for analysis. Of the 579 samples, 233 were from North America, 225 from EMEA, 79 from Asia-Pacific, and 42 from Latin America. Of these 579 samples, 429 had a CRA title provided (while 150 did not): 177 from North America, 165 from EMEA, 48 from Asia-Pacific, and 39 from Latin America. By title, 282 were Sr. CRAs, 78 were CRA IIs, and 69 were CRA Is. An average score was created for each CRA by averaging the scores from the eight domains; this average score was used in all analyses in this article, as sketched below. Figure 1 shows the distribution of years of experience for each CRA title.
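For illustration, the averaging step can be sketched in Python with pandas. This is a minimal sketch under assumptions: the column names are hypothetical (not CRAA’s actual schema), with one row per CRA and one percentage score per Domain.

```python
import pandas as pd

# Hypothetical column names for the eight Domains (one percentage score per Domain)
DOMAINS = [
    "icf_process", "irb_iec_reporting", "protocol_requirement",
    "irb_iec_submission_approval", "source_documentation",
    "source_to_edc", "potential_fraud", "delegation_of_authority",
]

def add_average_score(df: pd.DataFrame) -> pd.DataFrame:
    """Average the eight Domain scores into a single score per CRA."""
    out = df.copy()
    out["average_score"] = out[DOMAINS].mean(axis=1)
    return out

# Example usage: summarize average scores by region and CRA title (as in Figures 2 and 5)
# summary = add_average_score(df).groupby(["region", "cra_title"])["average_score"].mean()
```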

Figure 1: Years of Experience by CRA Level (N = 429)

Results

Figure 2 shows that all CRAs exhibited similar average scores (inclusive of all domains) regardless of CRA title: CRA Is averaged 64%, CRA IIs averaged 59%, and Sr. CRAs averaged 60%. The average score across all CRAs was 60%.

Figure 2: Average CRA Score by CRA Title (N = 429 with Confidence Intervals, P<0.01) **

Furthermore, when evaluating individual monitoring domains, CRAs in aggregate performed best in the following domains: ICF Process, Protocol Requirement, and Delegation of Authority (Figure 3). Scores were lower in the IRB/IEC Reporting, IRB/IEC Submission/Approval, Source Documentation, and Source to EDC/EDC domains. Potential Fraud was by far the lowest-scoring of the eight domains.

Figure 3: Scores for All CRAs in 8 Domains (N= 579 with Confidence Intervals)

Upon analyzing each average domain score by CRA title (Figure 4), most CRAs appear to have performed equivalently; however, CRA Is performed best in seven of the eight domains, followed by Sr. CRAs, then CRA IIs, consistent with the findings in Figure 2.

Figure 4: Average Domain Scores by CRA Title (N= 429 with Confidence Intervals)   

Figure 5 shows that CRAs in Latin America exhibited the highest average scores across all domains, averaging 71%; EMEA averaged 63%, Asia-Pacific 61%, and North America 57%*.

Figure 5: Average Score Bell Curves by CRA Title in Global Regions (N = 429)*

 

Discussion

The analysis reveals, first and foremost, that when administered an objective monitoring simulation, CRAs’ average competency scores were near equivalent regardless of CRA title (Figure 2). In addition, there was wide variability within the domains: every domain had at least one CRA score 100%, and conversely, seven of the eight domains had at least one CRA score 0%. There were also differences by region, with Latin America performing best, followed by EMEA, Asia-Pacific, and North America. “The data seen in this sample is consistent with other companies’ data related to CRA title, variability and differences between regions. The data reflects that there are no specific predictors of individual CRA competency,” said Gerald DeWolfe, CEO of CRAA.

Gustavo Poveda, Chief Operating Officer at CRAA, notes, “It is reasonable to believe that the degree to which a CRA is challenged by various data and compliance issues at study sites, the better positioned they will be to perform well when challenged by similar issues once again.” Figure 3 suggests that the variability and lack of predictability in average global CRA scores make it challenging for any sponsor or CRO to know CRA monitoring competency levels without administering a consistent and objective evaluation.

Another important observation is that CRAs are generally underperforming in areas critical to clinical trial data quality and integrity. “The overall average of 60% across all CRAs should not be acceptable in the global pharmaceutical research industry,” adds DeWolfe. Specifically, Figures 3 and 4 show five domains in which CRAs scored below desired competency levels: IRB/IEC Reporting, IRB/IEC Submission/Approval, Source Documentation, and Source to EDC/EDC, with Potential Fraud scores exhibiting the lowest competency of all.

Assessing CRAs via a more standardized, objective approach allows a company to appropriately evaluate monitoring competency. By understanding how CRAs perform (in which domains they achieve desired thresholds and in which domains they exhibit performance deficiencies), sponsors and CROs can focus resources on retraining and refocusing CRAs in critical areas. Because variability is seen across scores, regions, experience levels, and approaches, companies need a consistent way of assessing their CRAs and addressing remediation. In a subsequent article, we will describe a process by which sponsors and CROs can not only better detect these areas of deficiency but also improve CRA competency using an individualized, targeted remediation strategy.

 

* Average scores in EMEA and North America are more reliable than those in Asia-Pacific and Latin America because of their larger sample sizes and narrower score distributions. (Figure 5)

** A multivariate ANOVA regression was conducted to evaluate statistical significance. The dependent variable was average score; the independent variables were CRA levels (i.e., CRA I, CRA II, and Sr. CRA). Assuming an alpha of 0.05, the regression confirmed statistical significance with p < 0.01. (Figure 2)
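The article does not state which software performed this test; as an illustration only, a one-way ANOVA of average score by CRA title could be run in Python with statsmodels roughly as follows (toy data shown, not CRAA’s results):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Toy data standing in for the real per-CRA results (one row per CRA)
df = pd.DataFrame({
    "cra_title": ["CRA I", "CRA I", "CRA II", "CRA II", "Sr. CRA", "Sr. CRA"],
    "average_score": [64, 66, 58, 60, 59, 61],
})

# One-way ANOVA: does average score differ by CRA title?
model = ols("average_score ~ C(cra_title)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)  # compare the C(cra_title) row's PR(>F) value against alpha = 0.05
```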

 

Moe Alsumidaie, MBA, MSF is Chief Data Scientist at Annex Clinical, and Editorial Advisory Board member for and regular contributor to Applied Clinical Trials.
