Data from Global RACT Analysis Reveals Subjectivity


The biopharmaceutical industry is starting to adopt TransCelerate’s Risk Assessment Categorization Tool (RACT) to identify risks and plan a comprehensive clinical trial risk mitigation strategy. We recently wrote about the RACT moving to the cloud and the advantages of such systems, which include the ability for study teams to evaluate R&D portfolio risks by collecting and analyzing RACT data in aggregate. In this article, we leverage industry-wide RACT data (provided by Cyntegrity) to examine which study-related risks teams flag most often.

In October 2015, Cyntegrity launched @RACT, a free cloud-based RACT tool that has been deployed across the industry. Over the span of seven months, the @RACT database collected 2,185 RACT data points from 28 RACTs submitted by about 20 small and large pharmaceutical companies. It is important to emphasize that these data cannot be used as a reference for any particular trial: they demonstrate only global trends in risk assessments and may not truly represent actual study risks. Study teams are strongly advised to evaluate their own study risks by completing their own RACTs.
 

Risk Assessment by Category

If you are new to the RACT, the tool analyzes risk across 13 categories (Figure 1) and helps study teams uncover critical study risks in a simple, structured, and standardized way.

Study teams assess Impact, Probability, and Detectability for a set of questions in each RACT category in order to generate a risk score for that category. Figure 2 shows that Endpoints, Complexity, and Blinding carried the highest risk scores (in red) compared with the other categories.

Factors impacting Study Endpoints include endpoint data collection methods and whether the study is event or outcome driven. Factors affecting Study Complexity include uncommon procedures beyond the usual standard of care, subject burden, Adverse Events (AEs) requiring adjudication, and the number of sites and subjects. Factors impacting Blinding involve the blinding setup, assignments, where the blinded investigational product (IP) is created, and the risk of unblinding.

 

Patterns with Probability, Detectability and Impact

The RACT’s rating system requires study teams to categorize the risk in each question by Probability (how likely the risk is to occur), Detectability (how easy the risk is to detect), and Impact (the extent of the damage or consequences if the risk materializes).
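The RACT itself does not prescribe a single scoring formula, so the sketch below is only illustrative: it assumes an FMEA-style scheme in which each question’s Low/Medium/High ratings map to 1/2/3, Detectability is rated so that harder-to-detect risks score higher, and a category’s risk score is the average of its per-question products. The function names and mappings are assumptions for illustration, not part of TransCelerate’s specification.

```python
# Minimal sketch of an FMEA-style category risk score (illustrative only).
# Assumptions: each RACT question is rated Low/Medium/High (mapped to 1/2/3)
# for Probability, Impact, and Detectability; Detectability is rated so that
# harder-to-detect risks score higher; the category score is the average of
# the per-question products. The RACT does not mandate this exact formula.

RATING = {"low": 1, "medium": 2, "high": 3}

def question_score(probability: str, impact: str, detectability: str) -> int:
    """Risk priority number for a single RACT question (range 1-27)."""
    return (RATING[probability.lower()]
            * RATING[impact.lower()]
            * RATING[detectability.lower()])

def category_score(questions: list[dict]) -> float:
    """Average question score for one RACT category, e.g. 'Blinding'."""
    scores = [question_score(q["probability"], q["impact"], q["detectability"])
              for q in questions]
    return sum(scores) / len(scores)

# Example: a hypothetical 'Blinding' category with two answered questions.
blinding = [
    {"probability": "high", "impact": "high", "detectability": "medium"},
    {"probability": "medium", "impact": "high", "detectability": "low"},
]
print(category_score(blinding))  # 12.0
```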

 

 

 

Figure 3 illustrates the effect of Probability, Detectability, and Impact (the three risk measures) on average risk scores and shows that Probability has the strongest influence. Two observations are worth delineating: (a) each risk measure affects risk scores differently; for example, Low Probability is associated with the lowest average risk score and High Probability with the highest; and (b) the slopes differ across the three measures.

This pattern introduces variability into risk score measurement. One likely explanation is that the RACT’s design invites subjectivity: study teams are free to reach their own conclusions and interpretations about certain risks within specific risk categories.
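Readers who export their own @RACT responses can reproduce this kind of aggregate view with a few lines of analysis. The sketch below assumes a flat table with one row per answered question and hypothetical column names (probability, impact, detectability, risk_score); the file name and columns are illustrative, not the actual @RACT export format.

```python
# Minimal sketch of a Figure 3 style aggregation, assuming RACT responses
# have been exported to a flat table with one row per answered question and
# hypothetical columns: 'probability', 'impact', 'detectability' (each
# Low/Medium/High) plus a numeric 'risk_score'. Names are illustrative.
import pandas as pd

responses = pd.read_csv("ract_export.csv")  # hypothetical export file

levels = ["Low", "Medium", "High"]
for measure in ["probability", "impact", "detectability"]:
    avg = (responses.groupby(measure)["risk_score"]
                    .mean()
                    .reindex(levels))
    # The Low-to-High difference approximates the 'slope' of each measure.
    spread = avg["High"] - avg["Low"]
    print(f"{measure}: {avg.to_dict()} (Low-to-High spread: {spread:.2f})")
```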

 

Why Is the RACT Subjective?

One weakness of the RACT is that many answers are left to individual judgment: if two different teams completed the RACT for the same study, they would most likely arrive at two different risk assessment outcomes.

TransCelerate’s RACT was initially developed so that study teams could identify risks at the outset, during study setup, and during study conduct. It was designed with project management requirements in mind, and its categories focus primarily on study operations. This approach is not wrong; however, it makes it easy to lose sight of the requirements of a risk assessment that regulators would want to see. For example, the safety section has clearly been given a lot of thought, yet essential questions are missing, such as "Does the protocol contain clear definitions of adverse events and clear guidance on safety reporting?" or "How often does the study team plan to reconcile the clinical trial database against the serious adverse event database?" In addition, no quantitative reference points are requested; assessments are made on the basis of purely subjective, team-driven criteria. Further, questions about disclosure are missing.

There are further examples showing that project management requirements regarding timelines, meeting endpoints, and budget drove the RACT’s design more than compliance did. Nonetheless, the structured risk assessment does cover many aspects of compliance pertaining to data integrity and patient safety.

One initiative that can standardize risk assessments is the “RACT plus” questionnaire, developed by PPH plus and integrated into @RACT, which includes tangible criteria for defining Impact, Probability, and Detectability for each question in the RACT. This clarification tool is intended to reduce subjectivity during risk evaluation.

Moreover, it is worth considering fixed analytical weights for each question and category, since certain categories are consistently more important than others. Leaving RACT scoring and weighting up to individual teams opens the door to inconsistency and hampers reproducibility from one protocol or project to the next. The most consistent approach to weighting may therefore be to link the weights retrospectively to historical successes and failures across trials. Cloud-based solutions enhance the ability to conduct such aggregate assessments.
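As a rough illustration of what fixed weighting could look like, the sketch below computes a weighted average of category scores. The weights and category names are purely hypothetical; in practice, as noted above, they would be derived from historical trial successes and failures rather than chosen by a study team.

```python
# Minimal sketch of fixed category weighting (illustrative only), assuming
# category risk scores have already been computed (e.g. with a function like
# category_score above). The weights are hypothetical placeholders, not
# values endorsed by TransCelerate or derived from the @RACT data.

CATEGORY_WEIGHTS = {          # hypothetical relative weights
    "Endpoints": 0.15,
    "Complexity": 0.12,
    "Blinding": 0.10,
    # ... remaining RACT categories would be listed here ...
}

def overall_study_risk(category_scores: dict[str, float]) -> float:
    """Weighted average of category risk scores using the fixed weights."""
    total_weight = sum(CATEGORY_WEIGHTS[c] for c in category_scores)
    weighted_sum = sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())
    return weighted_sum / total_weight

# Example with hypothetical category scores.
print(overall_study_risk({"Endpoints": 14.0, "Complexity": 9.5, "Blinding": 12.0}))
```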

 

Summary

In summary, the RACT provides the first essential step toward the standardization of risk assessments in clinical trials. However, the analysis in this article demonstrates that the structure of the RACT questions introduces subjectivity.

Standardized criteria for defining “low,” “medium,” and “high” for each risk measure (Impact, Probability, and Detectability), together with fixed analytical weighting, can enhance the consistency of risk assessment outcomes.

While the global risk assessment trend shows that study personnel tend to assign higher risk scores to categories such as “Endpoints,” “Complexity,” and “Blinding,” these category scores may not properly reflect actual study risks because of subjectivity, and more data are needed to establish consistency in categorical trends.

Looking ahead, it is important for the industry to evaluate the risks that regulators pay particular attention to, such as disclosure, safety reporting, and clear guidance on AE detection. Sections on outsourcing and vendor oversight, as well as on ethics and regulatory questions, would also be reasonable additions.

While the RACT has advanced study teams’ ability to interpret risk, it still has a long way to go and will need continuous refinement and real-world testing to deliver consistent study risk outcomes.
