Vendor oversight models continue to evolve as biopharmaceutical enterprises move toward data-driven approaches and more collaborative models with suppliers. Kristy Galante, Director Process and Infrastructure of External Alliances at Janssen, recently spoke about a novel vendor oversight model at ExL’s 8th Clinical Quality Oversight Forum, and expands on the model in this interview. ExL’s 7th CROWN Congress takes place January 23-25 in Philadelphia.
Moe Alsumidaie: What are the current challenges with clinical trial vendor oversight?
Kristy Galante: There are several challenges in clinical trial vendor oversight. One is that sponsors often do not take enough time to strategize about how they want to define oversight and how that definition links to contracts with various suppliers/vendors; suppliers and sponsors are often not aligned on oversight definitions and methodologies, which creates challenges at the team level ranging from miscommunication to underperformance. Study teams and suppliers have run into oversight challenges when clinical trials were not set up with clear deliverables, timelines, standardized metrics, and outputs. Another challenge is the subjective aspect of vendor oversight; while we can objectively measure key risk and performance indicators, we often find conflicting subjective feedback from study teams or vendors. We appreciate that a data-driven approach improves not only vendor oversight, but also consistency and communication. Accordingly, we have developed an oversight model so that vendors know what Janssen is looking for, and Janssen knows what it needs to do to maintain oversight at various levels, from the study team through governance committees.
MA: How is Janssen tackling those challenges?
KG: At Janssen, we have developed an oversight model that incorporates both analytical and subjective measures. In our analytical process, we define leading indicators and key performance metrics at the very beginning of a clinical project; it is important for the supplier to also know what these metrics are and to find a way to report them to the sponsor (e.g., via database integration from a CRO’s CTMS, study status updates/Excel sheets, etc.). We have also spent time designing an oversight model at the business process level, where various levels of management have oversight of study events with various suppliers and of how to act if key risk and performance indicators are triggered. As trial teams go through status reports, we roll them up to the next level, where we merge multiple trials with the same vendor and produce a dashboard that lets us look at all trials holistically to see how the studies are performing. This gives us a higher-level management overview of what's happening at both the trial team and vendor oversight levels, so that we can better manage vendors and predict study-related issues. On subjective measurement, we have included sections in study status reports that give both study teams and vendors the opportunity to provide feedback on performance; this enables us to supplement analytical measures to better evaluate study oversight and performance.
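The roll-up described above, merging per-trial status reports into a vendor-level dashboard view, amounts to a simple aggregation. The following is a minimal sketch; the record fields, metric names, and trial/vendor labels are hypothetical illustrations, not Janssen's actual schema:

```python
from statistics import mean

# Hypothetical per-trial status records; field names are illustrative only.
trial_reports = [
    {"trial": "TRIAL-001", "vendor": "CRO-A", "on_time_visits_pct": 92, "open_capas": 1},
    {"trial": "TRIAL-002", "vendor": "CRO-A", "on_time_visits_pct": 78, "open_capas": 4},
    {"trial": "TRIAL-003", "vendor": "CRO-B", "on_time_visits_pct": 95, "open_capas": 0},
]

def roll_up_by_vendor(reports):
    """Merge trial-level reports into a vendor-level dashboard view."""
    by_vendor = {}
    for r in reports:
        by_vendor.setdefault(r["vendor"], []).append(r)
    dashboard = {}
    for vendor, rows in by_vendor.items():
        dashboard[vendor] = {
            "trials": [r["trial"] for r in rows],
            "avg_on_time_visits_pct": mean(r["on_time_visits_pct"] for r in rows),
            "total_open_capas": sum(r["open_capas"] for r in rows),
        }
    return dashboard

dashboard = roll_up_by_vendor(trial_reports)
```

In this toy view, CRO-A's two trials average 85% on-time visits with five open CAPAs between them, the kind of holistic, cross-trial figure a management dashboard would surface.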
MA: What KPIs/KRIs is Janssen using for analytical vendor oversight?
KG: Examples include delivery, quality, and enrollment metrics. Others include monitoring visits completed within a specified window, and quality metrics triggered according to the Risk-Based Monitoring (RBM) plan. We also measure site staff training extensively, to ensure vendor (i.e., CRO) staff are trained according to their own curriculum as well as the additional curriculum provided by Janssen. Additional metrics cover the eTMF and how the vendor (CRO) is filing documents at the protocol, country, and core levels. We also look at open CAPAs and how many CAPAs are overdue. Other indicators include CTMS data-entry metrics, protocol deviation rates, and data management-related metrics such as source document verification and query rates.
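As a rough illustration of how such indicators might be checked against pre-defined limits so that a breach triggers action, the sketch below flags metrics that fall outside their thresholds. The metric names and limits here are assumptions for illustration, not Janssen's actual indicators or values:

```python
# Illustrative KPI/KRI thresholds (hypothetical names and limits).
THRESHOLDS = {
    "monitoring_visits_in_window_pct": ("min", 90),  # flag if value falls below 90
    "overdue_capas": ("max", 2),                     # flag if more than 2 overdue
    "protocol_deviation_rate": ("max", 0.05),        # flag if rate exceeds 5%
}

def triggered_indicators(metrics):
    """Return the names of indicators whose reported values breach their thresholds."""
    flags = []
    for name, (kind, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported this period
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            flags.append(name)
    return flags

flags = triggered_indicators(
    {"monitoring_visits_in_window_pct": 86, "overdue_capas": 1, "protocol_deviation_rate": 0.07}
)
# flags -> ["monitoring_visits_in_window_pct", "protocol_deviation_rate"]
```

A triggered indicator would then feed the escalation path described earlier, from the study team up through governance committees.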
MA: Has Janssen’s oversight model changed vendor performance? If so, how, and are you able to measure performance over time?
KG: Our oversight model has changed vendor performance in a proactive way. We began working on it around two years ago. In the very beginning, implementation was a little challenging because we needed to identify and pull data from numerous sources and ensure that everyone knew what the data represented. Additionally, we decided to let vendors know what is pre-defined from a central oversight perspective, so they also knew how they would be measured. While challenging initially, the model eventually fostered in-depth collaboration between Janssen and our vendors. The oversight model was eye-opening to both the vendors (CROs) and Janssen; it identified areas of underperformance and demonstrated to trial teams that they need to look at every system, every aspect, and every function to produce the most robust, quality- and performance-driven trial.
To see whether it has been helpful, we have done some trending. We have certainly seen quality and performance scores improve over time. We have also conducted initial assessments of early trends with vendors, where my group (External Alliances) immediately engages the vendor to evaluate performance and implement actions to address underperformance. There is an additional layer of oversight in governance meetings between the CRO and Janssen, where the leading indicator and performance metrics are discussed.
In totality, the oversight model requires that the sponsor and vendor collaborate to continuously improve, with analytics as a basis for directing performance. These results would not be possible without analytics and reporting, which give us consistent awareness and enable us to identify specific areas of underperformance and subsequently direct action.