Pfizer has created a clinical trial modeling tool for mitigating study risk during the protocol design and study execution phases. Jonathan Rowe speaks with Moe Alsumidaie on the purpose behind these predictive models.
Pfizer has developed a clinical trial quality risk predictive modeling tool to mitigate study risk during the protocol design and study execution phases. We had the opportunity to interview Jonathan Rowe, Executive Director, Head of Clinical Development Quality Performance and Risk Management at Pfizer, to elaborate on these predictive models.

Moe Alsumidaie: Can you describe the purpose of the predictive models that Pfizer has developed? What are they meant to accomplish?

Jonathan Rowe: There are quite a few models in the GCP quality performance space that we have developed and continue to refine. One relatively straightforward model is a correlation model, in which we correlated our clinical trial process performance with select outcomes of GCP as defined in ICH E6. Examples of GCP outcomes include factors such as whether patients are consented properly, or whether the rights, safety, and wellbeing of subjects are being protected. We take those GCP outcomes and, in conjunction with the clinical trial quality metrics that we collect, build correlation models to see whether any of those metrics can predict if we are going to have issues achieving the GCP outcomes. Moreover, we integrate a time scale into the models, so a study team might be able to predict, a few months in advance, the likelihood of a GCP issue. It is an early warning system that helps us ensure our GCP outcomes are good. This model was built using many dozens of existing clinical trial process quality metrics from the clinical development space, assessing their relationships statistically within and across a large cohort of studies.
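To make the correlation idea concrete, here is a minimal, purely illustrative sketch in Python: it checks whether one process metric correlates with one GCP outcome rate across a small cohort of studies. The metric names and numbers are invented for illustration; Pfizer's actual metrics and data are not public.

```python
from math import sqrt

# Invented per-study values for one hypothetical clinical trial process
# metric and one hypothetical GCP outcome rate, across eight studies.
query_resolution_days = [4.0, 9.5, 3.2, 12.1, 6.8, 15.0, 5.5, 10.2]
consent_deviation_rate = [0.01, 0.04, 0.01, 0.06, 0.02, 0.07, 0.02, 0.05]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(query_resolution_days, consent_deviation_rate)
print(f"Pearson r = {r:.2f}")  # a strong r would flag this metric as predictive
```

A model like the one described would screen many dozens of such metrics against each GCP outcome across hundreds of studies, with a time lag built in so that a flagged metric can serve as an early warning rather than a postmortem.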
Another, more complex, model was built to aid Pfizer in predicting the risk of certain quality events in a clinical trial. We recently began running this model on our new protocols in order to build our understanding of quality risk areas and proactively mitigate quality risk. We call the model, appropriately, the study risk prediction model. For this model, we began by analyzing quality event data, such as normalized protocol deviations, significant quality events, and protocol amendments, from 406 studies, and searched for relationships with information we collect at the beginning of protocol design. Pfizer clinical trial teams are required to examine study quality risk by, among other things, thoughtfully reviewing a database of possible study risks, called the question bank, and then establishing mitigations for those risks. The question bank is part of our integrated quality risk management planning (IQRAMP) for protocols. The questions range from straightforward study risks, such as 'Is the mechanism of action novel? Is the study going to be multinational? How many exclusion criteria are there?' to questions about more complex risks. The study risk prediction model statistically correlated quality events with how the questions in the question bank were answered, in conjunction with study attributes. What we are trying to accomplish is improved quality risk planning; obviously, the more you know about your risks, the better you can mitigate them upfront. This type of analysis may be the first in the industry, and we are now integrating it into our development process.

MA: How do those predictive models impact study team decisions during protocol design?
JR: Since we have the question bank, and we can correlate how those questions are answered with quality outcomes, a study team can review its protocol and take action to reduce the risk of quality events, either by altering a protocol component or through mitigation planning. For example, some protocols may inherently carry quality risk because of complex dosing, and that dosing regimen must persist. The study risk prediction tool and question bank allow the team to thoughtfully and proactively mitigate, and to be more vigilant in the high-risk area, in order to reduce mistakes, deviations, and so on. We hope this continues to drive our goal of reducing quality events. A near-term vision for the study risk prediction model is to use it to support appropriate monitoring. For example, if a study is predicted to be high risk, the team may want to be especially diligent in monitoring, whereas a low-risk study could warrant more of a risk-based approach. Understanding risk allows for better resource planning.

MA: What were some of the challenges you faced when developing these models?

JR: Some of the modeling challenges come from different paradigms or processes that may have been used in earlier studies. Perhaps something that was a risk years ago is not a risk now. Conversely, new risks are identified that may not have been in our risk bank in the past. The models have to be based on studies that reflect modern trial operations and infrastructures. We are continuously renewing the model, which requires much thought and validation; doing these updates ultimately generates a much better model. We have currently attained 80% accuracy in predicting clinical trial quality issues, and as we continue to add more studies and refresh the model, we expect the accuracy to improve to 85%. We will never get to 95% to 100%, but if we get to 85% to 90% accuracy, it will significantly improve trial quality performance predictability.
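The question-bank-driven prediction described above can be caricatured as a classification problem: encode the answers as features and fit a model that outputs a probability of a quality event. The sketch below uses a toy logistic regression trained by gradient descent; the features, data, and model are all invented for illustration and bear no relation to Pfizer's actual model or its roughly 90 variables.

```python
import math

# Each row encodes invented question-bank answers for one hypothetical study:
# [novel mechanism of action? (0/1), multinational? (0/1),
#  number of exclusion criteria (scaled to 0..1)]
X = [[1, 1, 0.9], [0, 0, 0.2], [1, 0, 0.7], [0, 1, 0.3],
     [1, 1, 0.8], [0, 0, 0.1], [0, 1, 0.4], [1, 1, 1.0]]
y = [1, 0, 1, 0, 1, 0, 0, 1]  # 1 = study had a significant quality event

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
        err = p - yi
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
        b -= lr * err

# Score a hypothetical new protocol's question-bank answers.
new_protocol = [1, 1, 0.6]
risk = sigmoid(sum(wj * xj for wj, xj in zip(w, new_protocol)) + b)
print(f"predicted quality-event risk: {risk:.2f}")
```

A score like this would only be a planning aid: a high predicted risk prompts mitigation planning and more diligent monitoring for the flagged protocol, along the lines described in the answers above.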
MA: Do you expect to see something like this going through a data sharing initiative (i.e., via TransCelerate)?

JR: Sharing the approach is not an issue. The issue is that we don't know whether our results can be translated to another company if that company has different processes for executing clinical trials. In our model, we use not only the question bank data but also study attributes, up to 90 different variables. If our clinical trial process is different from another company's, or we collect variables that are unique to us, the results of the model may not translate; every company is unique.

MA: Pfizer is a large enterprise and has access to many studies. Is this model also appropriate for smaller pharma companies that have much less data, say fewer than 10 trials? How can those enterprises access capabilities similar to the one your team has developed?

JR: Having more data gives you more statistical power. If you have 10 completed studies, the approach might work, provided you also have effective clinical trial quality risk management processes, because you need to be able to tie quality outcomes back to risk planning. This model ties together effective planning with realized quality outcomes. We used 406 studies across a number of therapeutic areas, and when we optimized our models per therapeutic area, we used perhaps a fifth of that data and still had very good statistical power to achieve the kind of capability we were looking for. In summary, this initiative isn't necessarily about protocol optimization, although it definitely gets people thinking about it. It is really about mitigating GCP quality risk. It allows us to understand which trials are risky, and it allows teams to say, 'I know what my risks are, and these are the mitigations that I will put in place in order to ensure quality in the study.'