Analysis of more than 330,000 trial protocols in Phesi's global database shows that predicting future enrollment performance by extrapolating historical data is flawed.
It is a common assumption, and a seemingly logical one: if the enrollment performance data from an investigator site are known, particularly if there is a lot of this data, one should be able to predict future enrollment performance by extrapolating from it. Phesi analysis of more than 330,000 trial protocols held in our global database shows this approach is flawed.
An investigator site’s ability to enroll patients depends on the protocol design. For instance, a sponsor designed two multiple sclerosis trials: one targeted female patients only, and the other included both male and female patients. Since roughly 75% of multiple sclerosis patients are female, the sponsor presumably assumed that, with enough sites, the female-only trial would enroll without issue. In fact, that trial encountered so many challenges that the protocol eventually had to be amended to include both male and female patients. By then the damage was done: the female-only study took almost twice as long to enroll fewer patients. Its Adjusted Site Enrollment Rate (ASER) was 0.33 patients/site/month, compared with 1.44 patients/site/month for the second trial, which included both men and women from the outset.
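To make the metric above concrete, here is a minimal sketch of a per-site enrollment rate in patients/site/month. Note this is the simple, unadjusted form with hypothetical numbers; the exact adjustments behind Phesi's ASER are not described in this article.

```python
def enrollment_rate(patients_enrolled: int, sites: int, months: float) -> float:
    """Simple enrollment rate in patients/site/month.

    Unadjusted illustration only: Phesi's Adjusted Site Enrollment
    Rate (ASER) applies corrections not described in this article.
    """
    return patients_enrolled / (sites * months)

# Hypothetical example: 100 patients enrolled across 25 sites over 12 months
rate = enrollment_rate(100, 25, 12)
print(round(rate, 2))  # 0.33 patients/site/month
```

Even this simple form makes clear why the two trials above differ so starkly: the same sites, enrolling under a more restrictive protocol, produce far fewer patients per site-month.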
An investigator site’s ability to enroll patients also depends on how well sponsors and their chosen CROs activate it. Sites activated toward the end of the enrollment period, as is typical in a ‘rescue’ situation, are less likely to emerge as top enrollers. Trial rescues, including the unplanned addition of sites, protocol amendments, and recruitment campaigns, are painfully common in clinical research; they are also very costly, and they hurt the relationships between investigator sites, patients, and clinical development organizations globally. It is worth noting that some of the best-performing sites consciously avoid being involved in such rescues. The important point is that structured predictive analytics approaches now exist to minimize, or even eliminate, disruptive, frustrating, and costly “rescue missions”.
Another key consideration is that the enrollment performance of investigator sites in different countries can differ markedly. An experienced and consistently high-performing investigator site in the USA might enroll one-third or fewer of the patients enrolled by its counterpart in, say, Poland, in the same trial. The complication of country allocation, and the level of sophistication needed to solve it, deserves a separate blog post. Suffice it to say that it is risky to compare enrollment data between investigator sites in different countries.
It is also risky to assume that historical site enrollment results correlate directly with patient data quality. An investigator site’s ability to enroll a large number of patients and its ability to provide reliable data, from which the sponsor can evaluate the safety and efficacy profile of a clinical development candidate, are two different things.
Make no mistake: historical investigator site enrollment data are valuable, but only as one piece of a complicated puzzle. Investigator site enrollment performance is dynamic, with many moving parts (variables) attached to it; what we have described here are only a few examples. Over the past decade, the efforts of AI and analytics companies have not only begun to quantify these variables but have also revealed the complex relationships among them. Sophisticated algorithms and increasingly large and varied sources of data allow us to better understand these interrelationships, to model the design of the development plan, the protocols, and the selection of investigator sites more accurately, and to make accurate predictions of site performance. Thus, we can now avoid the error-prone approach of extrapolation.
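The pitfall of extrapolation can be sketched in a few lines. Using the hypothetical rates from the multiple sclerosis example above, a naive forecast that averages a site's historical rates lands far from either actual outcome, because it ignores the variable that drove the difference: protocol design.

```python
# Illustrative only: hypothetical rates echoing the two MS trials above.
# The same site's observed rate varied more than fourfold with protocol design.
historical = {
    ("Site A", "female_only_protocol"): 0.33,   # patients/site/month
    ("Site A", "all_patients_protocol"): 1.44,  # patients/site/month
}

# Naive extrapolation: average the site's past rates and project forward,
# ignoring protocol design, activation timing, and country effects.
rates = list(historical.values())
naive_forecast = sum(rates) / len(rates)

print(round(naive_forecast, 2))  # close to neither observed rate
```

A model that conditions on the relevant variables (protocol design, activation timing, country, and so on) would instead produce a different forecast for each context, which is the essence of the analytics approach described above.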
Gen Li is the President of Phesi.