Applied Clinical Trials
Whenever a new clinical study is in the planning stage, the operational teams are busy trying to locate recruitment data from similar previous studies to project the number of sites needed, forecast study timelines, and make other key decisions. Having reference data always helps and is certainly a good benchmark; however, several considerations suggest that historical figures should be taken with a grain of salt.
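The projection arithmetic itself is simple, which is part of what makes historical rates so tempting to reuse. A minimal sketch of the back-of-envelope calculation — all figures and names here are illustrative assumptions, not data from any real study:

```python
import math

# Illustrative back-of-envelope projection: sites needed to hit an
# enrollment target, given a per-site monthly recruitment rate taken
# from a historical study. All figures below are hypothetical.

def sites_needed(target_subjects, rate_per_site_month, months):
    """Sites required, assuming the historical rate holds steadily."""
    return math.ceil(target_subjects / (rate_per_site_month * months))

# e.g. 300 subjects, a historical 0.8 subjects/site/month, 12 months:
print(sites_needed(300, 0.8, 12))  # 32 sites
```

The fragility, of course, is the single input `rate_per_site_month`: every consideration discussed below can push the realized rate away from the historical one.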
The problem is, trials are never identical. Apart from well-documented differences (inclusion/exclusion criteria, study design, study footprint, etc.), there are less tangible, but no less important, considerations.
The Role of Sites
One of the most crucial factors in achieving rapid subject recruitment is having motivated sites, and that is easier said than done. The ops team must secure a favorable combination of agreeable study design, appropriate grant amounts, and low competition for time and resources from rival projects, among a host of other factors. Site CRAs can also have a tremendous impact on overall site enrollment.
Furthermore, sites are becoming increasingly sensitive to any logistical or administrative hiccups in clinical studies, and these may severely impact team motivation as well as subject recruitment rates. As a real-life example, in a recent oncology study a central histology lab had been using very low threshold values, per instructions from the study sponsor, so that even minute receptor expression could be flagged. This increased sensitivity resulted in multiple discrepancies with local lab reports, and since the PIs had no reason to distrust their local colleagues, study teams had heated discussions about the credibility of the central lab's reports. During the study close-out meetings, many KOLs commented that they would not bother screening in any future study using the same central laboratory, so the recruitment rates from this study cannot be taken at face value. In short, if a given site was motivated yesterday and recruited well, it does not automatically follow that the same will be true tomorrow, even for a sister study.
It is indeed of utmost importance to have the right mix of study sites for a trial to succeed, so both study sponsors and CROs work hard to attract and retain top recruiters, using various strategies. For obvious reasons, top recruiting sites gravitate toward successful companies, often entering into preferred provider relationships. The data obtained through such collaborations may prove difficult to extrapolate to prospective trials using a different site relationship model. As a side note, a site that is more motivated than others will tend to encourage prospective subjects to classify their symptoms as more pronounced in order to become eligible and join the trial, so high recruitment rates in this relationship model may be associated with questionable data.
Study start-up times can also impact a site's ability to enroll patients. If there are delays in the time it takes for a site to be initiated and ready to enroll, then any patients lined up for that protocol may well be enrolled in whatever else is ongoing at that site.
When Your Reputation Precedes You
The other consideration, which is becoming more and more important nowadays, is neuromarketing and the creation of high expectations for the study drug. In an increasingly competitive market, pharmaceutical companies frequently go out of their way to promote a potential blockbuster as early as Phase I. This boils down to more aggressively marketed trials being more successful in recruiting subjects, all other assumptions being equal. The flip side of aggressive neuromarketing is a more pronounced placebo response, as shown in several recent studies, but that is a different story.
The Impact of Follow-Up
Route and frequency of drug administration is another key factor in predicting recruitment rates. Indeed, even for a very promising molecule, being injectable is undesirable for many prospective patients, who might opt to stay on moderately effective oral therapy instead and never join the study. In a similar vein, it has been shown that a placebo administered twice daily provides more significant relief than the same placebo administered just once. Thus more frequent drug administration during a clinical trial may inflate the response rate, create overly optimistic expectations of the study drug after a few early subjects have shared their experience, and indirectly promote recruitment in the second half of the study. These days, study subjects share their experiences as soon as they leave the hospital, and there are multiple online platforms available to host this kind of feedback and keep the community well informed. With that in mind, trials with more frequent drug administration (up to a point) may recruit more patients for this reason alone.
Seasonal factors may also prove crucial to whether a clinical study recruits well. For example, in any trial spanning European sites, screening rates in September-October will almost inevitably be higher than in July-August, due to vacation schedules and even summer hospital closures in some southern countries. The end of the year is quite often a dead season for the majority of “classic” study countries, and similar seasonal patterns can be found pretty much everywhere on the globe. In other words, the same EU study opened for recruitment in September may well have a shorter screening period than one opened in July.
Overall, if study teams have access to more historical data for a given indication (or country), the considerations above balance out and future projections become more reliable. In recent years, however, we have seen an increasing number of trials targeting rare diseases, and by definition the historical data in this setting is quite limited. The question is, how much attention should be paid to historical recruitment figures if the team can only review data from one (or two) similar studies? Decisions based on such limited information may turn out to be very costly at the end of the day, so it may be advisable to treat reference data for what it is: a reference, not guidance, and by no means a predictor of future performance.
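One way to see how fragile a rate estimated from a single reference study can be: if per-site enrollment is treated as roughly Poisson, the rate observed in one modest study swings noticeably from sample to sample, before any of the motivational or seasonal effects discussed above are even considered. A toy simulation, where every parameter is an illustrative assumption rather than real trial data:

```python
import math
import random

random.seed(42)

TRUE_RATE = 0.8          # hypothetical "true" subjects/site/month
SITES, MONTHS = 20, 12   # one modest historical reference study

def poisson(lam):
    """Single Poisson draw via Knuth's algorithm (stdlib-only)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def observed_rate():
    """Per-site monthly rate as estimated from one simulated study."""
    total = sum(poisson(TRUE_RATE * MONTHS) for _ in range(SITES))
    return total / (SITES * MONTHS)

# Re-run the "historical study" 1,000 times and look at the spread
# of the rate a planning team would have taken at face value.
estimates = [observed_rate() for _ in range(1000)]
print(f"min={min(estimates):.2f} max={max(estimates):.2f}")
```

Even in this idealized setup, with no site-motivation or seasonality effects at all, the estimate scatters around the true rate; with one or two reference studies, a team cannot tell where in that scatter their benchmark happened to fall.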
Victor Muts, MD, PhD, Associate Project Director, Project Management, Syneos Health