The Heavy Burden of Protocol Design

May 1, 2008

Applied Clinical Trials

More complex and demanding protocols are hurting clinical trial performance and success.

For two decades now, companies sponsoring clinical research have openly acknowledged that protocol design negatively impacts clinical trial performance and may well be the single largest source of delays in getting studies completed. But the magnitude of the problem and the challenge of re-engineering how protocols get crafted have squelched the will to attempt a fix. Investigative sites are instead admonished for sluggish enrollment and rising dropout rates.

Kenneth A. Getz

The real issue is that many patients no longer qualify for or simply lack the stamina to participate in the kinds of studies being rolled out by drug development companies today. In terms of procedures and their frequency, protocols are more demanding than ever before. That represents a growing burden to study subjects in terms of their time and comfort. It also places a greater workload on investigative sites for fewer relative dollars.

Design Changes

Key protocol design elements changed dramatically between 1999 and 2005, according to a recent study conducted by the Tufts Center for the Study of Drug Development and slated for publication in an upcoming issue of the American Journal of Therapeutics. The study scrutinized more than 10,000 unique Phase I–IV protocols formulated by over 75 pharmaceutical and biotechnology companies across a wide range of therapeutic areas.

Figure 1

During the six-year period, the number of unique procedures per protocol (e.g., routine physical exams, blood work, and heart activity assessments) increased at an annual rate of 6.5%, and their frequency at 8.7%. In 2005, across all therapeutic areas and research phases, an average of 158 procedures were conducted during the course of a clinical trial, with each unique procedure performed an average of 4.5 times. Unique procedures were conducted on average 5.4 times during a Phase I trial, 6.5 times during a Phase II trial, 4.0 times during a Phase III trial, and 3.1 times during a Phase IV trial.

The Tufts study found that questionnaires and subjective study volunteer assessments were the fastest growing procedure type. While these procedures may provide valuable information, they also require a higher level of volunteer compliance.

The typical protocol today has nearly 50 eligibility criteria. Whereas the number of exclusion criteria per protocol remained stable over the time period measured, the number of inclusion criteria saw nearly a three-fold increase.

Performance Impact

The study also examined conduct performance on 57 individual Phase II and III protocols investigating chronic illnesses and, predictably, found that performance worsened virtually across the board. The analysis controlled for geographic differences in study speed. The average overall duration of clinical trials increased 74%. Median cycle time from protocol readiness to both drug availability and last patient/last visit increased by a similar margin, as did median elapsed time from protocol readiness to data lock.

Enrollment rates for volunteers who met the rising number of protocol eligibility criteria dropped from 75% to 59% between the 1999–2002 and 2003–2006 time periods, while retention rates fell from 69% to 48%. Patient enrollment cycle times increased for protocols conducted in the latter time period. The average length of case report forms grew from 55 pages to a whopping 180 pages per protocol. Consent forms also lengthened. Moreover, there were higher numbers of protocol amendments and a dramatic rise in observed adverse events and severe adverse events. This last observation may be due in large part to changes in how adverse events are defined and counted.

Without a doubt, clinical research professionals put a great deal of care and attention into their study protocols. Yet these same protocols appear to lack adequate consideration for the human subjects participating in them and the investigative sites that must administer them.

Burdening Sites

For investigative sites, rising procedural volume is compounded by the rising work effort required to administer most procedures. Tufts CSDD created Work Effort Units (WEUs) based on Medicare's well-known Relative Value Unit (RVU) methodology, which calculates payment to physicians based on the estimated value of their time and expertise in administering medical procedures. Procedures not assigned a Medicare RVU were assigned a WEU established by a panel of physicians at the Tufts University School of Medicine.
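The WEU arithmetic described above amounts to a weighted sum: each procedure's frequency multiplied by its work-effort weight, totaled across the protocol. A minimal sketch of that calculation, using hypothetical procedure names and invented WEU weights (the study's actual weights are not reproduced here):

```python
# Hypothetical sketch of the WEU calculation: total site work burden is the
# sum, over all procedures in a protocol, of frequency x work-effort weight.
# Procedure names and WEU values below are invented for illustration only.

def protocol_work_burden(procedures):
    """procedures: iterable of (name, frequency, weu) tuples."""
    return sum(frequency * weu for _, frequency, weu in procedures)

sample_protocol = [
    ("routine physical exam", 5, 1.2),     # 5 administrations, WEU 1.2
    ("blood work", 8, 0.5),                # 8 administrations, WEU 0.5
    ("heart activity assessment", 3, 0.9), # 3 administrations, WEU 0.9
]

print(round(protocol_work_burden(sample_protocol), 1))
```

Comparing such totals across a portfolio of protocols, or year over year, is what allows a burden trend like the 10.5% annual increase reported below to be quantified.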

Based on WEUs, the Tufts CSDD study found that workload on sites increased by 10.5% annually between 1999 and 2005. The burden on investigative sites to administer protocol procedures is thus rising faster than the number of unique procedures and their frequency. For Phase I trials, investigative site work burden mushroomed by 14%. Ophthalmology, oncology, and pharmacokinetics protocols place the highest work burden on sites at this time. Work burden has increased most in gastrointestinal, hematology, pain, and anesthesia studies. Yet grant funding per procedure declined over the Tufts CSDD study period by 8% annually.

The results of the Tufts study undeniably suggest that the current and intense focus of sponsor companies and CROs on micromanaging investigative sites to drive better study conduct performance is to a large extent misplaced. Until the spotlight is shone on the protocols themselves, and the internal processes to design and approve them, the impact of performance improvement strategies will be marginal at best.

Improving the Process

A better way to accelerate clinical trials begins with sponsor companies benchmarking their own protocol designs against those in the Tufts Center's analysis of 10,038 protocols. Doing so would give sponsors a starting point for identifying, balancing, and prioritizing what is truly necessary in a protocol and what is merely nice to have.

Alternatively, companies could look at the protocols they're now developing and apply the Tufts CSDD WEU methodology to assess the workload those protocols will create for investigative sites. That would go a long way toward raising awareness that every protocol decision has a downstream effect on speed and efficiency.

As companies that have examined these problems can attest, design elements are often brought into protocols because influential staff champion them or because they pose scientifically intriguing questions. They don't necessarily meet a regulatory obligation and might not even be a good idea.

The current protocol design process clearly needs attention. The incidence of protocol amendments continues to rise, adding to an already cost-intensive activity. Self-reported estimates put the cost of implementing a single amendment at anywhere from $250,000 to $450,000, and biopharmaceutical companies estimate that they make an average of three amendments for each Phase I protocol and three to five amendments for each Phase II and Phase III study.

Assessing the feasibility of protocol designs—with a heightened sensitivity to the burden they place on the investigative sites that administer them and on patient willingness to volunteer for and stay in them—is a critical step that must be taken if sponsor companies hope to achieve sustainable drug development efficiencies. Today's safety-sensitive operating environment, coupled with the changing nature of therapies in the pipeline, will encourage yet more growth in the number and frequency of procedures per protocol. But the notion that more is better must be challenged for its scientific value and its impact on performance. The burden of protocol design is mounting. The price of doing nothing may never have been higher.