Precision Medicine Deserves Precision Trial Optimization


“Sometimes complex problems don’t have easy solutions.”

Bill Bradley, Hall of Fame basketball player, Rhodes scholar, and former three-term U.S. Senator from New Jersey


There’s no denying that the drug development industry prays at the altar of data. So here’s the data that troubles me the most: according to CenterWatch, there has been no improvement since 2000 in the industry’s ability to deliver clinical trials on time and on budget, even as the cost to develop a single product has skyrocketed.

This failure (and it is a failure) comes despite the fact that the same period saw an explosion in outsourcing to save money, billions of dollars per year in eClinical technology spending to improve efficiency, offshoring of trials to low-cost regions, the launch of risk-based monitoring and adaptive trials, and site and patient centricity, among other well-meaning and often positive efforts. Despite all of this, nearly 90% of trials still fail to meet milestones, and the cost to develop a single drug has risen to a virtually unfathomable $2.6 billion.

Let’s not forget what this failure means: fewer, more expensive treatments delivered later for those in need.

It also means it’s time for the industry to stop hoping for a silver bullet for trial optimization. That is no more likely than discovering one drug that cures all that ails us, because clinical trials are complex services subject to a myriad of influences. Notwithstanding outward similarities (therapeutic areas, endpoints, phases, and the like), each trial is as unique as the proverbial snowflake. Trials occur at different times, in different places, with different personnel, patients, protocols, and designs. They are subject to ever-changing laws and regulations, not to mention the fast-moving cultural norms of the 21st century. There are hurricanes, blizzards, regime changes, and other unpredictable events that have a real impact on the ability of sites and patients to follow even the best protocol. Is it really so surprising that there’s no cure-all, no wonder drug or “blockbuster,” to improve clinical research performance?

But there is a better way. Just as we’re moving away from blockbuster drugs toward “personal” and “precision” medicine, we need to move away from thinking there’s a magical solution to trial performance woes and toward “precision trial optimization.” This means understanding the performance and relative importance of the hundreds of factors underlying each trial and identifying not simply “best practices” but “best practitioners.” Trials should be run by the organizations and people best able to respond to the inherent yet ever-changing complexities of the industry’s critical work.

Accomplishing this requires measuring the underlying service quality that drives trial performance using a predictive analytics methodology. Traditional operational metrics measure only time and quantity, and they consistently leave one asking “why.” They also lack validity across trials that inevitably differ in meaningful ways. Benchmarking to a consensus mean or an internal mean is difficult, confusing, and often misleading. It doesn’t tell you why enrollment is lagging, why the query rate is high, or why case report forms are not completed on time. Was it training? The CRA? The forms? Poor communication? The protocol? Without knowing “why,” there is too much guessing at the “what” and the “how” of faster, less expensive trials. Understanding “why” will reveal “how.”

Yet we continue to rely on operational metrics because it’s easy and natural; after all, we’re a product-focused industry. The data is also increasingly available, thanks to the billions spent on eClinical solutions. But while that data is needed and useful in many ways, it is the underlying services driving those key performance indicators that must be measured rigorously to provide valid and reliable insights.

So why isn’t this happening? Because it’s hard. Because it’s different. Because it’s never been done before. Properly done, service quality measurement asks the industry to fairly consider the needs, interests, and perspectives of all stakeholders. That can only be done by collecting more data, not just playing with the data that already exists.

And to that challenge, I say “so what.” Retailers from Amazon to Ann Taylor, professional sports teams, and politicians have all embraced more sophisticated measurement. Lagging indicators and superficial averages are increasingly being replaced by sophisticated performance metrics and predictive analytics that harness the power of big and small data. In our own industry, pharmaceutical and biotechnology companies have committed billions to similar approaches in drug discovery. Isn’t it time an industry devoted to saving and improving lives did the same for clinical operations?

Peter Malamis is CEO of CRO Analytics, Inc.
