Effective Comparisons: Baseball and CER

Article

Applied Clinical Trials, August 1, 2010

The importance of comparative effectiveness research and how to overcome its challenges.

For some of us, the heart of the summer brings to mind the desire for cold drinks, warm books, Arcadian vacation sites, and baseball. Now while baseball is an antiquated nineteenth-century sport to some, and often a pretty dull one at that, it holds a special attraction for the statistically inclined—even those in clinical research—since it's such a rich source of precise data about what players and teams have done in the past under specific circumstances.

In many ways, baseball is all about data, which analysts like the venerable Bill James (not to mention many more statisticians, real and amateur—aka fans) study at great length. These data help us imagine whether our team has a ghost of a chance this year; whether our favorite players are on the upswing, plateau, or denouement of their careers; and eventually lead us to determine the effectiveness of team management in putting together the best teams for the money. Ultimately, it's this access to data that allows us to justify, or at least rationalize, the high cost of game attendance (to say nothing of the often higher cost of passionate team loyalty).

Of course, I carry a particular bias in this regard, since I live in Chicago, home of the lovable, tragicomic Chicago Cubs. Here we absorb the hope and promise of our baseball statistics with a healthy dose of fatalism. Cubs fans revel in the Wrigley experience—but it's best we don't look too closely at the comparative effectiveness of the Cubs' payroll-to-win ratio vs. that of the competition. Then again, at least the data is there for us, even if we choose to ignore it while dreaming wistfully of next year.

Understanding CER

And this brings me to the timely topic of Comparative Effectiveness Research (CER). Like baseball, CER is all about data. In this case, it's the study of health data to identify which therapies may provide the best outcomes.

To be more exact, the Department of Health and Human Services defines CER as "the conduct and synthesis of research comparing the benefits and harms of different interventions to prevent, diagnose, treat, and monitor health conditions in 'real world' settings...about which interventions are most effective for which patients under specific circumstances."1

For many reasons, CER has long been a taboo topic in America, where many of our political leaders and their innumerable influencers somehow feel that exposure to such information will inevitably lead to the rationing of health care and the curtailment of personal liberties. This is odd—one wouldn't imagine it would be possible to build a very good baseball team if the general manager didn't know the performance statistics and salaries of each of his players.

CER is certainly not a new concept among pharmacoepidemiologists—they've been performing such research for years, though it has not always been easy to translate their findings to the bedside except among some pioneering physician advocates of evidence-based medicine. But the audience for this research has changed—now there's substantial involvement by government (in the form of a $1.1 billion allocation as part of the American Recovery and Reinvestment Act), increasing interest among venture capitalists, and even, as Jeff Goldsmith recently put it,2 a burgeoning desire for such practical information by the family's Chief Health Officer, Mom. The unavoidable conclusion is that health care costs can't continue to rise as they have been—we need to get a better return on investment for our health care dollars.

CER is not such a novelty in Europe—the European Medicines Agency has previously stated the importance of conducting active comparator (i.e., CER) trials as part of marketing applications when "an established pharmacological treatment is available."3 And the UK National Institute for Health and Clinical Excellence (NICE) conducts CER, defines health care standards, and even provides an Internet tool known as NHS Evidence, described as "a Google-style device that allows NHS staff to search the Internet for up-to-date evidence of effectiveness and examples of best practice in relation to health and social care."

As discussed by an extraordinary panel of experts including Goldsmith, Michael Rawlins of NICE, Mark McClellan, and many others at a thought-provoking 2010 DIA Annual Meeting session on CER,4 we need to get past the misplaced fears that have been an obstacle to CER in the United States—we really can't afford not to. And we shouldn't depend on the government alone for this. In the words of the redoubtable Oliver Cromwell, "Necessity hath no law."

Challenges of CER

Of course, it helps the detractors that CER is also so difficult to conduct, with results that are vulnerable to criticism and easy to misinterpret, since it's inevitably limited by numerous assumptions and qualifications that may not be transparent to researchers.

For example, observational health care and claims data are subject to numerous confounding factors caused by coding variations, inaccurate or incomplete data, and other limitations and complexities that would be expected for data with the primary purpose of driving reimbursement rather than research. Although we are making great progress in identifying ways to adjust for such factors, we still have a long way to go.
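To make this concrete, here is a minimal sketch in Python of one widely used adjustment technique: inverse probability of treatment weighting based on a propensity score. The data are synthetic and the column names hypothetical; real claims analyses face far richer confounding than this toy example suggests.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "claims" records: sicker, older patients are more likely
# to receive the treatment, confounding any naive comparison
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "comorbidity_score": rng.poisson(2, n).astype(float),
})
logit = 0.04 * (df["age"] - 60) + 0.4 * (df["comorbidity_score"] - 2)
df["treated"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
df["outcome"] = (0.5 * df["treated"] - 0.05 * df["comorbidity_score"]
                 + rng.normal(0, 1, n))  # true treatment effect = 0.5

# Model P(treated | covariates) and form stabilized inverse-probability weights
X = df[["age", "comorbidity_score"]]
ps = LogisticRegression().fit(X, df["treated"]).predict_proba(X)[:, 1]
p = df["treated"].mean()
df["w"] = np.where(df["treated"] == 1, p / ps, (1 - p) / (1 - ps))

# Weighted difference in mean outcomes, balancing the measured covariates
t, c = df[df["treated"] == 1], df[df["treated"] == 0]
print(np.average(t["outcome"], weights=t["w"])
      - np.average(c["outcome"], weights=c["w"]))
```

The point is not this particular estimator: it is that adjustment for measured confounders is possible but imperfect (unmeasured confounding remains), and every such judgment must be documented if the result is to withstand scrutiny.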

Meanwhile, conducting expensive new randomized clinical trials to address CER is also unrealistic—there are simply too many questions that need to be asked and too little time and money for enough prospective trials. So we have to try to learn from existing data sources. But most existing clinical research data must be viewed through tinted glasses, because each individual study is always based on a highly specialized protocol document, filled with assumptions and constraints that must be interpreted by the analyst and are unlikely to match up across different trials—if available to the researcher at all.

Repurposing such data for CER is not trivial—even if the familiar problems of lack of standardized data structure and inconsistent semantics can be addressed.

What it will take

To conduct meaningful CER, we need good data, a place to put it, and a well-stocked workbench of effective tools to access, process, and visualize the data. We'll need lots of high-quality data from many different sources that is optimized for analysis and represents a broad, balanced population of research subjects. To get the full picture we'll need multiple kinds of data—clinical research data; spontaneous report, observational health care, and claims data; and eventually genomics data—so the data repository has to be nimble enough to accept many data types and rapidly process data updates. Since any significant results will be subject to intense scrutiny and counter-interpretation, it's essential for the data repository and the tools to have sufficient controls and to support the traceability of data transformations and the reproducibility of analysis results.
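As a toy illustration of what such traceability might look like, consider a simple append-only audit log that fingerprints a dataset before and after each transformation. This is only a sketch with hypothetical function and field names; production repositories are far more elaborate.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(records: list[dict]) -> str:
    """Stable hash of a dataset snapshot, tying results back to exact inputs."""
    return hashlib.sha256(json.dumps(records, sort_keys=True).encode()).hexdigest()

audit_log: list[dict] = []

def apply_step(records: list[dict], description: str, transform) -> list[dict]:
    """Apply one transformation and log what changed, when, and to what."""
    result = transform(records)
    audit_log.append({
        "step": description,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_hash": fingerprint(records),
        "output_hash": fingerprint(result),
    })
    return result

# Example: a unit conversion recorded as a traceable step
labs = [{"subject": "001", "glucose_mg_dl": 99.0}]
labs = apply_step(
    labs,
    "convert glucose from mg/dL to mmol/L",
    lambda rs: [{**r, "glucose_mmol_l": round(r["glucose_mg_dl"] / 18.0, 2)} for r in rs],
)
print(json.dumps(audit_log, indent=2))
```

With every step logged this way, a challenged result can be traced back to its exact inputs and transformations, and the analysis rerun to reproduce it.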

And—especially in the case of clinical research data—we also need a whole lot of metadata. With clinical trials, understanding the efficacy data so critical to CER depends on knowing how to handle the protocol-specific variations, constraints, assumptions, and individual statistical judgments made to interpret the data—which today are difficult to reproduce from the data alone.

To understand this essential context, we need a representation of the protocol content that can stand alongside the data, and tools that will interpret these qualifications and present the data in the appropriate context so that it can be reliably used, understood, and explained. At this point we have models such as BRIDG that point us in the right direction, but few real examples of protocols expressed in a structured, machine-readable form that can provide the full context.
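To illustrate the idea (a toy sketch, not BRIDG itself, with every field and value below invented), protocol context might be captured in a structure that travels with the data:

```python
from dataclasses import dataclass, field

@dataclass
class ProtocolContext:
    """Context that must travel with a trial's data for it to be reusable."""
    study_id: str
    population: str
    primary_endpoint: str
    inclusion_criteria: list[str] = field(default_factory=list)
    analysis_assumptions: list[str] = field(default_factory=list)

ctx = ProtocolContext(
    study_id="HYPO-001",
    population="adults 18-75 with type 2 diabetes, HbA1c 7.0-10.0%",
    primary_endpoint="change in HbA1c from baseline at week 24",
    inclusion_criteria=["no insulin use in the prior 3 months"],
    analysis_assumptions=["last observation carried forward for missing visits"],
)

# A cross-study tool could refuse to pool datasets whose contexts differ
# on material points such as endpoint definition or imputation rule.
print(ctx.primary_endpoint)
```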

Similarly, we need to understand the context for statistical analyses, which is being addressed by the evolving CDISC Analysis Data Model (ADaM) standard but is still in the early stages of adoption within the industry. And we must recognize that most of the data available to us today simply doesn't conform to the standards and models that are only now beginning to mature. So we have to find easier ways of making legacy data fit these standards and models.
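As a simple, hypothetical sketch of what such retrofitting involves, consider remapping a legacy lab table toward an ADaM-style layout. USUBJID, PARAMCD, AVAL, and AVISIT are actual ADaM variable names; the legacy columns and the mapping itself are invented for illustration.

```python
import pandas as pd

# A legacy lab table with ad hoc column names (all values invented)
legacy = pd.DataFrame({
    "patient_no": ["001", "001"],
    "visit_name": ["Baseline", "Week 24"],
    "hba1c_pct": [8.1, 7.2],
})

# Remap toward an ADaM BDS-style layout
adam_like = pd.DataFrame({
    "USUBJID": "HYPO-001-" + legacy["patient_no"],  # study-qualified subject ID
    "PARAMCD": "HBA1C",                             # parameter short code
    "AVAL": legacy["hba1c_pct"],                    # analysis value
    "AVISIT": legacy["visit_name"],                 # analysis visit label
})
print(adam_like)
```

The renaming is the easy part; the hard part is recovering, study by study, the semantics and derivation rules behind each legacy column.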

But one thing we absolutely must get over is fear of data. We can't control all the different political agendas or the bias or paranoia of individuals, but we can make sure the best available information and the best possible tools are at our disposal (with the essential qualifications of maintaining privacy, intellectual property, and security where appropriate). And we can make sure that the research itself is unquestionably balanced and unbiased, which may mean that the bulk of the most reliable CER is performed by researchers who are perceived as being independent of special interests.

For now, we may start by recognizing the importance of CER and putting in place the data, systems, standards, and tools we need to conduct it. And that is only possible if we all play ball.

Wayne R. Kubick is Senior Vice President and Chief Quality Officer at Lincoln Technologies, Inc., a Phase Forward company based in Waltham, MA. He can be reached at [email protected].

References

1. National Institutes of Health, American Recovery and Reinvestment Spending Plan on Comparative Effectiveness Research (CER), http://www.im.org/PolicyAndAdvocacy/PolicyIssues/Research/Documents/NIH%20CER%20Spend%20Plan.pdf

2. J. Goldsmith, "Implications of Comparative Effectiveness Research for Health Care Innovation," Session 205, DIA 2010 Annual Meeting.

3. European Medicines Agency, EU Standard of Medicinal Product Registration: Clinical Evaluation of Risk/Benefit—The Role of Comparator Studies, Doc. Ref. EMEA/119319/04, London, 21 October 2004, http://www.ema.europa.eu/pdfs/human/press/pos/11931904en.pdf

4. "Implications of Comparative Effectiveness Research for Health Care Innovation," Session 205, DIA 2010 Annual Meeting.
