Applied Clinical Trials
Demands for added information to inform treatment decisions may drive up research costs for sponsors.
Health policy experts are clamoring for more evidence on the value of medical products and procedures and how they compare in safety, efficacy, and cost. Proposals are proliferating to address this need, primarily by establishing a new research entity to produce more information on the value of different health care treatments and services. Such initiatives raise questions about who should conduct comparative analysis, who will pay for it, what standards will apply, and what should be done with the resulting information.
Pharma companies are leery of proposals to establish a multibillion-dollar agency authorized to conduct prospective clinical studies designed to help Medicare and other payers make product coverage and reimbursement decisions. Sponsors also have good reason to fear that added study requirements could raise new hurdles for bringing new drugs to market.
Yet the comparative research approach may be preferable to price controls, curbs on health plan benefits, or limits on access to new technologies. The stated goal of comparative effectiveness (CE) research is to ensure appropriate and high-quality patient care, but policymakers also regard such studies as important tools for containing health care spending. U.S. outlays for health care are skyrocketing, and insurers and government health programs are looking to adopt evidence-based medicine and pay-for-performance models to gain more control over spending. These remedies require credible information on what works best for treating patients and which new technologies merit higher price tags.
The CE research campaign is moving forward on several fronts. Legislation to reauthorize the Prescription Drug User Fee Act and expand drug safety oversight would enhance the Food and Drug Administration's authority to require postmarket comparative studies on drugs and biologics. Another bill would cancel tax deductions for advertising a new drug unless the sponsor conducts comparative effectiveness research. Senate Finance Committee chairman Max Baucus (D-Mont.) backs efforts to develop a priority list of government-funded comparative clinical effectiveness studies, a proposal that was included in legislation allowing the government to negotiate drug prices but has stalled in the Senate.
Industry associations and advisory boards offer additional proposals. The Medicare Payment Advisory Commission (MedPAC) is recommending in its June annual report that Congress establish an independent entity to sponsor "credible research on comparative effectiveness of health care services" for public dissemination.
America's Health Insurance Plans (AHIP) supports establishing a research entity to evaluate and compare the safety, efficacy, and cost-effectiveness of new and existing medical technologies. The insurers clearly intend for such information to help Medicare and other public and private health programs make reimbursement decisions on new technologies. These and other proposals will be discussed further by the Institute of Medicine Roundtable on Evidence-Based Medicine, established to provide a neutral forum on how to improve evidence and its use.
This debate has accelerated since health policy expert Gail Wilensky of Project Hope made the case for a new multibillion-dollar Center for Comparative Effectiveness in an article published in Health Affairs in November 2006. She envisions an entity with sufficient resources to fund prospective clinical trials on the comparative effectiveness of medical treatments and procedures. This organization would assess alternative therapies and procedures to inform payer decision making, but would not address costs directly.
One model for CE enthusiasts is the United Kingdom's National Institute for Health and Clinical Excellence (NICE), a government-funded, independent agency that reviews clinical and outcomes data to compare new medical technologies. The results, which include cost-effectiveness assessments, help national health officials establish clinical guidelines and make coverage decisions, but the process often takes more than a year and can delay patient access to new treatments.
Concerns about the safety and efficacy of Cox-2 inhibitors, arterial stents, and implantable cardioverter defibrillators, among other drugs and medical products, also are fueling the push for comparative assessments. Additional postmarket studies would help identify unsafe drugs, says AHIP, which supports legislation that strengthens FDA's authority to require drug labeling changes and postmarket clinical trials.
The prospect that CE research can help hold down health care spending without broad cost-cutting requirements for drugs and health care has generated support from some pharma companies. Johnson & Johnson executives Kathy Buto and Peter Juhn note in another Health Affairs article that comparative studies can help establish the value of medicines, enable marketers to differentiate products, support expanded use of certain therapies, and even justify more streamlined approvals and premium reimbursement for "clinically meaningful improvements." Information on good clinical choices, they point out, may be the "best antidote" to government-set prices.
The danger is, of course, that effectiveness studies would be used to limit coverage and treatment options to low-cost products. Moreover, data demands could impose duplicative research requirements on sponsors. Prospective studies cost hundreds of millions of dollars and are vastly different from relatively low-cost retrospective data reviews, pointed out health economist Bryan Luce of United BioSource at a March seminar on comparative effectiveness research sponsored by the Center for Medicine in the Public Interest (CMPI).
Luce described the tension between FDA officials and clinicians, who insist that data must come from randomized, blinded, controlled clinical trials, and health care providers and payers, who object that such trials do not produce the "real world" information found in observational studies based on registries and claims databases. While health researchers may place high value on prospective data collection, FDA may not regard such observational data as sufficiently valid to support promotional claims by drug companies, Luce noted.
The operation and funding of any new research center could address these issues. Current comparative information often is "incomplete, misleading or misinterpreted," Buto and Juhn observe. Such shortcomings, they say, can be avoided by establishing an effectiveness center that is independent from payers, maintains transparent processes, invites all stakeholders (including manufacturers) to participate, and coordinates research with other government agencies.
One contentious issue is whether a new CE center should be part of the Agency for Healthcare Research and Quality (AHRQ), which currently heads up federal efforts to obtain clinical effectiveness data. Such an initiative would build on AHRQ's network of research centers that review published medical literature and analyze clinical data from studies to support recommendations on effective clinical practice. Even though the Medicare Modernization Act of 2003 boosted funding for AHRQ effectiveness research, the proposed CE center would represent a huge expansion.
Of course, it's not at all clear that Congress will provide even $100 million to launch such an initiative. A Congressional backlash against health technology assessment in the 1980s eliminated Congress' Office of Technology Assessment and nearly killed the predecessor of AHRQ, notes Luce. Legislators also instructed Medicare officials to keep costs and comparisons out of their assessments of new technologies. Medicare's Coverage with Evidence Development policy, designed to obtain additional information on newly approved drugs and medical products, also has drawn complaints about linking new research requirements too closely to reimbursement decisions. CMS is revising the policy to make studies and results more transparent, but the changes raise concerns about whether Medicare will cover costs for seniors enrolled in such studies.
To fill the CE information gap, a number of organizations have launched technology assessment programs outside Washington. Oregon's Drug Effectiveness Review Project (DERP) reviews clinical trial data across therapeutic drug classes to inform coverage decisions by managed care plans and state Medicaid programs. Consumers Union uses the DERP assessments for its Best Buy Drugs program. The Academy of Managed Care Pharmacy (AMCP) recommends that drug marketers include such comparative and economic data in the dossiers they submit to formulary committees.
A new CE center would centralize and coordinate these and other research efforts, set standards for comparative analysis, and oversee dissemination of vetted results to providers, payers, and patients. At the same time, ready access to data from large health plans and government programs would facilitate CE study. A growing awareness of great variations in patient response to treatments and the desire to link provider payments to quality measures speak to the need for comparative health care information. Sponsors will want to have a say in how the questions are framed and how the resulting data is used.
Jill Wechsler is the Washington editor of Applied Clinical Trials, (301) 656-4634 firstname.lastname@example.org