Jill Wechsler is ACT's Washington Editor
PCORI, methods panel to set policies for comparing drugs and medical products and practices.
A new federal comparative effectiveness research (CER) program is poised to invest some $500 million a year in research on how to prevent, diagnose, treat, monitor and manage disease, an initiative that promises to both enhance the nation's healthcare system and curb unwarranted spending. Medical products makers and some patient groups are concerned that government-funded studies will steer health coverage towards low-cost remedies and away from innovative products that are more expensive. Personalized medicine advocates are pressing for CER to consider treatment effects on patient subpopulations, including minorities, children, the elderly, and individuals with uncommon health problems—not just what works in the average patient. Yet, designing studies able to detect such differences is tricky and may increase the scope and cost of research.
The government met a September 23, 2010 deadline for naming a board of governors for the new Patient-Centered Outcomes Research Institute (PCORI), as required by the Affordable Care Act (ACA) enacted last March. Now the panel has to set the national CER agenda, establish standards and methods for CER studies, and develop programs to disseminate results to practitioners and the public. Gail Wilensky, Senior Fellow at Project Hope, said she's "cautiously optimistic" about the progress so far, but acknowledged at a recent CER summit in Washington that the program remains a "very fragile concept" that "a lot of people still want to torpedo."
Comparative effectiveness research has been going on for years, as payers and public health agencies have sought to identify more effective—as well as inappropriate and harmful—medical therapies and practices. These initiatives have focused on evaluating the costs and effectiveness of new drugs and medical technology, as with the United Kingdom's National Institute for Health and Clinical Excellence (NICE), the Blue Cross/Blue Shield technology assessment program, and the University of Oregon-based Drug Effectiveness Review Project (DERP), which evaluates medicines for state Medicaid programs and other payers.
In establishing the Medicare drug benefit in 2003, Congress authorized an expanded CER program at the Agency for Healthcare Research and Quality (AHRQ) to produce more systematic reviews of treatments for Medicare beneficiaries. The federal stimulus legislation enacted in 2009 (American Recovery and Reinvestment Act, or ARRA) dramatically advanced federal support for CER by providing $1.1 billion for the Department of Health and Human Services (HHS) to set priorities and support CER projects and infrastructure development through AHRQ and the National Institutes of Health (NIH).
This year's health reform legislation built on ARRA by establishing PCORI as an independent, non-profit organization with resources to support clinical trials and outcomes studies to compare treatments for common medical conditions. By 2014, this non-governmental institute is slated to have a $500 million annual budget, funded largely by a 1 percent tax on health insurance premiums—a strategy designed to insulate the program from the annual Congressional appropriations process and to provide more stability and predictability.
The PCORI board members, who were announced by the Government Accountability Office (GAO) in late September, represent payers, providers, patients, and industry, with an emphasis on women and minorities. Francis Collins, Director of NIH, and AHRQ director Carolyn Clancy are on the panel, along with three representatives of pharmaceutical and medical device companies.
Nearly as important as naming the PCORI board is GAO's assignment to establish a PCORI methodology committee to define appropriate CER study designs and methods. The panel will include representatives from NIH and AHRQ, as well as academics and industry scientists, and will weigh criteria for study validity, generalizability, feasibility, and timeliness in a report to Congress that is due in only 18 months. Selection of appropriate comparators and determining strengths and weaknesses of observational studies vs. randomized controlled trials (RCTs) are key tasks, as is the goal of linking the many diverse guidelines and methods set by regulators and various payers through common definitions for the many CER activities.
An important consideration for sponsors is whether the demand for more information on how drugs and clinical treatments work in real-world settings will expand the scope of data needed to bring new drugs to market. The Food and Drug Administration does not require comparative or cost information to approve a new therapy, although some foreign regulatory authorities do. Many sponsors now include comparative and clinical-use measures in preapproval trials to meet payer demands, and decisions on advancing from Phase II to Phase III studies increasingly weigh the feasibility of generating evidence of product value during development. Adoption of CER research standards by PCORI is likely to shape study processes and standards for pharma and the broader research community.
A hot-button issue is whether valid comparative information can be provided through reviews of existing studies and meta-analyses of known evidence, which generally are less costly and can be done faster than RCTs. Large, long-term observational studies that follow patients over several years are more feasible with the growth of health system databases that can provide patient treatment information both retrospectively and prospectively. And FDA is requesting more postapproval outcomes studies to track safety and efficacy over time.
However, Robert Temple, Deputy Director for Clinical Science at the Center for Drug Evaluation and Research (CDER), has long raised concerns about relying on comparative studies to document drug efficacy. He explained at the Drug Information Association annual meeting in June that it's hard enough to detect differences in efficacy between a test drug and placebo even in a well-controlled RCT, and even more difficult to compare multiple treatments or to demonstrate product superiority. Superiority studies, Temple said, are often clouded by designs with too-low comparator doses, less healthy patient populations, and biased endpoints; he added that the lack of randomization and the potential for bias in meta-analyses make such analysis "treacherous." Hans-Georg Eichler, Senior Medical Officer at the European Medicines Agency, was more positive about the value of outcomes studies and described a push by regulators to require more "relative effectiveness" data for sponsors to obtain market authorization. If regulatory and payer pressures prompt more observational studies, though, Temple wants FDA to help set the standards for doing them correctly.
A related concern for CDER Director Janet Woodcock is that the demand for more RCTs to address clinical effectiveness will tax the capacity of the nation's clinical research system. Pharma companies are conducting more and more clinical trials outside the United States because of a lack of investigators and study subjects at home, Woodcock explained at a CER briefing sponsored by the Center for Medical Technology Policy (CMTP) in July. To evaluate how therapies affect patients in the real world, "we need to enable community doctors to join the research enterprise," she advised, acknowledging that a complex consent process, privacy issues, and inadequate information technology systems discourage such a move.
Whether more comparative research will actually limit healthcare spending remains to be seen. Analysts project that PCORI will reduce federal healthcare spending by about $3 billion over 10 years—just about what the government will spend on the CER program. And that calculation assumes that more comparative information will lead to changes in physician practice and in patient choice.
Potential savings are curbed further by Congress' stipulation that Medicare cannot use CER to establish cost-effectiveness thresholds, set practice guidelines, or make coverage or payment recommendations, as done by NICE. Even so, private insurers and payers are free to tap CER evidence in their coverage decisions, as they have done for years, and more outcomes studies will support efforts by payers to negotiate lower prices and steer consumers toward higher-value care options.
At the same time, efforts to limit or curtail treatment choices will remain difficult and will require a very high threshold of evidence. "CER is not a panacea or a silver bullet," stated Kavita Patel, Director of Health Policy at the New America Foundation, at the CER summit. She and others advise PCORI to identify some "low hanging fruit" on the CER priority list that can demonstrate the value of comparative research to the public in a short period of time.
Jill Wechsler is the Washington editor of Applied Clinical Trials, (301) 656-4634, firstname.lastname@example.org