A push at the national level to fund more comparative effectiveness research could mean more information for otolaryngologists about which treatments work best for a given condition and in which patients.
Explore This Issue: April 2010
The American Recovery and Reinvestment Act (ARRA), passed in February 2009, includes $1.1 billion in funding for this research, which compares the effectiveness of pharmaceuticals, procedures or devices for the same condition. The health system reform bill recently signed by President Obama would create comparative effectiveness research trust funds.
The goal of comparative effectiveness research is to provide physicians and patients with practical information they can use to decide the best treatment option, said Patrick Conway, MD, executive director of the Federal Coordinating Council for Comparative Effectiveness Research. “As a physician, I often don’t have good evidence to guide a patient,” Dr. Conway, a practicing pediatrician, said. “For patients, often the comparative evidence isn’t in existence and so they don’t have the information to make an informed decision.”
Comparative effectiveness research was already being conducted by several agencies, including the Agency for Healthcare Research and Quality (AHRQ), but the ARRA increased funding substantially. AHRQ funding for this research, for example, was $30 million in FY 2008; the act provided $300 million.
In June 2009 the council and the Institute of Medicine (IOM) unveiled their priorities for this research, as was required by the ARRA. In its report, the IOM recommended 100 research topics, including the effectiveness of assistive listening devices, cochlear implants, electric-acoustic devices and habilitation and rehabilitation methods for hearing loss in children and adults.
‘Real World’ Research
“There are many areas within our specialty that are ripe for comparative effectiveness research, not because there is disagreement necessarily, but because of the innovations and advances in basic science, pharmaceuticals and new technology in our field that have shown great promise,” said David Witsell, MD, MHS, research coordinator to the American Academy of Otolaryngology—Head and Neck Surgery (AAO-HNS) Foundation. “However, we need to learn more so we can ensure that effectiveness shown in studies links with actual patient profiles seen in practice.”
Both the council and AHRQ have emphasized the need for more cohort studies done in “real world” settings. The idea is not only to get a better grasp of how treatments work, but to learn if some interventions work better than others in certain patient subgroups, such as women, children, minority patients and people with disabilities, Dr. Conway said.
Involvement in a clinical research network, such as Creating Healthcare Excellence through Education and Research (CHEER), is one way otolaryngologists could prepare for and participate in comparative effectiveness research, said Dr. Witsell, principal investigator of the National Institutes of Health-funded CHEER grant. The network focuses on practice-based clinical research in hearing and communicative sciences.
Some critics argue that comparative effectiveness findings will create access problems for groups of patients whose best treatment differs from the one found most effective in the general population. The opposite is the case, Dr. Conway said. The goal is to include patient subgroups in studies and to report differences in treatment effectiveness between patient populations in the findings, he said. The House and Senate health reform bills mandate that the research take patient subpopulations into account and communicate effectiveness differences in the findings.
“A treatment that works well for a small group of patients with identified characteristics would still be seen as an effective option,” Dr. Witsell said.
Some critics claim that comparative effectiveness research will lead to “cookbook” medicine or rationing of expensive care, notes a June 2009 letter to the Senate signed by 62 medical associations, including the AAO-HNS. “That is not its purpose,” the letter reads. “Its purpose is to help physicians and patients make smart choices based on the clinical value of varying treatments and interventions, the unique needs and preferences of individual patients and our societal commitment to reduce disparities in care.”
The controversy sparked in November 2009 by changes in U.S. Preventive Services Task Force recommendations on screening mammograms offers a cautionary tale for comparative effectiveness research. The task force dropped its recommendation of regular mammograms for women under 50 and stated that, for this group, the decision to begin regular screenings “should be an individual one and take patient context into account.” The shift caused an uproar in political circles and among some advocacy groups.
The backlash was “definitely a sobering experience for those of us who have been pushing comparative effectiveness,” said Gail Wilensky, PhD, senior fellow at Project Hope, an international health education organization. “It suggests that timing is important, as is not being tone-deaf to the political environment and explaining why you’re going against conventional wisdom or previous recommendations.”
Another concern is that public and private payers will use the findings to limit coverage and reimbursement for medical options deemed less effective. Under the ARRA, public payers are barred from using the evidence for payment or coverage decisions, Dr. Conway said. In addition, the health reform bill states the findings should not be construed as coverage or reimbursement mandates.
But Dr. Conway offered this caveat: “We’ll never be able to control every private payer. But to take it a step further and say we shouldn’t even be producing this information is a false argument.” Proponents argue that insurers now often base coverage decisions mainly on cost; comparative effectiveness findings would at least allow plans to make clinically informed choices, Dr. Conway said.
Dr. Wilensky said insurers should not stop covering or paying for approved interventions. Using the findings to create value-based coverage and reimbursement would be more sensible, she said. “You reimburse more and you have lower co-payments for the stuff that really seems to have a beneficial effect, and the rest you can make more expensive,” she said.
The assumption is often that newer, more expensive treatments will prove less effective than older ones. According to the medical association letter: “While comparative effectiveness research may identify some low-cost treatments that yield better outcomes than high-cost alternatives, the reverse is also true: Comparative effectiveness research analyses might persuade cost-conscious payers, purchasers and patients that an expensive new medical innovation offers better value than current therapies.”
Putting It into Practice
Although agencies are already using the new funding, it could take years for studies to yield findings. Even so, a major focus is to disseminate the evidence quickly and clearly to practicing physicians, Dr. Conway said, adding that when findings are released, medical associations will have to decide whether to create evidence-based practice guidelines.
Geri Aston is a health policy writer based in Chicago.