Death rates not best judge of hospital quality, researchers say

Published: Wednesday, April 21, 2010 - 04:43 in Health & Medicine

Inpatient mortality rates, used by organizations to issue "report cards" on the quality of individual U.S. hospitals, are a poor gauge of how well hospitals actually perform and should be abandoned in favor of measures that more accurately assess patient harm and the care being provided, patient safety experts argue in a paper out today.

Peter Pronovost, M.D., Ph.D., professor of anesthesiology and critical care medicine at the Johns Hopkins University School of Medicine, and Richard Lilford, Ph.D., professor of clinical epidemiology at the University of Birmingham in England, write in the British Medical Journal that hospital mortality rates count all inpatient deaths, not just those that could have been prevented with quality care. Because many patients are already too sick to be saved by the time they are admitted, the researchers argue, hospital mortality rates shouldn't be the factor that determines whether hospitals are "good" or "bad."

Only one of every 20 hospital deaths in the United States is believed to be preventable.

Hospital standardized mortality ratios, which Pronovost and Lilford looked at specifically in their paper, identify hospitals where more patients die than would be expected (bad hospitals) and hospitals where fewer patients die than expected (good hospitals). The ratio — used by the government of the United Kingdom and by some nonprofits, state health departments and individual hospitals in the United States — has been criticized for failing to separate preventable from inevitable deaths, largely because it does not adequately account for the mix of patients seen at individual hospitals or for variations in care within hospitals themselves.
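As a rough illustration (the numbers here are hypothetical, not drawn from the paper), a standardized mortality ratio simply compares the deaths a hospital actually recorded with the number a statistical model predicts for its mix of patients:

    HSMR = (observed deaths / expected deaths) × 100

A hospital with 120 observed deaths against 100 expected would score 120 and be flagged as "worse than expected," even if none of the extra deaths were preventable and the gap came entirely from sicker patients or an imperfect risk model.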

Hospital mortality seems like the most obvious way to judge a hospital's care. It is easily measured, of undisputed importance to everyone and common to all hospital settings. But it does not tell the whole story, Pronovost says.

"This tool is widely used, probably because it's easy. The attitude is: It's good enough," Pronovost says. "But it's not. It's laudable to want to look at preventing deaths. But if you want to look at preventing deaths, why on earth would you look at all deaths, when it's only a small percentage that fall into that category?"

Pronovost isn't arguing against holding hospitals responsible for the quality of care they provide; just the opposite. He wants selected measures that are accurate, that target events that can actually be prevented and that have been scientifically studied. He isn't against collecting mortality data; he just thinks the rates shouldn't be the sole basis for sanction or reward. In the United Kingdom, the government uses these rates punitively. In the United States, the public may wrongly judge hospitals based on rates published on quality "report cards," although the U.S. government does not use the rates as part of its regulatory oversight of hospitals.

"The goal is to say, yes, we need to be more accountable for quality of care, but we need to be scientific in how we separate hospitals of better quality from hospitals of worse quality," he says.

Using mortality rates can mislead the public into thinking a hospital offers poor care when it does not, he says, or can falsely reassure hospitals that score well, giving them an unwarranted sense of confidence, since the rates are not meaningful. In the United Kingdom, mortality ratios vary by 60 percent among hospitals, making them an "absurd" measure of quality, Pronovost says, when only one in 20 deaths can be prevented.
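A back-of-the-envelope illustration (these figures are hypothetical, not from the paper) shows why that spread swamps any quality signal: a hospital with 1,000 inpatient deaths a year would have about 50 preventable ones if only one in 20 is preventable, so even eliminating every preventable death would lower its mortality by just 5 percent, a small fraction of the 60 percent variation already seen between hospitals.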

One yardstick by which hospitals could be better judged, he says, is the rate of bloodstream infections in hospital intensive care units, which cause 31,000 deaths in U.S. hospitals each year. Pronovost's previous research has found that these infections are largely preventable when hospitals use a simple five-step checklist proven to reduce them. That research showed that bloodstream infections at Johns Hopkins Hospital and at hospitals throughout Michigan were virtually eliminated when the checklist was followed.

Looking at some mortality rates may make sense, he says. Death rates following a heart attack or elective surgery, for example, could serve as a quality measure, since those patients are expected to survive.

He says more research is needed to determine which measures most accurately capture how well hospitals prevent needless deaths. Those measures, he says, are what should be used to judge hospital quality.

"In using mortality rates, hospitals are applauded because we've saved lives because rates are low or scolded for being above where they should be," Pronovost says. "There's no signal there. There's just a lot of noise."

Source: Johns Hopkins Medical Institutions
