Thursday, June 03, 2010

The JAMA heart failure outcomes study: what does it mean?

This paper, widely distorted in media reports over the last few days, deserves the status of “landmark.” The results on the outcome trends were mixed:

Between 1993 and 2006, mean length of stay decreased from 8.81 days (95% confidence interval [CI], 8.79-8.83 days) to 6.33 days (95% CI, 6.32-6.34 days). In-hospital mortality decreased from 8.5% (95% CI, 8.4%-8.6%) in 1993 to 4.3% (95% CI, 4.2%-4.4%) in 2006, whereas 30-day mortality decreased from 12.8% (95% CI, 12.8%-12.9%) to 10.7% (95% CI, 10.7%-10.8%). Discharges to home or under home care service decreased from 74.0% to 66.9% and discharges to skilled nursing facilities increased from 13.0% to 19.9%. Thirty-day readmission rates increased from 17.2% (95% CI, 17.1%-17.3%) to 20.1% (95% CI, 20.0%-20.2%; all P < .001). Consistent with the unadjusted analyses, the 2005-2006 risk-adjusted 30-day mortality risk ratio was 0.92 (95% CI, 0.91-0.93) compared with 1993-1994, and the 30-day readmission risk ratio was 1.11 (95% CI, 1.10-1.11).


What wasn’t noted in the abstract was a finding from the body of the paper (my italics):

The unadjusted 30-day mortality rate decreased by 2.1% (from 12.8% [95% CI, 12.8%-12.9%] in 1993 to 10.7% [95% CI, 10.7%-10.8%] in 2006, a 16.4% relative reduction, P < .001). *In contrast, postdischarge mortality (from discharge to the 30th day after admission) increased by 2.1% (from 4.3% in 1993 to 6.4% in 2006, a 49% relative increase, P < .001).*


That distinction was not made in the popular reports. The overall 30-day mortality reflects what happened to patients both in the hospital and after discharge; indeed, the in-hospital and post-discharge figures sum to it (8.5% + 4.3% = 12.8% in 1993; 4.3% + 6.4% = 10.7% in 2006). The post-discharge mortality is also a 30-day number, but reflects only what happened to patients after discharge.
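As a quick sanity check (a minimal sketch in Python; the percentages are the paper’s published point estimates, while the variable names and rounding are mine), the two components do add up to the headline figure, and the relative changes match those reported:

```python
# Back-of-envelope check: 30-day mortality splits into an in-hospital
# component and a post-discharge (discharge to day 30) component.
# Point estimates (percent) are quoted from the paper's abstract and body.
in_hospital = {1993: 8.5, 2006: 4.3}
post_discharge = {1993: 4.3, 2006: 6.4}

for year in (1993, 2006):
    total = in_hospital[year] + post_discharge[year]
    print(f"{year}: {in_hospital[year]}% in-hospital "
          f"+ {post_discharge[year]}% post-discharge = {total:.1f}% at 30 days")

def relative_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

print(f"Overall 30-day mortality:  {relative_change(12.8, 10.7):+.1f}%")
print(f"Post-discharge mortality:  {relative_change(4.3, 6.4):+.1f}%")
```

In other words, the 4.2-point drop in in-hospital mortality was half offset by the 2.1-point rise after discharge, which is why the headline 30-day number improved only modestly while the post-discharge component worsened sharply.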

In order to understand what these results mean, we need to look at the history of external events over the study period, 1993 to 2006. First, the study began early in the DRG era. It predated and spanned the hospitalist movement from inception through maturity. Finally, it spanned the early years of the performance measure era, and heart failure performance measures were entrenched well before the end of the study period. The data trends are most meaningfully viewed in the context of these external events.

Speaking of external events, the paper’s use of the term “fee for service” was confusing, and as a result the popular media reports misinterpreted the concept. The patients in this registry were fee-for-service Medicare beneficiaries, as opposed to being enrolled in one of the Medicare managed care plans. Hospital reimbursement itself was anything but fee for service, as it has been since the enactment of DRGs in 1984. In fact, inpatient services for Medicare patients have been bundled since that time, a fact which few in today’s health care reform debate seem to get.

With that background in mind, let’s take a look at the trends demonstrated in figure 1 of the paper, starting with the declining length of stay and the rising post-discharge mortality. Those curves were steepest over the first three years of the study, from 1993 to 1996. That may reflect the early effects of the perverse negative cost incentives of DRGs, after which the curves flattened as hospitals began to run out of ways to cut corners. According to research data from the early years of DRG reimbursement, hospitals, scrambling to survive, were discharging patients “quicker and sicker.” It would be interesting to see the curves from 1984, the year of inception of DRGs, to 1993. I suspect the trends would have been even more dramatic.

This is a good place to digress and recall my own direct observations about how the early days of DRGs played out in the state of Arkansas. Our state PRO (the agency under contract with Medicare tasked with monitoring coding, utilization and quality, and administering sanctions) quickly realized that the new negative incentives were a threat to patient safety. To offset this threat it implemented various draconian rules, including some of the very measures now under consideration in the new health care reform package, such as no pay for early readmissions. The countermeasures failed miserably and were abandoned after a few years. (Health care policy wonks, take note.)

Was there any effect of the hospitalist model? For that we need to look at the outcomes hospitalists might be in a position to influence: length of stay and in-hospital mortality. Although both measures went down, neither decline can be attributed to the hospitalist model according to these data. The hospitalist model came into being around 1996, picked up speed after that, and was going full tilt by 2006. Yet there were no changes in the rates of decline at those time points. In fact, the most precipitous declines occurred before the hospitalist model even existed.

The same can be said for heart failure performance measures: no apparent effect.
