Few outcome measures being used or considered for federal accountability programs are appropriate and valid for that use, according to a study published online August 15 in the Annals of Internal Medicine. Of 10 measures evaluated using four key criteria, only three fulfilled all criteria, and half of the measures met one or no criteria.
“During the past few years, federal public reporting and payment programs have focused less on measuring processes and more on measuring outcomes, such as readmission, health care-associated infections, and mortality,” write David W. Baker, MD, MPH, executive vice president of The Joint Commission, and Mark R. Chassin, MD, MPH, president and chief executive officer of The Joint Commission in Oakbrook Terrace, Illinois. “[O]utcome measures must be chosen carefully to ensure that the outcomes can be influenced by providers and that differences in outcomes are attributable to disparities in the care provided rather than the result of variations in the populations of patients seen.”
Distinguishing the signal from the noise and bias is challenging, they write. Dr Baker and Dr Chassin discuss four criteria recommended to ensure outcome measures are appropriate for accountability assessment and quality control evaluations.
Having adopted these criteria, the Joint Commission will use an outcome measure in accreditation, certification, or public reporting programs only if it meets all four:
- "Strong evidence should exist that good medical care leads to improvement in the outcome within the time period for the measure."
- "The outcome should be measurable with a high degree of precision."
- "The risk-adjustment methodology should include and accurately measure the risk factors most strongly associated with the outcome."
- "Implementation of the outcome measure must have little chance of causing adverse consequences."
In addition to these criteria, a focus on attribution is necessary, although often neglected, in deliberating accountability measures, write Helen Burstin, MD, MPH, and Shantanu Agrawal, MD, from the National Quality Forum, and Amir Qaseem, MD, of the American College of Physicians, in an accompanying editorial.
“Aspirational measures that push health care providers beyond their usual sphere of influence should be fully vetted across clinicians, providers, payers, and purchasers,” the editorial authors write. “We should prioritize outcome-focused measures that reflect care spanning clinicians, settings, and time.”
They add that some evidence-based strategies for choosing measures “can and should consider the social determinants of health that dramatically influence health outcomes.”
Dr Baker and Dr Chassin applied their four criteria to 10 outcome measures used in (or proposed for) hospital accountability programs.
Of those measures, two met all four criteria: 30-day mortality for coronary artery bypass graft surgery and surgical site infection within 30 days (National Surgical Quality Improvement Project). A third measure, patient surveys of change in physical function and pain 12 months after joint replacement surgery, meets all four criteria only if the response rate is high; with a low response rate, it would not meet the precise-measurement criterion.
In addition, measuring new central line-associated bloodstream infections (CLABSI; National Healthcare Safety Network) meets all the criteria; however, concerns about hospital adherence to National Healthcare Safety Network protocols call into question the accuracy of reporting. Dr Baker and Dr Chassin also note that "the adequacy of the claims-based risk-adjustment methodology is unclear" for CLABSI reporting.
“Judging whether an outcome measure is adequate often is more subjective and nuanced than evaluating process measures,” the authors write. “Some measures may require hybrid methods in which some data are collected electronically and some by chart abstraction.”
The authors note that 30-day mortality measures for chronic obstructive pulmonary disease, heart failure, stroke, and pneumonia meet only one or two of the criteria. In addition, a high likelihood of surveillance bias ruled out tracking venous thromboembolism (VTE) during hospitalization as a valid measure, especially considering that "institutions with higher rates of prophylaxis have higher VTE rates," they write.
The findings suggest it is necessary to reassess which outcome measures are being used and whether they should continue, the authors write. They propose that many, perhaps most, of the other 30 or so outcome measures presently in use would also fail to meet all four criteria, for reasons similar to those that disqualified the measures they assessed.
Although one of the two patient-reported outcome measures (PROMs) the authors examined met all four criteria, they note that PROMs "are challenging to assess because of low response rates and lack of a clear cutoff for what is considered an adequate response rate." "The response rate alone does not assure that performance ratings are unbiased, and measures with low rates may be unbiased," they write. They recommend empirical evaluations of individual PROMs to assess their validity and reliability as outcome measures.
In their editorial, Dr Burstin and associates note that a “solid foundation” already exists on which standards and methodologies for PROMs can be built.
“The [National Quality Forum’s] seminal work in this area is driving the field forward to a better understanding of how best to structure patient-reported outcome measures, capture meaningful information for patients as well as providers, and lay a solid foundation for the use of these measures for accountability,” they write. “Given the critical importance of these measures, we need to rapidly explore and adapt to novel methods to capture the patient voice, including the use of computer-adapted technology.”
Perhaps the most difficult job in assessing outcome measures, Dr Baker and Dr Chassin suggest, is determining what risk-adjustment methods are appropriate.
“We believe that the gold standard for assessing a risk-adjustment methodology is to compare the risk factors in the model with the true prognostic factors for the outcome that have been identified in detailed clinical epidemiology studies,” they write.
The authors also noted the impossibility of predicting unintended consequences with certainty but said that measure developers, policymakers, and stakeholders should collaborate to identify plausible ones.
The editorial authors agreed, calling for “[f]eedback from the front line…to assess whether measures are driving improvement without unintended harm to patients.”
The authors have disclosed no relevant financial relationships. Dr Burstin reports contractual support from the Centers for Medicare & Medicaid Services. The other editorialists have disclosed no relevant financial relationships.
Ann Intern Med. Published online August 8, 2017.