Behavioral interventions aimed at reducing inappropriate antibiotic prescribing were largely successful, but effects waned during the year after the interventions were stopped, according to a research letter published online October 10 in JAMA.
Although inappropriate prescribing rates mostly remained lower than in clinics that received no intervention 12 months after the interventions ended, the authors note the possibility that “[p]ersistence of effects might diminish further as more time passes.”
“These findings suggest that institutions exploring behavioral interventions to influence clinician decision making should consider applying them long-term,” Jeffrey A. Linder, MD, from Northwestern University Feinberg School of Medicine in Chicago, Illinois, and colleagues write.
Researchers randomly assigned 47 primary care practices to receive one, two, three, or no behavioral interventions for 18 months, starting between November 2011 and October 2012. The practices, located in Boston, Massachusetts, and Los Angeles, California, included 248 clinicians who received education on antibiotic prescribing guidelines. Researchers collected data on the clinicians’ baseline antibiotic prescribing habits during the 18 months before any interventions began and continued collecting data until 12 months after the interventions ended.
In one intervention, clinicians received monthly emails showing how their rates of inappropriate antibiotic prescribing compared with those of the clinicians with the lowest rates. The other interventions were an “accountable justification” prompt in the electronic health record (EHR) that required providers to document why they were prescribing an antibiotic for an acute respiratory infection, and an EHR prompt that suggested nonantibiotic alternatives when a provider tried to prescribe an antibiotic for an acute respiratory infection.
The researchers compared inappropriate antibiotic prescribing rates among the clinicians for adult patients with nonspecific upper respiratory tract infections, acute bronchitis, and influenza. In the original trial, the accountable justification prompts and the peer comparison emails led to significantly lower rates of inappropriate antibiotic prescribing.
This follow-up study examined how well the effects of each intervention persisted during the 12 months after the interventions were stopped, relative to clinics that received no interventions, adjusting for individual clinician effects. The researchers also excluded data from five providers who left the study.
The baseline period before interventions began included 14,753 visits for acute respiratory infections in which prescribing antibiotics would have been inappropriate. A total of 16,959 such visits occurred during the intervention and 7489 occurred during the 12 months after the intervention.
Postintervention, inappropriate antibiotic prescribing dropped from 14.2% to 11.8% in practices without interventions but increased from 7.4% to 8.8% in those that received the EHR intervention recommending alternatives to antibiotics. The difference between these changes was not statistically significant (3.8 percentage points; 95% confidence interval [CI], −10.3% to 17.9%; P = .55).
Inappropriate prescribing increased from 6.1% to 10.2% in practices that received the EHR prompts for accountable justification, a significant 6.5 percentage point difference from control clinics (95% CI, 4.2% to 8.8%; P < .001). Similarly, inappropriate prescribing increased from 4.8% to 6.3% in practices that received the peer comparison intervention, a significant 3.9 percentage point difference from control practices (95% CI, 1.1% to 6.7%; P < .005).
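As a rough back-of-the-envelope check (not the authors’ adjusted analysis), the percentage point differences reported above can be approximated as the change in each intervention group minus the change in the control practices; the short Python sketch below illustrates that arithmetic using the figures quoted in this article.

    # Illustrative arithmetic only; the published estimates come from the
    # authors' adjusted models, so treat this as an approximation.
    def diff_in_changes(group_before, group_after, control_before, control_after):
        # Change in the intervention group minus change in the control group.
        return (group_after - group_before) - (control_after - control_before)

    control = (14.2, 11.8)        # control practices: before, after (% inappropriate)
    justification = (6.1, 10.2)   # accountable justification prompts
    peer = (4.8, 6.3)             # peer comparison emails

    print(round(diff_in_changes(*justification, *control), 1))  # 6.5 percentage points
    print(round(diff_in_changes(*peer, *control), 1))           # 3.9 percentage points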
Nevertheless, practices that had received the peer comparison intervention still had lower inappropriate prescribing rates 12 months after the intervention compared with those of control clinics (P < .001). No significant difference in overall inappropriate prescribing rates existed between control clinics and those with the accountable justification intervention.
The authors surmise that the effects of the peer comparison intervention may have persisted longer because it did not rely on EHR prompts, “whose absence might have been quickly noted by clinicians.”
“Peer comparison might also have led clinicians to make judicious prescribing part of their professional self-image,” the authors explain. “Although these findings differ from a prior antibiotic-prescribing feedback intervention that did not have persistent effects, peer comparison–induced improvements have been durable in other nonmedical domains.”
The research was funded by the National Institutes of Health, the National Institute on Aging, and the Agency for Healthcare Research and Quality, with additional support for data collection from the Patient-Centered Outcomes Research Institute. One coauthor reports receiving grant funds from Pfizer and personal fees from Omron Healthcare, and one coauthor reports receiving consulting fees from Precision Health Economics. The SHEA Antimicrobial Stewardship Research Workshop Planning Committee, for which Dr Linder received an honorarium, was funded by Merck.
JAMA. Published online October 10, 2017. Abstract