My research has shown that roughly 20% of hospitals make significant changes in their clinical peer review program’s structure, process or governance every year. Unfortunately, this high rate of change has not produced substantial improvement. Much of the recent change has concentrated on replicating a multi-specialty review process while neglecting factors central to the evidence-based QI Model. You can refresh your understanding of the QI Model through a brief online program self-evaluation questionnaire at https://qatoqi.com/php/set.php.
In fact, multi-specialty review has become something of a fad. Although it has successfully challenged the claim that only like specialists can be considered peers, there is no published literature validating the concept. While there may be merit to this design, particularly in terms of reviewer participation, standardization and the ability to address clinician-to-clinician issues, multi-specialty review is not in itself sufficient to close the gap in program performance and eliminate the “Blame Game.”
My most recent project, the Longitudinal Clinical Peer Review Effectiveness Study, sought to clarify the value of multi-specialty review. The full “peer-reviewed” report will be published later this year in the Journal of Healthcare Management. It turns out that the discussion is what matters most; the composition of the committee is secondary. No benefit of multi-specialty review could be demonstrated independent of the other factors in the QI Model. Many programs still do not hold committee discussions prior to making final peer review decisions. That’s a mistake.
Back in 2007, when I launched my first national study, there was a comparable consultant-driven trend toward centralization of the review process, one of even more dubious value for improving the quality and safety of care. It turns out that there is a significant downside to centralized review in larger hospitals: lower case review volume.
In the 2007 study, case review volume below 1% of hospital admissions was associated with lower perceived effectiveness. Programs with centralized or partially centralized review activity were less likely to report review volume over 1% than those with decentralized review activity, with an odds ratio [95% CI] of 0.46 [0.24-0.87]. With rates of preventable harm running about 6%, low review volume means that fewer opportunities for improvement are being addressed. Centralized or partially centralized review activity is associated with greater perceived quality when controlling for case volume, but not when also controlling for the degree of process standardization.
I believe the explanation for this is relatively simple: many hands make light work. A typical review committee meets for only 1-2 hours a month. It is not possible to conduct a meaningful discussion of more than 10 or so cases in that time. So, on average, one committee meeting 10 times a year might be expected to review roughly 100 cases. Thus, if targeting a 2% review rate, any hospital with at least 100 staffed beds needs either multiple peer review committees or an effective strategy for managing case volume.
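To make the arithmetic explicit, assume a ballpark figure of roughly 50 admissions per staffed bed per year: a 100-bed hospital would then see about 5,000 admissions annually, and a 2% review rate works out to about 100 case reviews per year, which is the full capacity of a single committee meeting 10 times a year at 10 cases per meeting.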
Marc T. Edwards, MD, MBA
President & CEO
QA to QI
An AHRQ Listed Patient Safety Organization