Annotated Bibliography

Vitez TS. A model for quality assurance in anesthesiology. J Clin Anesth. 1990;2(4):280-287.
Describes four years' experience with a city-wide program of anesthesia event monitoring and error analysis that examined both system process and individual performance. Charts were audited to assess the adequacy of the event-reporting mechanism.
Revicki DA, Klaucke DN, Brown RE, Caplan RA. Reliability of ratings of anesthesia’s contribution to adverse surgical outcomes. QRB Qual Rev Bull. 1990;16(11):404-408.
Found good interrater reliability (kappa 0.6-0.7) for three different scales assessing the contribution of anesthesia to adverse surgical outcomes in relation to patient and surgeon factors.
The interrater reliability of physician ratings of anesthesia contribution to adverse outcomes was evaluated. A physician panel reviewed hospital records, anesthesia records, standard data collection forms, and, when available, autopsy reports for 28 patients experiencing severe morbidity or death within 48 hours following anesthesia for surgery. Consensus among reviewers about the contribution of anesthesia to adverse outcomes ranged from 82.1% to 92.9%. Kappa coefficients indicated excellent interrater reliability for the Edwards Scale and rating scale, and good interrater reliability for the percent scale.
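The kappa coefficients reported above correct raw percent agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters follows; the ratings below are hypothetical and are not data from the study.

```python
# Illustrative computation of Cohen's kappa for two raters, the kind of
# chance-corrected agreement statistic reported by Revicki et al.
# The ratings below are invented for illustration.

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: proportion of cases both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: 1 = anesthesia contributed to the outcome, 0 = did not.
rater1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(rater1, rater2), 2))  # 0.58
```

Here the raters agree on 8 of 10 cases (80%), but because each rated 60% of cases positive, chance alone predicts 52% agreement, leaving a kappa of 0.58.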
Lagasse RS, Steinberg ES, Katz RI, Saubermann AJ. Defining quality of perioperative care by statistical process control of adverse outcomes. Anesthesiology. 1995;82(5):1181-1188.
Modified the Vitez approach to develop a balanced program of event monitoring and analysis at a major teaching hospital emphasizing Deming’s approach to quality control. Achieved a high rate of self-reporting.
BACKGROUND: Through peer review, we separated the contributions of system error and human (anesthesiologist) error to adverse perioperative outcomes. In addition, we monitored the quality of our perioperative care by statistically defining a predictable rate of adverse outcome dependent on the system in which practice occurs and responsive to any special causes of variation.
METHODS: Traditional methods of identifying human errors using peer review were expanded to allow identification of system errors in cases involving one or more of the anesthesia clinical indicators recommended in 1992 by the Joint Commission on Accreditation of Healthcare Organizations. Outcome data also were subjected to statistical process control analysis, an industrial method that uses control charts to monitor product quality and variation.
RESULTS: Of 13,389 anesthetics, 110 involved one or more clinical indicators of the Joint Commission on Accreditation of Healthcare Organizations. Peer review revealed that 6 of the 110 cases involved two separate errors. Of these 116 errors, 9 (7.8%) were human errors and 107 (92.2%) were system errors. Attribute control charts demonstrated all indicators except one (fulminant pulmonary edema) to be in statistical control.
CONCLUSIONS: The major determinant of our patient care quality is the system through which services are delivered and not the individual anesthesia care provider. Outcome of anesthesia services and perioperative care is in statistical control and therefore stable. A stable system has a measurable, communicable capability that allows description and prediction of the quality of care we provide on a monthly basis.
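The attribute control charts in this study plot a proportion of adverse outcomes against limits derived from the overall rate, flagging months that fall outside as "special cause" variation. A minimal p-chart sketch follows; the monthly counts are invented for illustration, not data from the paper.

```python
# Sketch of an attribute (p-) control chart of the kind Lagasse et al.
# used to monitor adverse-outcome rates. Monthly counts are hypothetical.
import math

# Hypothetical months: (number of anesthetics, number with a clinical indicator).
monthly = [(1100, 9), (1050, 8), (1200, 11), (980, 7), (1150, 12), (1020, 6)]

total_n = sum(n for n, _ in monthly)
total_d = sum(d for _, d in monthly)
p_bar = total_d / total_n  # center line: overall adverse-outcome proportion

statuses = []
for n, d in monthly:
    p = d / n
    # Binomial standard error of the proportion for this month's sample size.
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma               # upper 3-sigma control limit
    lcl = max(0.0, p_bar - 3 * sigma)     # lower limit, floored at zero
    statuses.append("in control" if lcl <= p <= ucl else "special cause")
    print(f"n={n:4d}  p={p:.4f}  limits=({lcl:.4f}, {ucl:.4f})  {statuses[-1]}")
```

A point above the upper limit signals a special cause worth investigating; a stable process, as in the paper's conclusion, keeps every month inside the limits and so has a predictable monthly outcome rate.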
Levine RD, Sugarman M, Schiller W, Weinshel S, Lenning EJ, Lagasse RS. The effect of group discussion on interrater reliability of structured peer review. Anesthesiology. 1998;89(2):507-515.
An interesting study that not only assesses the effect of discussion on otherwise independent ratings, but also compares the use of two different models for structured review: Vitez and Lagasse. Following group presentation of a case abstract, five reviewers independently rated the case. This was followed by a period of discussion and review of the original medical records, culminating in independent re-rating. There was marked improvement in the level of interrater reliability following discussion, as measured by Sav, a kappa-like statistic.
Sanborn KV, Castro J, Kuroda M, Thys DM. Detection of intraoperative incidents by electronic scanning of computerized anesthesia records: comparison with voluntary reporting. Anesthesiology. 1996;85(5):977-987.
An important reference for the value of electronic recording of physiological variables as a means of identifying adverse events related to anesthesia. Events were strongly associated with mortality risk. Subsequent correspondence with Lagasse dealt with the issue of the reliability of hand-kept vs. automated records.
Background: The use of a computerized anesthesia information management system provides an opportunity to scan case records electronically for deviations from specific limits for physiologic variables. Anesthesia department policy may define such deviations as intraoperative incidents and may require anesthesiologists to report their occurrence. The actual incidence of such events is not known. Neither is the level of compliance with voluntary reporting.
Methods: Using automated anesthesia record-keeping with long-term storage, physiologic data were recorded every 15 s from 5,454 patients undergoing noncardiothoracic surgery. Recorded measurements of blood pressure, heart rate, arterial oxygen saturation, and temperature were electronically analyzed for deviations from defined limits. The computer system also was used by anesthesiologists to report voluntarily those deviations as intraoperative incidents. For each electronically detected incident: 1) the complete automated anesthesia record was examined by two senior anesthesiologists who, by consensus, eliminated case records with artifact or in which context suggested that the incident was not clinically relevant, and 2) the anesthesia information management system database was checked for voluntary reporting.
Results: Electronic scanning of 5,454 automated anesthesia records found 494 incidents in 473 records. Sixty intraoperative incidents were eliminated, 25 due to artifact and 35 due to context. When the remaining 434 intraoperative incidents were checked for voluntary reporting, 18 (4.1%) matching voluntary reports were found. All intraoperative incidents that were reported voluntarily also were detected by electronic scanning. Based on a 10% sample, the sensitivity rate of electronic scanning was 97.2% (35/36), and the specificity rate was 98.4% (427/434). Among 413 cases with electronically detected intraoperative incidents, there were 29 deaths (7.0%), whereas there were only 79 deaths (1.6%) among 5,041 cases without incidents (chi squared = 58.5, P < 0.001).
Conclusions: The use of an anesthesia information management system facilitated analysis of intraoperative physiologic data and identified certain intraoperative incidents with high sensitivity and specificity. A low level of compliance with voluntary reporting of defined intraoperative incidents was found for all anesthesiologists studied. Finally, there was a strong association between intraoperative incidents and in-hospital mortality.
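The summary statistics quoted in this abstract can be reproduced directly from its counts: sensitivity and specificity from the 10% validation sample, and the chi-squared statistic from the 2x2 mortality table. The sketch below uses only numbers given in the abstract.

```python
# Reproducing the Sanborn et al. summary statistics from the abstract's counts.

# Validation sample (10% sample): scanning performance.
sensitivity = 35 / 36    # true incidents flagged by electronic scanning
specificity = 427 / 434  # incident-free records correctly passed
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")

# 2x2 table: [deaths, survivors] for cases with and without incidents.
observed = [[29, 413 - 29],     # 413 cases with detected incidents
            [79, 5041 - 79]]    # 5,041 cases without incidents

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Pearson chi-squared (no continuity correction): sum of
# (observed - expected)^2 / expected over the four cells.
chi2 = sum((observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
           / (row_totals[i] * col_totals[j] / grand)
           for i in range(2) for j in range(2))
print(f"chi-squared = {chi2:.1f}")  # 58.5, matching the reported value
```

The 7.0% vs. 1.6% mortality split across a table this size yields chi-squared of 58.5 with 1 degree of freedom, far beyond the P < 0.001 threshold, which is the "strong association" the conclusions describe.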
Edbril S, Lagasse RS. Relationship between malpractice litigation and human errors. Anesthesiology. 1999;91(3):848-855.
Presents comprehensive data on their program from 1992-1994, with control charts. Identifies a disconnect between successful malpractice claims and injury due to human error vs. system error.
Lagasse RS: Anesthesia safety: Model or myth? A review of the published literature and analysis of current original data. Anesthesiology 2002;97(6):1609-17.
A good analysis of the many factors to be considered in judging whether anesthesia-related mortality has changed over time. In particular, examines the varied role of peer review in judging causation in any given case. Incorrectly cites Wilson DS et al. as having shown that discussion improves reliability of judgments of preventability. Subsequent correspondence addresses the classification of deaths attributable to limitations of supervision of residents. See related editorial.
Katz RI, Lagasse RS. Factors influencing the reporting of adverse perioperative outcomes to a quality management program. Anesth Analg. 2000;90(2):344-350.
A landmark article testifying to the value of creating an environment that supports self-reporting of adverse events. See my whitepaper on Self-Reporting for a more detailed discussion.
Quality management programs have used several data reporting sources to identify adverse perioperative outcomes. We compared reporting sources and identified factors that might improve data capture. Adverse perioperative outcomes between January 1, 1992, and December 31, 1994, were reported to the Department of Anesthesiology Quality Management program by anesthesiologists, hospital chart reviewers, and other hospital personnel using incident reports. The reports were compared for preoperative health status, severity of outcome, and associated human error. Subsequently, personnel representing the various sources were surveyed regarding factors that might affect their reporting of adverse outcomes. Of 37,924 anesthetics, 734 (1.9%) adverse outcomes were reported, 519 (71%) of which were identified by anesthesiologists, 282 (38%) by chart reviewers, and 67 (9.1%) by incident report. There was no statistically significant difference in reporting rates by anesthesiologists according to preexisting disease, severity of outcome, or presence of human error. Thirteen cases involving human error, however, resulted in disabling patient injury, with a higher rate of self-reporting for these cases (92%, P < 0.05). Rates of reporting by chart reviewers varied (P < 0.05) according to severity of patient illness and severity of outcome. Incident reports identified only 67 adverse outcomes (9.1%), but included a significantly higher percentage of the adverse outcomes involving human error (23.3%, P < 0.05). Twenty attending anesthesiologists, 15 resident anesthesiologists, 29 operating room nurses, 19 postanesthesia care unit nurses, and 6 hospital chart reviewers responded to the survey. Only the potential to improve quality of patient care influenced or strongly influenced a decision by all groups to report an adverse outcome to a peer review process.
IMPLICATIONS: Physician self-reporting is a more reliable method of identifying adverse outcomes than either medical chart review or incident reporting. Reporting by chart reviewers is biased both by the severity of outcome and severity of patient illness, whereas incident reports tend to focus on human error. All groups feel compelled to report adverse outcomes when the data may result in improved patient care.