In my last column, in discussing the importance of learning from defects, I pointed out the problem of identifying adverse events, near misses, and hazardous conditions. This is not a small problem. Typically, only about 10% of adverse events are reported. This means that either much effort must be expended to identify such cases by other means or many learning opportunities will be missed.

Remember that peer review is the dominant mode of event analysis in hospitals, and generic screens are the dominant method by which cases are identified for peer review. These include hospital readmission, death, unplanned return to the OR, unplanned transfer to critical care, etc. Generic screens were initially developed to identify instances of patient harm in order to test whether a no-fault medical malpractice system might be viable.(1) They have low specificity and have never been validated for use in peer review. They were, however, used in the Harvard Medical Practice Study to identify rates of harm and substandard care.(2) The IHI Trigger Tool is an updated version of this method.

In the Harvard study, 26% of all admissions were flagged by the screens. The study’s staged review process ultimately led the investigators to conclude that 3.7% of admissions were associated with patient harm and 1% with “negligence”, i.e., substandard care. In other words, reviewers had to look at 26 records to find about 4 instances of harm and 1 instance of substandard care. This is why a large proportion of hospitals perform secondary pre-review screening before assigning cases for peer review. None of this is getting us to the goal of the QI Model: to identify and act on learning opportunities to improve the quality and safety of care. To be quite frank, as a means of identifying cases for peer review, the generic screen process stinks.
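To make the yield concrete, here is a back-of-the-envelope sketch of what those Harvard figures imply per 100 admissions. It assumes, for simplicity, that the harm and negligence cases all surface from within the flagged group, which is what the staged review process implies; the variable names are mine, not the study's.

```python
# Screen yield implied by the Harvard Medical Practice Study figures,
# expressed per 100 admissions. Assumes harm/negligence cases are
# found within the screen-flagged group (illustrative, not from the study text).
flagged = 26.0       # admissions flagged by generic screens
harm = 3.7           # admissions associated with patient harm
negligence = 1.0     # admissions associated with substandard care

ppv_harm = harm / flagged              # chance a flagged chart shows harm
ppv_negligence = negligence / flagged  # chance it shows substandard care
charts_per_harm = flagged / harm       # charts reviewed per harm found

print(f"PPV of a screen flag for harm: {ppv_harm:.1%}")
print(f"PPV of a screen flag for negligence: {ppv_negligence:.1%}")
print(f"Charts reviewed per instance of harm: {charts_per_harm:.1f}")
```

Roughly one flagged chart in seven turns out to involve harm, and about one in twenty-six involves substandard care, which is why so much review effort is spent confirming that nothing happened.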

About 20 years ago, the aviation industry woke up to the problem of hazard identification and recognized that fear of reporting was poisoning efforts to improve safety. This led to the birth of aviation safety programs that granted immunity from sanctions to pilots who made good-faith safety reports. Together with the introduction of crew resource management training, these safety programs were key to the dramatic progress that followed. At least one study suggests that a non-punitive environment would be critical to replicating that success in healthcare.(3)

There is only one published example of a successful self-reporting program in a hospital.(4) It came from a department of anesthesia at an academic medical center, which was able to sustain high rates of self-reporting (90% of cases reviewed; 70% of events identifiable by any means) over several years. The authors assert that “Anesthesiologists will comply with a system of self-reporting if they understand the process, if there is institutional and departmental encouragement and support for the process, and if the process is non-punitive and can result in real improvements in patient care.”

My latest national study of peer review practices (under review for publication) found that self-reporting is beginning to be promoted more broadly. Moreover, hospitals in which the practice is taking hold are realizing the expected improvement in quality and safety.

Coming Next: How to Promote Self-Reporting

Marc T. Edwards, MD, MBA

President & CEO

QA to QI

An AHRQ Listed Patient Safety Organization


  1. Sanazaro PJ, Mills DH. A critique of the use of generic screening in quality assessment. JAMA. 1991;265(15):1977-1981.
  2. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-376.
  3. Harper ML, Helmreich RL. Identifying barriers to the success of a reporting system. In: Advances in Patient Safety: From Research to Implementation. Vol 3. Rockville, MD: AHRQ; 2005.
  4. Katz RI, Lagasse RS. Factors influencing the reporting of adverse perioperative outcomes to a quality management program. Anesth Analg. 2000;90:344-350.