Holding Hospitals Accountable for Improved Patient Safety: Confidential Reporting of Major Incidents

  • Journal of Medical Regulation, March 2008, 94(1): 23-29; DOI: https://doi.org/10.30770/2572-1852-94.1.23

ABSTRACT

Context: Evidence that the quality of medical care is improved by adverse event reporting is growing, but systemic improvement in patient safety remains uncoordinated and is driven by individual organizations and states rather than by any unified plan. Massachusetts requires hospitals to report several types of major incidents to two different agencies; one set of reports is available to the public, the other is kept strictly confidential. We analyzed the effect of the confidential reporting and feedback system on the number of reports, their type and whether they contained corrective actions, before and after a letter was sent in August 2003 notifying hospitals of their failure to comply with the reporting statutes. This notification clearly held hospitals accountable for improved patient safety.

Objective: To assess changes in the number of reports filed for major incidents, and the presence in those reports of corrective actions planned by the hospitals, for the year before and the two years after the compliance letter was sent.

Design, Setting and Participants: All 1,317 major incident reports (MIRs) submitted in fiscal years 2003, 2004 and 2005 to the Patient Care Assessment Division (PCA) of the Massachusetts Board of Registration in Medicine by all 94 acute care hospitals in Massachusetts were analyzed, including whether each report to the PCA contained corrective actions. MIRs were grouped by fiscal year because the compliance letter holding hospitals accountable for improved patient safety was sent in FY 2003; improvements were therefore expected from the baseline period of FY 2003 to the two subsequent fiscal years.

Main Outcome Measures: Changes in the number of major incident reports filed and in the presence of hospitals' corrective actions included in those reports.

Results: Immediately following the baseline fiscal year of 2003, there was a large and significant increase in both the number of hospitals submitting at least one report and the mean number of reports submitted per hospital, and both numbers remained at this higher level in the third year. There was also a significant increase in the proportion of reports of fatal/life-threatening incidents that contained corrective actions submitted by the hospitals, but this improvement lagged one year behind the increase in the number of reports.

Conclusions: The purpose of a mandatory adverse event reporting system, holding hospitals accountable for improved patient safety and quality, is borne out by significant improvements in reporting compliance and in the presence of corrective actions in reports to the PCA during the three-year period examined.

INTRODUCTION

Reporting of adverse patient care events as a requirement of hospital accreditation has been an essential element of quality improvement for decades.1 The 1999 Institute of Medicine (IOM) report made it clear that a reporting system forms the basis of safe patient practices whether within a hospital or larger health care organization.2 Although the IOM report is widely cited as a ground-breaking idea, other organizations in the fields of business and industry have been reporting serious events and learning from their failures for years. Reporting is the fundamental element of all safety systems and learning organizations. Encouraged by the IOM report, many states adopted statutes for “immediate and strong mandatory reporting”3 of adverse patient events. Today, half of the states have these requirements.4 However, despite these changes, there is little evidence of systematic improvement in patient safety in the U.S. health care system.5

Current reporting measures reflect a short list of processes of care and poor patient outcomes. The measures in use today are applicable only to small groups of patients and fail to answer the question whether care is safer.

Reporting is merely a tool for obtaining patient care information; in the end, the recipient of the report is less important than the response received in return. Reporting alone is a non-productive activity for facilities unless it is promptly followed with analysis and feedback. But the information collected, and its disclosure or other uses, is a complex, often emotional and otherwise multi-faceted issue.

Whether mandatory state reporting systems are effective in improving patient outcomes has yet to be determined. Since Massachusetts's dual reporting system was established following a malpractice crisis in 1986, the rationale for requiring reporting has radically changed from one of nominal oversight to accountability for patient safety. We found that substantial increases have occurred in reporting by hospitals in the confidential Patient Care Assessment (PCA) system of the Massachusetts Board of Registration in Medicine, and that the number of corrective actions proposed by hospitals has improved.

History:

In 1986, Massachusetts adopted a law establishing a second, confidential reporting system in which hospitals were required to report several classes of serious patient care events.6 In addition to filing reports with the Department of Public Health (DPH), as in the past, hospitals were also required to report incidents to the Board of Registration in Medicine Patient Care Assessment Division (PCA).

The DPH and PCA systems differ in several important respects. The most significant is that reports made to DPH are publicly available information (although in practice the information is not easily accessible), while those made to PCA are subject to strict confidentiality laws,7 are not accessible by the public, and are not legally discoverable.8 Hospitals must file reports to DPH when the incident occurs, while PCA reports are filed quarterly to provide time for hospitals to investigate the underlying causes of the event and formulate corrective actions.

It is unknown whether, as some have suggested, hospitals use the dual reporting agencies to their advantage by selectively reporting less sensitive incidents to the public DPH system, while trusting the legal protections of the PCA for more serious reports. Also unclear is whether some hospitals report only the same incidents to both agencies because of time, personnel or financial constraints.

The most critical elements for learning from a reporting system (timely feedback, usable analysis and distribution of lessons learned from adverse events) had, for the most part, been missing from the PCA until 2003, when a new commitment to a functional, successful reporting system began. In 2005, installation of a computerized database allowed systematic collection and analysis of information. More important than data acquisition, communication between the PCA and hospitals improved. The PCA began to reach out to facilities and enter into collaborative dialogues to achieve the goal of better patient safety. Since 2003, PCA's willingness to change from its old mode has encouraged facility responsiveness.

It is too early to assess the effects of the compliance letter from the PCA on direct measures of improved hospital safety. Rather, we assess whether there has been any improvement in the quantity of reports and of corrective actions planned by the hospitals. Increased reporting of major incidents could result from either better compliance with mandated reporting or a real increase in the occurrence of major incidents. Our data do not permit us to distinguish between these two alternatives; however, there is no reason to expect that a stringent effort to hold hospitals responsible for reporting and correcting major incidents would affect the incidence of the events themselves.

We asked four questions: (1) What was the effect of the increased effort on the part of the PCA program staff on the number of hospitals filing reports and on the number of reports filed by each hospital? (2) Was there any effect on the number of corrective actions included by the hospitals in their major incident reports? (3) What kinds of incidents were most commonly reported by the hospitals? (4) Which incidents were reported with the most corrective actions? Data were analyzed separately for fatal incidents and for less serious incidents.

METHODS

Study population and study design: The study sample consisted of all 1,317 major incident reports submitted by 94 acute care hospitals during fiscal years 2003, 2004 and 2005 to the PCA.

Data collection and data cleaning: The PCA health care facility report for each hospital and the comments about each major incident report (MIR) were evaluated. Each MIR was read and determinations were made as to whether essential data elements were present, for instance whether the MIR contained a statement about proposed corrective actions. Data cleaning was performed, including assurance that PCA and hospital data for each major incident report, and all possible classifications, had been coded. Additionally, responses from both the hospital and the board were categorized as to whether or not a corrective action was contained in the MIR, and a system for classifying effective corrective actions according to type was developed (Table 3).

Each major incident report was categorized by: (1) severity: "fatal" or "non-fatal", (2) whether or not a corrective action was included by the hospital in its report, and (3) which "Never Event"10 general category the incident represented and whether the incident was an exact match for a "Never Event" (see Table 1). Hospitals were categorized by non-profit status (yes or no) and by number of beds (15–99 beds, 100–199 beds, 200–299 beds, 300–499 beds and 500 or more beds) to determine whether either was a moderating variable.
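The categorization scheme above can be sketched in code. This is a minimal illustration only; the field and function names are our own (the original coding was not done this way in software, as far as the source describes), and the bed-size binning assumes every study hospital had at least 15 beds, as the categories imply:

```python
from dataclasses import dataclass

@dataclass
class MIRCategorization:
    """One major incident report's coded attributes (field names illustrative)."""
    fatal: bool                    # severity: "fatal" vs. "non-fatal"
    has_corrective_action: bool    # hospital included a corrective action
    never_event_category: str      # general "Never Event" category (Table 1)
    exact_never_event_match: bool  # incident exactly matches a "Never Event"

def bed_size_category(beds: int) -> str:
    """Map a hospital's bed count to the study's size bins (assumes 15+ beds)."""
    if beds < 100:
        return "15-99"
    if beds < 200:
        return "100-199"
    if beds < 300:
        return "200-299"
    if beds < 500:
        return "300-499"
    return "500+"

print(bed_size_category(250))  # -> 200-299
```

Coding each report into a small fixed record like this is what makes the year-over-year frequency and proportion comparisons in the Results straightforward to tabulate.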

Table 1.

Categories of “Never Events”

Statistical analysis: Categorical data were described using frequencies and proportions, and inferential statistical comparisons were performed using Fisher's exact test of two-tailed probability. Continuous measures were described by means and standard deviations. The mean number of reports submitted by each hospital was computed, and the significance of changes in these repeated measures was tested by multivariate analysis of variance, with post-hoc testing by dependent t-tests.
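As an illustration of the first of these tests, the two-tailed Fisher's exact test on a 2x2 table can be sketched in Python from the hypergeometric distribution alone (in practice a library routine such as scipy.stats.fisher_exact would be used; the function name here is ours, and the counts are the fatal-incident corrective-action figures reported in the Results, 45/134 in FY 2003 vs. 114/299 in FY 2004):

```python
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same row
    and column totals whose probability does not exceed that of the
    observed table.
    """
    n = a + b + c + d   # grand total
    row1 = a + b        # first-row total (e.g., FY 2003 fatal reports)
    col1 = a + c        # first-column total (reports with corrective actions)

    def table_prob(x):
        # probability of x in cell (1,1) given fixed margins
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = table_prob(a)
    lo = max(0, col1 - (n - row1))  # smallest feasible cell (1,1) count
    hi = min(row1, col1)            # largest feasible cell (1,1) count
    return sum(table_prob(x) for x in range(lo, hi + 1)
               if table_prob(x) <= p_obs * (1 + 1e-9))

# FY 2003 fatal reports: 45 with corrective actions, 89 without (45/134).
# FY 2004 fatal reports: 114 with, 185 without (114/299).
p = fisher_exact_two_tailed(45, 89, 114, 185)
print(round(p, 2))  # not significant at the .05 level
```

The small tolerance on the probability comparison guards against floating-point ties when deciding which tables are "at least as extreme" as the observed one.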

RESULTS

Changes in quantity of reporting major incidents: Confidential reporting to the PCA by acute care hospitals in Massachusetts increased over the three fiscal years: from 223 reports in the baseline year of 2003 to 544 reports in 2004 (an increase of 144 percent) and to 552 reports in 2005 (an increase of one percent from 2004). Thus, there was a large increase in the total number of reports submitted in the year immediately following the new measures to improve compliance, and little change between the second and third years.

Corrective actions in major incident reports: The number of major incident reports in which the hospital included a corrective action to the PCA was assessed.

As shown in Figure 2, there was a significant increase in the proportion of reports of fatal incidents with corrective actions, but only in the final year: the observed increase from 34 percent (45/134) in 2003 to 38 percent (114/299) in 2004 was not statistically significant (Fisher's exact p=.39), while the increase from 38 percent in 2004 to 47 percent in 2005 was (Fisher's exact p=.02). There were no significant differences among the three years in the fraction of corrective actions among the non-fatal reports (all p>.50).

Figure 2 illustrates that during the baseline year of 2003, the proportion of reports with corrective actions was higher for non-fatal incidents than for fatal incidents (57 percent v. 34 percent) (p=.001). By fiscal year 2005, the difference was no longer significant (55 percent, 160/292 v. 47 percent, 121/260) (p=.06). However, Figure 2 also shows that despite the significant increase in the proportion of qualifying reports for fatal incidents, fewer than half of those reports included corrective actions in the final year of 2005.

The column graph in the lower part of Figure 1 indicates that, in addition, the mean number of reports submitted by each reporting hospital increased significantly after the baseline fiscal year of 2003, both for fatal incidents (Wilks' Lambda=.66, F(2,88)=22.1, p<.001) and for non-fatal events (Wilks' Lambda=.73, F(2,82)=15.1, p<.001). For both levels of incident severity, post-hoc paired t-tests found significant increases between 2003 and 2004 (p<.001), and no significant change between 2004 and 2005 (p>.25). The mean number of incidents reported more than doubled immediately after the baseline fiscal year of 2003 and did not change significantly during the following two years.

Figure 1.

Rate and quantity of reporting of major incidents by 94 Massachusetts acute care hospitals.

Figure 2.

Change in proportion of reports with corrective actions by severity of incident.

There was no association between hospital not-for-profit status and either the number of hospitals reporting or the mean number of reports submitted. Hospital bed size was also not associated with the number of hospitals reporting, but it was significantly associated with the mean number of reports submitted in each year (median Spearman rho coefficient = .44, all p's<.05).

Thus improvements in quality of reports (recorded as the inclusion of corrective actions) of fatal incidents appeared to lag one year behind increases in the number of these reports (see Figure 1), while corrective actions in reports of non-fatal incidents remained unchanged during the three years.

Categories of events reported: Table 2 indicates the types of “never events” (see examples in Table 1) reported by hospitals, for both fatal and non-fatal incidents.

Table 2.

Frequency, Specificity and Quality of Corrective Actions (CA) in Hospital Reports by Category of Never Events (NE)

Table 3.

Frequency, Specificity and Quality of Corrective Actions (CA) in Hospital Reports by Category of Never Events (NE)

The most common categories of events reported were non-fatal surgical or procedural incidents, fatal care management incidents and fatal surgical or procedural incidents. The proportion of exact matches for "Never Events" was 19 percent. Of those specified events, 2.6 percent were surgeries performed on the wrong body part; 2.2 percent were retention of a foreign object in a patient after surgery; 0.8 percent were patient suicides or attempted suicides while being cared for in a health care facility; 5.7 percent were patient death or serious disability associated with a medication error; 2.2 percent were maternal deaths; and 1.9 percent were patient deaths associated with a fall while in a health care facility. Table 2 also indicates that the events with the highest rates of corrective action inclusion were all types of criminal events (100 percent) and fatal and non-fatal product or device incidents (80 percent and 71 percent, respectively).

DISCUSSION

PCA reporting, from its inception, has not enjoyed easy acceptance. This distrust stems from the statute placing oversight of institutional quality improvement in the agency responsible for licensing physicians. Fueled by fears of combining enforcement and quality functions within the medical board, physicians have until now been reluctant participants in this particular version of improving patient safety. Moving from this fear-based argument to a trusting interaction with the PCA is an ongoing process. Confidentiality is the linchpin of trust. Beginning in 2003, with letters and personal visits by board members to hospitals, compliance with regulations has shown improvement. Still, only about two-thirds of eligible acute care facilities report, and reporting appears to have reached a plateau. Why? There are four possible explanations: (1) reporting is an expensive, labor-intensive activity, and some facilities may not have the resources; (2) the result of reporting, the learning from the incident, is minimal, because of the feedback content from the PCA or long delays in returning reports, or both, and so not worth the effort; (3) in all practicality, there are no significant consequences for failure to report; and (4) physician buy-in, from those who make up hospital quality improvement committees, is lacking.

Our analysis does not shed any light on this crucial reporting failure. But the essential elements of a reporting system, the process and technology, including a standardized format, communication, feedback, analysis, learning, response and dissemination of lessons learned from reported events11 are being examined by PCA.

The fact that the quality and number of corrective actions proposed increased but lagged a year behind the increase in reporting seems to have a ready explanation. Early reports lacked corrective actions, but once the PCA responded with a health care facility report (HCFR), which required over a year in turn-around, hospitals better understood what the PCA expected and formatted their next reports to satisfy those expectations.

The interesting finding that only about one-fifth of major incidents met strict definitions for "Never Events" is not surprising; after all, these are serious, preventable and unambiguous adverse events. The important implication is that the vast majority of adverse events reported will need proposed actions: actions that allow health care institutions to evaluate whether they learn from the submitted reports by investigating what happened and why, devising a plan to reduce the probability that the event will recur, and at the same time identifying measures of the effectiveness of their interventions.12 PCA's purpose is to facilitate investigation, proposal and monitoring of corrective actions. Measuring PCA's effectiveness in achieving this purpose was not possible with the current dataset.

REFERENCES

  1.
  2. Institute of Medicine. To Err is Human: Building a Safer Health System. Washington, D.C.: National Academy Press, 1999.
  3. Ibid., p. 6.
  4. Rosenthal J, Booth M. "Maximizing the Use of State Adverse Event Data to Improve Patient Safety." National Academy for State Health Policy, 2005.
  5. Leape LL, Berwick DM. "Five Years After To Err Is Human: What Have We Learned?" JAMA. 2005; 293(19): 2384-2390.
  6. 243 Code of Massachusetts Regulations 3.00 et seq.
  7.
  8. Massachusetts General Law, c. 111, §205.
  9. Leape LL, Schneider EC. Personal communication of a corrective action system they devised, February 2006.
  10. National Quality Forum. "Serious Reportable Events in Healthcare." 2002. In April 2007 an update was released adding "Artificial insemination with the wrong donor sperm or wrong egg" as the 28th "Never Event."
  11. WHO Draft Guidelines for Adverse Event Reporting and Learning Systems. WHO, 2005, p. 10.
  12. Pronovost PJ, Miller MR, Wachter RM. "Tracking Progress in Patient Safety: An Elusive Target." JAMA. 2006; 296(6): 696-699.