The topic I will speak about is accountability for medical errors and the poorly performing physician. What should be the relationship between medical licensure and medical errors? Let me begin with a brief description of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). I notice from the Federation seal that the Federation was formed in 1912. The JCAHO is almost as old, going back to 1918, when the American College of Surgeons founded the Hospital Standardization Program. When that program became too large for a single organization to run, the JCAHO was formed in 1951. The JCAHO is a private, not-for-profit organization whose mission is to improve the safety and quality of care. It is currently governed by 28 people: six are public members and the rest are health care professionals chosen by the American College of Physicians-American Society of Internal Medicine (ACP-ASIM), the American College of Surgeons (ACS), the American Dental Association (ADA), the American Hospital Association (AHA) and the American Medical Association (AMA). We are in a private sector/public sector partnership with federal and state regulatory agencies, including the state medical boards and hospital licensure agencies. We are able to be in such a partnership because we share a common goal: to improve the safety and quality of care. If we did not share this goal, we would not be able to share in the oversight of the quality of care. The JCAHO focuses on evaluating health care organizations, which is somewhat different from the themes you have heard so far with respect to evaluating individual practitioners – the focus of the licensure boards.
Beginning in 1995, the JCAHO noticed there were some very serious adverse events occurring in accredited hospitals, such as cutting off the wrong foot or a patient death from an overdose of chemotherapy. So the question we faced was, “Why was this happening in good organizations with good clinicians?” Of course, this issue really came to the public’s attention with the publication of the Institute of Medicine (IOM) report, To Err Is Human: Building a Safer Health System. In that report, based on existing studies, the IOM estimated that 44,000 to 98,000 patients die each year from medical errors. Even those who question some of the statistics agree there are probably at least 44,000 people who die from preventable medical errors each year. One such death is one too many; clearly 44,000 makes this a problem that health care practitioners and organizations must address on a systematic basis.
So one of the things the JCAHO did as we struggled with this question was to ask, “What is it that goes on in health care that would lead to these kinds of risks?” We started to talk to other types of high-risk organizations and to experts who study these high-risk settings. We talked with human factors psychologists and with engineers to identify the characteristics of any endeavor that carries a high risk of errors. Those characteristics include having multiple steps and processes: the more steps there are in a process, the greater the likelihood that one of them will go wrong. Other characteristics of high-risk organizations include complex processes with interactive steps and choice points; time-compressed processes, in which you are trying to do everything in a very short time span; tightly coupled processes, in which once step A is taken, you have no choice except to go immediately to step B; and finally, any process that involves human beings, because we know, as the IOM stated, “to err is human.”
When a serious error occurs, we search for the cause, and most of the time we immediately find the cause is human error. The human error may result from a variety of factors: lack of knowledge or skill; forgetfulness; lack of attention (such as that which results from lack of sleep); poor motivation; or carelessness, negligence or recklessness. The first two of these sources of error are built into all of us. Our human hardware and software are programmed to make errors. One of the things that currently makes us superior to computers is the degree of parallel processing and fuzzy logic our brains use to figure things out, which permits us to rapidly adapt to changes in our environment in the face of incomplete information. But if you use fuzzy logic, you have to recognize that from time to time the decision that is made or the action that is taken will turn out not to be the right one. It will be, in retrospect, an error. So we are actually programmed to make errors, because eliminating all errors would require programming us in a way that would leave us unable to be rapidly adaptable beings. Consequently, we have to assume that error is part of being human.
So our challenge is that when we examine adverse outcomes from mishaps, we often do find that the immediate cause is what a doctor, nurse or pharmacist did (or perhaps what all three of them did). Our temptation has been to stop at that point because we have found the “cause.” Why do we stop? First, because built into all of us is a desire to say that someone has to be responsible, and we have found the person (or persons) responsible.
The second reason we stop at people is that those of us in health care are steeped in the ethical imperative: First, do no harm. That is what we want to do before we even start treating patients. We want to do no harm, and that is so hardwired into us that it is very easy to say, “Ah, there’s the person who violated this ethical imperative and who caused the harm.” If you stop at this usual immediate cause – the person – what is the usual solution? The usual solution is first to blame whoever is responsible and say, “You did it.” Certainly, the responsible person feels bad and may be disciplined. Second, we retrain that particular individual so he or she will not make that error again, saying, “You prescribed these two drugs at the same time. Do you not realize there is a potential adverse interaction? Do not do that again.”
What are the results of this usual solution? The first is that one person, the one who made the error (assuming the error was based on forgetfulness or lack of skills or knowledge), will not make that error again. I happen to be a psychiatrist. I certainly would not repeat the error of prescribing a medication that would result in an adverse drug interaction after a patient has been harmed, somebody has shamed me about it and I have gone through retraining. The problem is that the retraining process has helped only me. My colleagues in psychiatry have not gone through the same retraining. They may not be able to remember, or may not even be aware of, every potential drug interaction. The consequence is that the error persists. One person – me – has improved. However, the rest of my colleagues are left with the same human frailty and the same vulnerability to the same error. Worse than that, because of the shame I experienced related to this error, everyone else realizes they do not want to be in the same position. So, from now on, neither my colleagues nor I are going to want to tell anyone that errors have been committed. We are going to try to hide them. In fact, errors will even be hidden from other people within the organization – not just from the public or the state licensing board, but even from our colleagues. As a result, potentially another 44,000 people will die; but we do not know they are dying from mishaps because collectively we do not know that the errors occurred.
People who have studied errors indicate our problem has been – not just in health care, but in general – stopping at the level of the individual, the proximate cause. What we really need to do is go to the root causes. The root causes are the systems and processes in which people work. These systems and processes sometimes can force errors. If two potentially dangerous drugs sitting next to each other in an emergency cart have similar names and similar packaging but different uses, I suggest that is more than just enabling an error. It is setting up someone to commit an error. If handwritten orders cannot be read, but the pharmacist does not realize he or she is reading them wrong, an error is enabled. If we install a system that helps the physician realize when he or she has prescribed two drugs that have an adverse interaction, as with an automated order-entry system linked to a decision support system, we will help prevent these errors. In designing and evaluating the systems in which we work, we have to remove the systems that force errors, we have to fix the ones that enable errors and we have to put in place systems that will prevent errors – or protect patients from the effects of errors we still make.
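To make the decision-support idea concrete, here is a minimal sketch in Python of the kind of interaction check such an order-entry system might run at the moment a new prescription is entered. The interaction table, drug names and function names are illustrative assumptions, not any real system's interface or real clinical data.

```python
# A toy drug-interaction check of the kind a computerized order-entry
# system might run before accepting a new prescription. The interaction
# table below is a placeholder, not real pharmacology.
INTERACTIONS = {
    frozenset(["drug_a", "drug_b"]): "combination may cause a serious adverse reaction",
}

def check_new_order(new_drug, active_medications):
    """Return warnings for known interactions between the newly ordered
    drug and the patient's current medication list."""
    warnings = []
    for current in active_medications:
        pair = frozenset([new_drug.lower(), current.lower()])
        if pair in INTERACTIONS:
            warnings.append(f"ALERT: {new_drug} + {current}: {INTERACTIONS[pair]}")
    return warnings

# Usage: the prescriber is warned before the order takes effect,
# instead of relying on every clinician remembering every interaction.
for alert in check_new_order("drug_a", ["drug_b", "drug_c"]):
    print(alert)
```

The point of the sketch is the system design, not the code: the knowledge of the interaction lives in the shared system rather than in any one clinician's fallible memory.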
You have been talking so far today about evaluating individual performance and about continuing medical education. What is it we need to do differently to help clinicians avoid errors? Often, errors result from a breakdown in communication or information systems: key information simply has not gotten to the person who needs it. Often they result from the unpredictability of the processes we work within. If we standardized procedures, staff – for example, nurses and surgeons in the operating room – would be more likely to consistently do the “right thing.”
When people study safety, they often use a diagram (see Figure 1) in which an action occurs at the “sharp end.” The action could be taking care of patients in a hospital or clinic. It could be serving clients in a bank. People at the sharp end work within the larger context of the organization’s systems and processes – which are called the “blunt end.” People who have studied high-reliability organizations to find out how they maintain safety in those settings have found a couple of things.
High-Reliability Organizations
First, the high-reliability organization tries to stabilize the systems in the blunt end as much as possible. Second, it attempts to build into the blunt end systems that will prevent errors (like the automated order-entry system I described earlier). Third, it attempts to create a monitoring system that will pick up the first sign that something is going wrong so that intervention can reduce the risk or prevent harm. At the sharp end where people are, researchers talk about people “creating safety.” In other words, there will always be variations and surprises that occur in the client or the patient. The person at the sharp end creates safety for the client or the patient at the intersection between the standardization of the blunt end and the variation at the sharp end.
I have added health care to the list of high-risk/high-reliability enterprises because we realize health care is a high-risk endeavor. I believe it can also become a high-reliability endeavor. What we have at the blunt end are the systems and processes in the health care system. They may be the systems and processes in place in a hospital or in an individual physician’s office. They may be the way pharmaceuticals are packaged, labeled and named, or the way medical equipment operates. I am reminded of one example related to equipment from the airline industry, which is a high-risk/high-reliability industry. As investigators started to look at why a particular type of plane was crashing, they discovered it was related to a human factor – the fact that the pupils of our eyes dilate when we are frightened, and as a result, our depth of focus is reduced. The dials on this particular plane’s instrument panel were perfectly readable as long as the pilot was calm. However, as soon as the pilot panicked, perhaps when an engine went out, the dials were too small to keep in focus. So the engineers concluded that the dials needed to be bigger so they could be read when somebody was scared. That change reduced the number of crashes occurring with that particular airplane. That was an example of human factors research that led to a change in the design of equipment.
So, the clinician actually creates safety around the variation that occurs in the patient. What does that mean about the clinician’s relationship to the health care organization? If I am going to be responsible for creating safety for my patients in response to their variation, I would like to have as firm a foundation as possible. I would like to have as little variation as possible in the blunt end. Turn the blunt end/sharp end triangle upside down: I would like to have as little variation as possible in those systems and processes on which I have planted my feet (the blunt end), so that my foundation is stable while I am trying to make adjustments for my patient. If both the foundation (the blunt end) and my interaction with the patient (the sharp end) are in constant flux, I will have a much harder time trying to create safety for my patient. What does this mean in terms of how we reduce errors? It means we need to shift from always thinking about the proximal cause (the person whom we name, blame and shame) to thinking about a root-cause, systems-analysis approach. When an adverse event occurs, we ask, “What have we learned by looking at the systems the person works in? How can we change those systems to reduce these errors?”
Retrospectively, that means when an error occurs, we need a safe environment for reporting – one in which a person who has observed or committed an error does not feel that reporting it is likely to result in something bad happening to him or her. Because of the punitive environment that exists within health care organizations, the organizations themselves often do not learn about the errors. If an environment is created that facilitates reporting, then people can be expected to report errors internally in the organization. When an organization discovers one of these errors that has resulted in death or serious harm to a patient – the Joint Commission calls them sentinel events – the organization should conduct a root-cause analysis. The organization must dig down into its systems and processes and discover what can be changed so that the error will not be repeated, not just by this clinician but by other clinicians for whom it is also human to err. Then the organization must make those changes.

Finally, it makes no sense for one organization or, I would suggest, for one doctor, to figure out how to avoid a particular error and then keep that knowledge to himself. If our ethical obligation is to first do no harm, I would suggest that means we need to share that information with others. Health care organizations might compete over efficiency or better outcomes, but I do not think they should ever compete over how to protect patients from avoidable harm. That is information that should be readily shared. That is why, for example, the JCAHO encourages organizations that have a sentinel event and do a root-cause analysis to share this information with us. The JCAHO keeps the information confidential. After making it anonymous, we put the information into a database we use to produce a notice, called Sentinel Event Alert, sent to all accredited organizations. In effect, each Alert says, “Look, this is a bad event that can happen. Here is what a number of organizations found in common when they did root-cause analyses of these events, and here are changes that a number of these organizations concluded would help avoid a recurrence of this event. The JCAHO suggests you consider these changes.” And, in fact, they do.

One of the first Sentinel Event Alerts we published was about undiluted potassium chloride (KCl) on inpatient units. All the hospitals that had a death resulting from the infusion of undiluted KCl and reported it to us also reported that their root-cause analyses identified a common root cause: undiluted KCl on the inpatient unit enabled a nurse to pick it up by mistake, without realizing it was undiluted, and infuse it into a patient. The organizations looked a little deeper and concluded they did not really need undiluted KCl on the inpatient unit. They removed it from the inpatient unit and required that it be diluted before it leaves the pharmacy. Doctors and nurses now could not make the error. As you might expect, there have been few, if any, reported KCl deaths since that Sentinel Event Alert. Why would it be appropriate to think that every hospital had to learn this on its own? That is the kind of information everybody should know.

What I have described about errors, their causes and how to prevent them has profound implications for how we think about and treat clinicians who are involved in errors. But all that I have talked about so far is retrospective. That is, what should happen when an error has occurred? What should a good organization do?
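The step of “making it anonymous” before pooling reports is easy to picture in code. Below is a minimal sketch in Python of de-identifying a sentinel event report before it enters a shared database; the field names and the sample report are invented for illustration and do not describe the JCAHO's actual process.

```python
# Hypothetical fields that would identify the organization, patient or
# clinician; these names are assumptions for illustration only.
IDENTIFYING_FIELDS = {"hospital_name", "patient_name", "clinician_name",
                      "medical_record_number", "city", "state"}

def deidentify(report):
    """Strip identifying fields, keeping only the lessons learned."""
    return {k: v for k, v in report.items() if k not in IDENTIFYING_FIELDS}

# An invented sample report, echoing the KCl example above.
report = {
    "hospital_name": "Example Hospital",
    "event_type": "fatal infusion of undiluted KCl",
    "root_cause": "undiluted KCl stocked on the inpatient unit",
    "corrective_action": "remove concentrated KCl from units; dilute in pharmacy",
}
print(deidentify(report))
# Only event_type, root_cause and corrective_action survive, so the
# lesson can be shared without exposing who reported it.
```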
I would like to suggest that even in an office practice, when an error occurs, the clinician should do the same thing – do a root-cause analysis rather than just say, “I am fallible. I made a mistake. I hope nothing happened or I’ll talk with the patient if something did happen.” What can be learned that could be shared with others so the mistake does not occur again?
But there is also a need to be prospective in designing systems for safety. A prospective approach to safety is characteristic of engineers but not, quite frankly, of health care professionals. That is, we have a tendency to think that when we have figured out how to do something, it will work okay, and that if it does not work okay, either I made an error or it was because of an unpredictable response of the patient. Engineers do not think that way; perhaps they are more pessimistic. When engineers design something, they say, “This looks like it should work. What could go wrong?” They spend a lot of time trying to figure out what could go wrong, what the effect would be if it went wrong and what would happen if two or more things went wrong at once. If the effect is critical, they redesign the system, either to change how it works or to build in a redundancy, in order to avoid the critical effect. The Joint Commission currently is introducing this proactive approach into its accreditation standards for particularly high-risk areas of health care.
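The engineers' habit of asking “what could go wrong, and how bad would it be?” is often formalized as failure mode and effects analysis (FMEA). Below is a minimal sketch in Python of that style of analysis, scoring each failure mode by severity, likelihood and detectability; the failure modes, ratings and redesign threshold are illustrative assumptions, not Joint Commission requirements.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int     # 1 (minor) .. 10 (catastrophic)
    occurrence: int   # 1 (rare) .. 10 (frequent)
    detection: int    # 1 (easily caught) .. 10 (likely to slip through)

    @property
    def risk_priority(self):
        # Conventional FMEA risk priority number: redesign the worst first.
        return self.severity * self.occurrence * self.detection

# Invented failure modes echoing examples from this talk.
modes = [
    FailureMode("look-alike drugs stored side by side", 9, 4, 7),
    FailureMode("illegible handwritten order", 7, 6, 5),
    FailureMode("pump alarm silenced and forgotten", 8, 2, 6),
]

REDESIGN_THRESHOLD = 200  # illustrative cutoff, chosen arbitrarily
for m in sorted(modes, key=lambda m: m.risk_priority, reverse=True):
    action = "REDESIGN" if m.risk_priority > REDESIGN_THRESHOLD else "monitor"
    print(f"{m.risk_priority:4d}  {action:8s}  {m.description}")
```

The design choice worth noting is that the analysis happens before any patient is harmed: the system is redesigned because the arithmetic says the risk is unacceptable, not because a sentinel event has already occurred.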
What does all this mean for licensure? I have some thoughts and questions – unfortunately, more questions than proposed solutions. Licensure still focuses on the possession of particular knowledge and skills. Our understanding of errors suggests there are knowledge and skills that specifically have to do with how the individual relates to errors and how he or she relates to the systems within which he or she works. Should licensure continue to focus on things like criminal acts and gross negligence? Absolutely, and those things should obviously be reported to licensure agencies. What about most of the errors that occur? I would like to suggest they should, in general, not be reported to licensure agencies, and I will return to that in a minute. What about the new competencies that will be required to effectively take on the role of creating safety at the sharp end? Medical school does not train physicians to do that today, but if that is one of the roles of the individual clinician, we should be thinking not only in terms of the education of physicians, but also about the competencies we expect them to have when they complete training.
Should the licensure agency routinely get all reports of errors? Not if the result is going to be punitive, because that would result in the hiding of errors rather than the discovery of them. On the other hand, should licensure agencies get de-identified reports about errors? I think yes, because that tells licensure agencies where some of the patient safety issues are that should be the focus of licensure examinations. Arguably, it might be best for the discovery and reporting of errors, and therefore best for the safety of patients, if the reporting of errors were confidential, even if we thereby erred on the side of occasionally letting somebody who is making too many errors because of incompetence continue to practice. Of course, such an oversight system would not be credible to the public. Some have suggested there may be a small list of errors whose reporting, potentially to regulatory agencies, should be mandatory. Some errors are clear-cut, almost always preventable and should never occur, like wrong-site surgery. Mandatory reporting of this smaller set of events may enhance the public’s trust in the quality oversight system, even if not all errors are publicly disclosed.
Let me close with just two thoughts about upcoming issues. We have obviously heard a lot about the issue of changing technology and science. I would suggest that all new technologies bring new dangers with them – sometimes dangers that are preventable. As we employ computer systems to help us with our failing human memories, one type of problem can occur when computers are programmed incorrectly, thus creating errors that will be replicated for every patient, rather than affecting only the one patient a clinician is treating when he or she makes an error. There will be a new set of errors associated with technology. The second issue has to do with telehealth. Telehealth is a new area in which we have to think of the blunt end as everything that lies between the patient and the clinician. What are the risks that occur within these systems, and how can variation be reduced in the telehealth system so that the clinician will be able to create safety for the individual patient at the other end of the electronic connection?
This article was reprinted from Medical Licensure in the 21st Century: Symposium Proceedings Sept. 6–7, 2000, Washington, D.C. The symposium brought together leaders from medicine, medical education and government to help define the future of medicine, so that licensure will accurately reflect medical education and practice. You may order the hardcover book by calling (817) 868-4076, or from the Federation website at http://www.fsmb.org (from the home page, select the “Publications” link and then the “Order Form” link).