CPIA Module 19, Section 5: Case Study – Governance Failure Analysis
MODULE 19: GOVERNANCE, POLICY & ETHICS

Section 19.5: Case Study – Governance Failure Analysis

Learn from the mistakes of others. We will perform a root cause analysis on a realistic case of informatics governance failure, dissecting how a lack of oversight led to patient harm and what structures could have prevented it.


Dissecting a Catastrophe to Build a Culture of Safety.

19.5.1 The “Why”: The Proactive Power of Learning from Failure

In healthcare, we are trained to strive for perfection. As a pharmacist, your entire career has been built on a foundation of precision, accuracy, and the relentless pursuit of zero errors. Yet, despite our best efforts and intentions, mistakes happen. Systems fail. Patients are harmed. In these moments, the character of an individual and the maturity of an organization are revealed. A low-reliability organization responds with blame, seeking to punish the individual who made the final error in a long chain of system failures. A high-reliability organization, in contrast, responds with a deep, almost obsessive curiosity. It asks not “Who did this?” but “Why did our system allow this to happen?” This fundamental shift in perspective is the heart of a just culture and the engine of all meaningful safety improvement.

Nowhere is this more critical than in health informatics. A flawed piece of clinical software is not like a single misfilled prescription that affects one patient at one point in time. A flawed clinical decision support (CDS) alert or a poorly designed order set is a latent error—a hidden trap—that can be replicated hundreds or thousands of times, endangering countless patients until it is discovered. For this reason, we cannot afford to wait for our own failures to learn. We must become students of failure, actively seeking out and dissecting the mistakes of others to proactively identify and mitigate the same vulnerabilities within our own systems. Performing a rigorous Root Cause Analysis (RCA) on a governance failure is not an academic exercise; it is an essential professional competency for any informatics leader.

This section is designed to be your masterclass in that process. We will step into the role of safety investigators and perform a detailed forensic analysis of a fictional but highly realistic case study: a patient harm event caused not by a single clinician’s mistake, but by a catastrophic breakdown in informatics governance. We will peel back the layers of this failure, moving from the sharp end of the error—the harm itself—to the blunt end, where the flawed decisions were made months or years earlier. By understanding how a series of seemingly small deviations from good governance can cascade into a tragedy, you will learn to recognize the early warning signs in your own organization. This deep dive will equip you with the analytical tools to not only respond to failures but, more importantly, to build the resilient, well-governed systems that prevent them from ever happening in the first place.

Retail Pharmacist Analogy: The Catastrophic “Speed Up” Initiative

Imagine your national pharmacy chain’s corporate leadership, under pressure to improve profits, announces a new initiative: “FillSafe in Five.” The goal is to reduce the average prescription fill time to under five minutes. A small team at corporate, consisting of executives and IT developers but no practicing pharmacists, is tasked with modifying the dispensing software to enforce this goal.

  • The Governance Failure: The project is never vetted by any clinical governance body. The corporate P&T committee isn’t consulted. The regional clinical advisory boards of pharmacy managers are not included. It’s a top-down mandate driven by finance, not safety.
  • The Flawed “Solution”: The IT team, under orders to “reduce clicks,” makes several changes. They remove the mandatory hard stop that forced pharmacists to view a patient’s allergy profile before verification. They disable the drug-image verification screen to save time. Most dangerously, they re-program the dosing alerts; to reduce “alert fatigue,” they decide that alerts will only fire on doses that are more than 10 times the normal maximum dose.
  • The Rollout: The new software is pushed out to all 5,000 stores overnight with a single email announcement. There is no formal training, no pilot testing at a small group of pharmacies, and no opportunity for feedback.
  • The Tragedy: A week later, a busy pharmacist, under pressure to meet the new “five-minute” metric, receives a prescription for amoxicillin for a child. The doctor has accidentally written “2500 mg” instead of “250 mg.” The 10x dosing alert, now re-programmed to only fire on doses >4000 mg for amoxicillin, remains silent. The pharmacist, skipping through screens to beat the clock, verifies the prescription. The child receives a massive overdose and suffers acute kidney failure.

The Root Cause Analysis: Who is at fault? The pharmacist who made the final check? The technician who filled it? Or the system of flawed governance that created the conditions for the error to be inevitable? A true RCA would point directly to the lack of clinical oversight, the bypassing of established safety procedures, and a culture that prioritized speed over safety. The error was not the cause; it was the symptom. This case study will apply that same forensic lens to a hospital informatics failure.

19.5.2 The Case of the Silent Alert: A Governance Failure at General Hospital

To understand the anatomy of a governance failure, we will examine the story of General Hospital, a respected 300-bed community hospital, and their well-intentioned but ultimately disastrous project to improve the safety of Direct Oral Anticoagulants (DOACs).

The Initial Spark: A Good Catch and a Good Idea

The story begins, as many safety initiatives do, with a “good catch.” An experienced renal pharmacist, Dr. Lena Hanson, was reviewing orders for a 78-year-old patient with Stage 4 Chronic Kidney Disease (CKD), whose creatinine clearance (CrCl) was calculated at 22 mL/min. The hospitalist had ordered the standard dose of apixaban, 5 mg twice daily, for atrial fibrillation. Dr. Hanson recognized that the FDA-approved labeling for apixaban calls for a dose reduction to 2.5 mg twice daily in patients who meet at least two of the following criteria: age ≥ 80 years, body weight ≤ 60 kg, or serum creatinine ≥ 1.5 mg/dL. While this patient met only one criterion (SCr ≥ 1.5 mg/dL), Dr. Hanson’s clinical judgment, supported by various clinical guidelines, suggested that the full dose was likely unsafe given the patient’s very poor renal function.
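To make the labeled rule concrete, here is a minimal illustrative sketch of the “two of three” criteria Dr. Hanson applied. The function and parameter names are hypothetical teaching constructs, not part of any EHR; the rule itself comes from the apixaban prescribing information summarized above, and it is deliberately not a substitute for clinical judgment.

```python
def apixaban_label_dose_reduction(age_years: int, weight_kg: float, scr_mg_dl: float) -> bool:
    """Return True if the labeled criteria for reducing apixaban to 2.5 mg twice
    daily are met: at least two of (age >= 80 years, weight <= 60 kg,
    serum creatinine >= 1.5 mg/dL). Illustrative sketch only."""
    criteria_met = sum([
        age_years >= 80,
        weight_kg <= 60,
        scr_mg_dl >= 1.5,
    ])
    return criteria_met >= 2

# Illustrative values: the case states only that the patient was 78 and met the
# SCr criterion; the weight and exact SCr shown here are assumptions for the example.
print(apixaban_label_dose_reduction(age_years=78, weight_kg=70, scr_mg_dl=1.8))  # False (one criterion met)
```

Note that the labeled rule alone would not have flagged this order; Dr. Hanson’s concern rested on the very low CrCl and her clinical judgment, which is exactly the kind of nuance a single numeric cutoff cannot capture.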

She contacted the hospitalist, who readily agreed to the dose reduction, and she filed a safety report documenting the near-miss. In the report, she wrote, “This was a potentially significant overdose averted by pharmacist intervention. The EHR provided no warning to the prescriber about the potential need for a DOAC dose adjustment in this renally impaired patient. We should build a CDS alert to prevent this.”

The Rogue Project: The “Skunkworks” Solution

The hospital’s Chief of Hospital Medicine, Dr. Alan Reed, saw Dr. Hanson’s report. Dr. Reed was a technology enthusiast and a hospital leader known for “getting things done.” Frustrated by what he perceived as the slow pace of the formal IT governance process, he decided to fast-track a solution. He approached a talented but junior EHR analyst in the IT department, David Chen, directly.

“David,” Dr. Reed said, “I need an alert. A simple one. If a physician orders apixaban, rivaroxaban, or edoxaban, and the patient’s most recent creatinine clearance is less than 30 mL/min, I want a pop-up that says ‘Severe renal impairment detected. Consider dose adjustment or alternative anticoagulant.’ Can you build that for me?”

David, eager to please a senior physician leader, agreed. He saw it as a simple, straightforward request. Dr. Reed and David worked on the logic together over a few days. They did not consult the Pharmacy & Medication Safety CAG. They did not involve nursing informatics. They did not present the proposal to any formal governance body. They believed they were being agile and responsive, cutting through bureaucratic red tape to implement a needed safety improvement. They called it the “DOAC Task Force,” a team of two.

The Flawed Build and the “Paper” Test

David built the alert. The logic seemed simple on the surface. However, he made a critical, and invisible, technical error. The EHR had two different fields for creatinine clearance: one that was automatically calculated by the system based on lab values, and another that could be manually entered by a pharmacist in a clinical note. David’s alert logic, for technical reasons he didn’t fully understand, only pulled from the manually entered field. He was unaware that this field was only used by pharmacists in about 30% of cases; for 70% of patients, the field was blank, even if the system-calculated value was readily available elsewhere in the chart.
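To make the technical flaw concrete, the following is a minimal sketch of how the broken trigger logic behaved. The field names (pharmacist_documented_crcl, system_calculated_crcl) and the function are hypothetical stand-ins for illustration, not actual EHR build syntax.

```python
DOAC_DRUGS = {"apixaban", "rivaroxaban", "edoxaban"}

def flawed_alert_fires(order_drug: str, patient: dict) -> bool:
    """Reconstruction of the flawed logic: it reads only the manually
    documented CrCl field. When that field was never populated (true for
    roughly 70% of patients), the comparison is skipped and no alert fires."""
    if order_drug.lower() not in DOAC_DRUGS:
        return False
    crcl = patient.get("pharmacist_documented_crcl")  # wrong field: ignores the system-calculated value
    if crcl is None:  # null value -> silently no alert (not fail-safe)
        return False
    return crcl < 30

# Mr. Davis's situation: the system-calculated CrCl (24 mL/min) existed, but the
# field the alert actually read was blank, so the alert stayed silent.
mr_davis = {"system_calculated_crcl": 24, "pharmacist_documented_crcl": None}
print(flawed_alert_fires("apixaban", mr_davis))  # False -- the silent failure
```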

To “test” the alert, David pulled up a test patient, manually entered a CrCl of 15 mL/min into the pharmacist note field, ordered apixaban, and the alert fired correctly. Dr. Reed reviewed this test on David’s computer screen and approved it. “Looks perfect,” he said. “Let’s get it live next week.” They submitted the change request to the Change Control Board as an “expedited” request, with Dr. Reed’s signature as the clinical sponsor. The CCB, seeing the signature of a senior physician leader and a seemingly simple technical change, approved it for deployment without questioning the lack of formal CAG approval.
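A single demonstration on one hand-crafted patient is not a test protocol. As a minimal sketch of what a standardized alert-testing plan would typically exercise (reusing the hypothetical flawed_alert_fires function from the sketch above), the scenarios below would have exposed both the wrong-field and missing-data failures before go-live. The expected behavior for missing data is itself a design decision a governance body would need to make; here it is assumed to be fail-safe.

```python
# Each tuple: (description, drug, patient data, expected alert firing)
test_scenarios = [
    ("Manually documented low CrCl",    "apixaban",    {"pharmacist_documented_crcl": 15}, True),   # the only case David tested
    ("System-calculated low CrCl only", "apixaban",    {"system_calculated_crcl": 24},     True),   # fails in the flawed build
    ("No CrCl documented anywhere",     "rivaroxaban", {},                                 True),   # fail-safe expectation; also fails
    ("Normal renal function",           "apixaban",    {"pharmacist_documented_crcl": 90}, False),
    ("Non-DOAC medication",             "metoprolol",  {"pharmacist_documented_crcl": 15}, False),
]

for description, drug, patient, expected in test_scenarios:
    actual = flawed_alert_fires(drug, patient)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {description} (expected {expected}, got {actual})")
```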

The Silent Failure and the Tragic Outcome

Two months after the “silent” go-live (announced only in a low-priority IT newsletter), Mr. Charles Davis, an 82-year-old male weighing 58 kg, was admitted for pneumonia. His baseline serum creatinine was 1.8 mg/dL, and the EHR’s lab system automatically calculated his CrCl to be 24 mL/min. He had a history of atrial fibrillation, and the admitting physician ordered apixaban 5 mg twice daily.

The physician, Dr. Miller, placed the order. The alert did not fire. Because no pharmacist had manually entered a CrCl into the specific note field the alert was looking for, the alert’s logic found a null value and did not trigger. Dr. Miller, a busy resident, assumed that the absence of an alert meant the dose was appropriate for the patient’s renal function. The order was verified by a busy evening-shift pharmacist, who also assumed that any critical dosing guidance would be handled by the EHR’s advanced decision support.

Mr. Davis received apixaban 5 mg twice daily for three days. On the fourth day, he had a massive gastrointestinal bleed. He was transferred to the ICU, required multiple blood transfusions, and ultimately suffered a stroke related to hemorrhagic shock. He survived but was left with permanent neurological deficits and required discharge to a skilled nursing facility.

19.5.3 The Root Cause Analysis (RCA): Deconstructing the Failure

Mr. Davis’s tragic outcome triggered an immediate, high-priority safety investigation. The initial reaction from some leaders was to blame Dr. Miller for not manually checking the dose or the verifying pharmacist for missing the error. However, the hospital’s patient safety officer, guided by the principles of a just culture, insisted on a formal Root Cause Analysis to understand the systemic failures that allowed the error to occur. As the lead informatics pharmacist, you are tasked with co-leading this RCA.

The RCA team’s first step is to visually map out the contributing factors using a fishbone diagram (also known as an Ishikawa diagram). This tool helps to move beyond the single “cause” and explore the multiple latent failures across different domains that all contributed to the adverse event.

Fishbone Diagram: The Systemic Failures Leading to the ADE

Problem: Patient Harm from DOAC Overdose
Governance & Oversight
  • “Skunkworks” project bypassed all formal governance.
  • No review by I&T Steering Committee.
  • Pharmacy & Med Safety CAG was not consulted.
  • CCB rubber-stamped based on requester’s authority.
Policy & Procedure
  • No institutional policy for design, testing, approval of CDS.
  • No standardized testing protocol for clinical alerts.
  • Change management policy not followed (no CAG approval).
People & Communication
  • Lack of multi-disciplinary team (only MD + analyst).
  • “Go-live” not communicated effectively to clinicians.
  • Over-reliance on technology (“The computer will catch it”).
  • Culture of rewarding “cutting through red tape”.
Technology & Design
  • Flawed technical logic (wrong CrCl field).
  • Inadequate testing (single “happy path”).
  • Alert not fail-safe (null value = no alert; see the redesign sketch after this diagram).
  • No analytics plan to monitor alert performance post-go-live.
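
By contrast, a fail-safe version of the same trigger, with the missing-data and post-go-live monitoring gaps addressed, might look like the sketch below. The field and function names remain hypothetical, and the CrCl < 30 mL/min cutoff is kept only to mirror the original request; as the CAG discussion later shows, even a fail-safe build still needs its clinical logic vetted.

```python
import logging
from typing import Optional

DOAC_DRUGS = {"apixaban", "rivaroxaban", "edoxaban"}
log = logging.getLogger("doac_renal_alert")

def fail_safe_alert(order_drug: str, patient: dict) -> Optional[str]:
    """Return an alert message, or None when no alert is warranted.
    Two deliberate differences from the flawed build: every available CrCl
    source is consulted, and missing data produces a warning instead of
    silence. Each evaluation is logged so firing rates can be monitored
    after go-live."""
    if order_drug.lower() not in DOAC_DRUGS:
        return None

    crcl = patient.get("pharmacist_documented_crcl")
    if crcl is None:
        crcl = patient.get("system_calculated_crcl")  # fall back to the calculated value

    log.info("DOAC renal alert evaluated: drug=%s crcl=%s", order_drug, crcl)

    if crcl is None:
        return ("Renal function could not be assessed for this DOAC order. "
                "Verify creatinine clearance before proceeding.")
    if crcl < 30:
        return ("Severe renal impairment detected (CrCl < 30 mL/min). "
                "Consider dose adjustment or an alternative anticoagulant.")
    return None

# Mr. Davis would have been flagged: only the system-calculated value exists.
print(fail_safe_alert("apixaban", {"system_calculated_crcl": 24}))
```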

19.5.4 Masterclass Deep Dive into Each Governance Failure Point

The fishbone diagram gives us the “what,” but the real learning comes from a deep exploration of the “why” and “how.” We must now dissect each of the major domains of failure to understand the precise nature of the breakdown and, most importantly, the specific governance structures that would have prevented it.

Failure Domain 1: Governance & Oversight

This was the primary, foundational failure from which all other errors stemmed. The “DOAC Task Force” was an unsanctioned, rogue project that operated completely outside the established architecture of institutional decision-making. This circumvention of governance was not just a procedural misstep; it was a fatal flaw that eliminated every safety net the organization had painstakingly put in place.

The Seductive Danger of “Agility”

Dr. Reed and David likely believed they were being innovative and efficient. They were “moving fast and breaking things.” While agility is a prized attribute in software development, in health informatics, process is safety. The formal governance structure is not “red tape”; it is a series of mandatory, multi-disciplinary peer reviews designed to stress-test ideas, uncover hidden assumptions, and prevent the exact kind of error that occurred. The belief that a small, homogenous team can replace the collective wisdom of a formal, multi-disciplinary CAG is a common and dangerous hubris.

How Proper Governance Would Have Prevented the Error
Pharmacy & Medication Safety CAG
What should have happened: Dr. Reed’s idea for an alert should have been submitted as a formal request to this committee. The committee, co-chaired by an informatics pharmacist, would have placed it on their agenda for discussion.
How it would have averted the tragedy: This single step would have likely prevented the entire event. During the CAG meeting, the following would have occurred:
  • Clinical Nuance: Dr. Hanson (the renal pharmacist) would have immediately pointed out that DOAC dosing is complex and not based on a single CrCl cutoff. The criteria involve age, weight, and specific CrCl ranges that differ for each drug. The group would have rejected the “simple” alert logic as clinically incorrect and unsafe.
  • Technical Expertise: You, as the informatics pharmacist, and the IT analyst would have questioned the technical source of the CrCl value, leading to the discovery that the proposed field was unreliable.
  • Nursing Workflow: The nursing representatives would have asked critical questions about alert fatigue and how this new alert would fit into their workflow.

I&T Steering Committee
What should have happened: While this specific alert build would not typically rise to the level of the Steering Committee, a larger “DOAC Safety Initiative” might have. This committee would have reviewed the project’s goals, budget, and required resources.
How it would have averted the tragedy: The Steering Committee provides top-down reinforcement of the governance process. If Dr. Reed had tried to get resources for his project outside the formal process, the CIO on the committee would have redirected him, stating, “This is a great idea, Alan, but it needs to follow our established governance pathway. Please submit it to the Pharmacy CAG for clinical vetting first, and then we can discuss resource allocation.” This enforces the rules for everyone.

Change Control Board (CCB)
What should have happened: The CCB’s primary function is to be the final operational gatekeeper. Seeing a request for a new, high-risk CDS alert, the CCB manager should have immediately checked for the required prerequisite: formal approval from the relevant clinical governance body.
How it would have averted the tragedy: A robust CCB acts as a final, crucial safety check. Upon seeing the request, the manager should have told David, “We cannot proceed with this change. The request lacks documented approval from the Pharmacy & Medication Safety CAG. Please obtain that approval and resubmit.” This would have forced the project back into the correct, safe pathway. The CCB’s failure was in deferring to the authority of the requester rather than adhering to its own process.