MODULE 12: KPI & QUALITY IMPROVEMENT ANALYTICS

Section 12.5: Case Study – Reducing Alert Overrides

A practical, in-depth case study applying all the module’s concepts to tackle a real-world problem: using the PDSA cycle and KPI tracking to reduce alert fatigue and improve the effectiveness of clinical decision support.

From Theory to Practice: A Step-by-Step Guide to Fixing Alert Fatigue.

12.5.1 The “Why”: The Capstone Challenge

Throughout this module, we have assembled a powerful toolkit for quality improvement. We’ve learned the art of defining meaningful KPIs, the science of testing changes with PDSA cycles, the strategic necessity of benchmarking, and the communication power of a well-designed dashboard. Now, it is time to put it all together. This case study is the capstone of your learning, the final exam where you will apply every concept from this module to solve one of the most persistent, frustrating, and dangerous problems in all of clinical informatics: alert fatigue.

Alert fatigue is the cognitive state of exhaustion and desensitization that occurs when clinicians are exposed to an excessive number of low-value, irrelevant, or non-actionable alerts from the EHR. When providers are forced to navigate a minefield of “nuisance” alerts, they develop a dangerous habit: they start overriding all alerts, including the critically important ones that are designed to prevent catastrophic harm. The very tool that was created to enhance safety becomes a source of risk. It’s a classic example of a good intention leading to a disastrous unintended consequence.

Tackling alert fatigue is the perfect case study because it sits at the intersection of clinical practice, human factors engineering, and data analytics. It cannot be solved with a simple technical fix. It requires a deep understanding of workflow, a respect for the cognitive load of clinicians, and a rigorous, data-driven approach to improvement. It is a problem that requires you to be more than just a data analyst; you must be a clinical detective, a project manager, a diplomat, and a storyteller.

In this section, we will follow a pharmacy informatics analyst as they are tasked by their hospital’s leadership with “fixing the alerts.” We will walk, step-by-step, through their journey, from the initial, vague complaint to the development of a house-wide dashboard that proves the value of their interventions. You will see firsthand how the theoretical frameworks we have discussed are applied to a messy, real-world problem to produce a tangible improvement in patient safety.

Retail Pharmacist Analogy: The “Stomach Upset” Allergy Alert

You are working in a busy retail pharmacy. A patient comes to the counter to buy a bottle of OTC ibuprofen. At the point of sale, your system requires you to enter the patient’s name. When you do, a massive, red, hard-stop alert fills the screen: “ALLERGY WARNING: Patient has a documented allergy to NSAIDs. DO NOT DISPENSE.”

You, as a diligent pharmacist, click to view the details of the allergy. The description reads: “Ibuprofen – Causes upset stomach.” The patient tells you, “Oh yeah, I get a little heartburn if I take it on an empty stomach, so I just take it with food.” This is not a true allergy. It is not an anaphylactic reaction. It is a common, manageable side effect. Yet, the system treats it with the same level of severity as a life-threatening allergy.

What do you do? You override the alert. What do you do the next time the patient comes in and the same alert fires? You override it again, but this time a little faster. What do you do after you have seen this same, clinically irrelevant alert fire a hundred times for a hundred different patients with “stomach upset” listed as their “allergy”? You stop reading the details. Your brain develops a pattern: Red Box -> Override Button. Click. Click. Done.

Now, imagine one day a patient comes in for whom the alert reads: “ALLERGY WARNING: Patient has a documented allergy to NSAIDs. DO NOT DISPENSE.” You, now conditioned by a thousand meaningless alerts, reflexively click the override button. But this time, the detail you didn’t read said: “Ibuprofen – Caused anaphylaxis and hospitalization.” You have just dispensed a potentially fatal medication because the system trained you, through a constant barrage of low-value noise, to ignore its most critical signals. This is alert fatigue. Your challenge as an informatics pharmacist is to re-tune the system so that the signal can be heard above the noise.

12.5.2 Phase 1: Quantifying the Crisis (Applying KPI Principles)

Our case study begins at General Hospital. The Chief Medical Information Officer (CMIO) calls a meeting with our hero, Alex, a new pharmacy informatics analyst. The meeting starts with a familiar refrain: “Alex, the providers are going crazy. They say they’re drowning in useless alerts from the system. They’re spending more time clicking boxes than taking care of patients. We need to do something about this. I need you to fix the alerts.”

Alex, having just completed Module 12 of the CPIA program, knows that “fix the alerts” is a vague aspiration, not an actionable goal. Their first job is to translate this frustration into a data-driven problem statement. They must resist the pressure to immediately start making changes and instead begin with a period of deep, diagnostic data exploration.

Step 1: The Initial Data Pull

Alex’s first request to the IT reporting team is simple and broad: “Can you please provide me with a raw data extract from our CDS system log for the last 90 days? I need to see every alert that fired, the date/time, the specific rule that triggered it, the user who received it, their department, and the action they took (e.g., ‘Accepted’ or ‘Overridden’).”

The result is a massive spreadsheet with over 2 million rows. This is the “data lake.” It’s full of potential, but in its current state, it’s useless to the CMIO. Alex’s job is to turn this lake into a bottle of clean drinking water.
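As a concrete sketch, here is one way that extract might be loaded for analysis, assuming a flat CSV export with illustrative column names (the real schema will vary by CDS vendor):

```python
import pandas as pd

# Illustrative column names -- the actual CDS log schema is vendor-specific.
cols = ["fired_at", "rule_name", "user_id", "department", "action"]

# Load the 90-day extract, parsing timestamps so per-day rollups are easy later.
alerts = pd.read_csv("cds_alert_log_90d.csv", usecols=cols, parse_dates=["fired_at"])

print(f"{len(alerts):,} alert firings loaded")  # ~2 million rows in this case study
print(alerts["action"].value_counts())          # e.g., Overridden vs. Accepted
```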

Step 2: From Raw Metrics to a Prioritized List

Using a data analysis tool (such as Excel pivot tables, SQL, or a BI tool like Tableau), Alex aggregates the 2 million rows of data to answer some basic questions:

  • What is the total number of alerts firing per day, on average?
  • What is the overall alert override rate across the entire system?
  • Which specific alert rules are firing most frequently?
  • For those top-firing alerts, what are their individual override rates?

This analysis yields the first crucial deliverable: a prioritized list of “problem alerts.” This transforms the vague complaint of “too many alerts” into a specific, data-backed list of top offenders.
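One way to answer all four questions, sketched in pandas against the `alerts` DataFrame from the load sketch above (the “Overridden” action label is an assumption; match it to your log’s actual values):

```python
# Flag each firing as overridden or not.
alerts["overridden"] = alerts["action"].eq("Overridden")

# Q1: average number of alerts firing per day
per_day = alerts.groupby(alerts["fired_at"].dt.date).size()
print(f"Average alerts/day: {per_day.mean():,.0f}")

# Q2: overall override rate across the entire system
print(f"Overall override rate: {alerts['overridden'].mean():.1%}")

# Q3/Q4: the most frequently firing rules and their individual override rates
top_offenders = (
    alerts.groupby("rule_name")
          .agg(total_firings=("rule_name", "size"),
               override_rate=("overridden", "mean"))
          .sort_values("total_firings", ascending=False)
          .head(5)
)
print(top_offenders)
```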

Masterclass Table: Initial Alert Analysis Results

Rank (by Volume) | Alert Rule Name | Total Firings (Last 90 Days) | Override Rate | Initial Clinical Assessment
1 | Duplicate Therapy: Potassium Chloride | 152,345 | 99.2% | Fires when KCL is in the IV fluid and also ordered as a separate K-rider. Clinically low-value in most cases.
2 | Patient is on a Statin | 98,102 | 100% | Informational only. No action required. Pure noise.
3 | Renal Dosing Adjustment: Gabapentin | 75,432 | 78.0% | Clinically important, but perhaps the logic is too sensitive or the suggestions are unclear.
4 | DDI: Warfarin – SMX/TMP | 68,990 | 96.5% | Potentially catastrophic. A very high override rate for a critical interaction is a major safety concern.
5 | IV to PO Conversion Available: Pantoprazole | 55,123 | 91.3% | Cost-saving/stewardship alert, but likely firing at the wrong time (e.g., on NPO patients).
(Other alerts)
The “Aha!” Moment

This table is a revelation. Alex can now go back to the CMIO and change the conversation. Instead of talking about a vague feeling of “too many alerts,” they can say: “The data shows we are firing over 20,000 alerts per day, with an overall override rate of 88%. However, the top 5 alerts account for roughly 25% of our total alert volume. Specifically, the ‘Duplicate Potassium’ and ‘Patient is on a Statin’ alerts are almost pure noise and contribute significantly to alert fatigue. More alarmingly, we have a 96.5% override rate on the critical Warfarin-Bactrim interaction alert.”

They haven’t solved anything yet, but they have successfully defined and scoped the problem with data. They have identified specific, high-yield targets.

Step 3: Defining the SMART KPI for the Project

Alex knows that “reduce alert overrides” is not a SMART goal. They need to focus. Given the data, the most urgent safety risk is the Warfarin-SMX/TMP alert. This will be the target for their first improvement project. They work with the CMIO and the Pharmacy & Therapeutics (P&T) committee to define a formal goal.

The SMART KPI: “Reduce the override rate for the high-severity ‘Warfarin – SMX/TMP’ drug-drug interaction alert from its current baseline of 96.5% to a rate of less than 50% within the next six months.”

  • Specific: Targets one, and only one, high-risk alert.
  • Measurable: The override rate (Overrides / Total Firings) is a clear, quantifiable metric (see the sketch after this list).
  • Achievable: A 50% target is ambitious but realistic. It acknowledges that some overrides are clinically appropriate (e.g., patient has a documented sulfa allergy, SMX/TMP is the only option, and a plan for close INR monitoring is in place).
  • Relevant: Preventing this specific, well-known interaction has a direct and significant impact on patient safety.
  • Time-bound: A six-month timeframe creates a clear deadline and sense of urgency.
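As a sketch, the Measurable component reduces to a few lines against the same extract; the rule-name string and threshold check below are illustrative:

```python
TARGET = 0.50  # the SMART KPI target: < 50% override rate

# Filter to the single alert rule this project targets (name is illustrative).
warfarin_smx = alerts[alerts["rule_name"] == "DDI: Warfarin - SMX/TMP"]

override_rate = warfarin_smx["overridden"].mean()  # Overrides / Total Firings
status = "meets" if override_rate < TARGET else "does not yet meet"
print(f"Override rate: {override_rate:.1%} ({status} the {TARGET:.0%} target)")
```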

12.5.3 Phase 2: From Diagnosis to Hypothesis (Applying PDSA Planning)

With a clear, data-driven target established, Alex moves from the “what” to the “why.” Why are providers overriding this critical alert nearly 97% of the time? To answer this, they must move beyond the quantitative data and engage with the frontline clinicians. This is the beginning of the Plan phase of the PDSA cycle.

Step 1: Assemble the Team and Conduct Root Cause Analysis

Alex convenes a small, multidisciplinary workgroup. This is crucial—an analyst cannot solve a clinical problem alone. The team includes:

  • Alex (Pharmacy Informatics Analyst): The data expert and project facilitator.
  • Dr. Chen (Hospitalist Physician Champion): Provides the prescriber perspective and clinical credibility.
  • Sarah (Clinical Pharmacist Specialist): The medication expert who understands the nuances of anticoagulation.
  • David (EHR Application Analyst): The technical expert who knows what is possible to change in the system’s rule logic.

In their first meeting, Alex presents the data from Phase 1. Dr. Chen’s reaction is immediate: “Of course we override it. It fires all the time, even on patients who have been on both drugs for years from their outpatient doctor. It’s useless.” This qualitative insight is the key. The team uses a “5 Whys” exercise to drill down:

  1. Why are providers overriding the alert? -> “Because it’s not helpful.”
  2. Why is it not helpful? -> “Because it fires on patients who are already on both drugs and are stable.”
  3. Why does it fire on these patients? -> “Because the alert logic is simple: it just checks if there’s an active order for warfarin AND an active order for SMX/TMP.”
  4. Why is that a problem? -> “Because the real risk is when you START a patient on SMX/TMP who is already on a stable dose of warfarin. The alert doesn’t distinguish between a new order and a chronic one.”
  5. Why is that distinction important? -> “Because if it’s a new order, I need to stop and think about a plan for closer INR monitoring. If it’s a chronic combo from an outside provider, the patient has likely already been stabilized on it. The alert is just noise in that case.”

Step 2: Formulate a Hypothesis and Design the Test

The root cause analysis leads directly to a testable hypothesis:

Hypothesis: “We predict that if we modify the Warfarin-SMX/TMP alert logic to only fire when SMX/TMP is a new order for a patient with an existing order for warfarin, we will reduce the alert override rate because the alert will be more targeted, more novel, and more clinically actionable.”

Now, Alex leads the team in completing a formal PDSA Planning Worksheet to architect the first small-scale test.

Masterclass Deep Dive: PDSA Cycle 1 Planning Worksheet
PDSA Worksheet: Warfarin-SMX/TMP Alert Logic v1

1. Aim Statement: Reduce the override rate for the Warfarin-SMX/TMP alert from 96.5% to < 50% within 6 months.

2. Description of Change to be Tested: Modify the alert logic to add a “lookback” period. The alert will only fire if the SMX/TMP order is new within the last 24 hours AND the patient has an active warfarin order that has existed for > 24 hours. (A code sketch of this condition follows the worksheet.)

3. Prediction (Hypothesis): We predict this change will decrease the override rate for the pilot group to approximately 60%, as it will eliminate the majority of alerts firing on patients with chronic, concurrent therapy.

4. Plan for the Test (The “Do”):

  • Who: The new alert logic will be enabled for Dr. Chen and two other volunteer hospitalists (the “pilot group”). All other providers will remain on the old logic (the “control group”).
  • When: The test will run for two weeks, from Monday, Jan 6th to Friday, Jan 17th.
  • Where: The change will be active for the pilot group across all inpatient units.

5. Data Collection Plan:

  • Quantitative: Alex will track the daily alert fire count and override rate for both the pilot and control groups.
  • Qualitative: At the end of the two weeks, Alex and Sarah will conduct 10-minute structured interviews with the three pilot physicians to gather their feedback on the alert’s usefulness and any unintended consequences.
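The change described in item 2 is easiest to reason about as explicit code. The sketch below is illustrative Python, not vendor rule syntax; production CDS engines express this condition declaratively, but the logic is the same:

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(hours=24)

def should_fire(smx_tmp_start: datetime,
                warfarin_start: datetime,
                now: datetime) -> bool:
    """Fire only when SMX/TMP is newly ordered (within 24 hours) for a
    patient whose warfarin order has existed for more than 24 hours."""
    smx_is_new = (now - smx_tmp_start) <= LOOKBACK
    warfarin_is_established = (now - warfarin_start) > LOOKBACK
    return smx_is_new and warfarin_is_established
```

Expressing the rule this way also makes it easy to unit-test against the scenarios surfaced in the “5 Whys” exercise, such as a patient admitted on chronic concurrent therapy.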

12.5.4 Phase 3: Iteration and Refinement (Applying PDSA Do-Study-Act)

With a solid plan in place, the team moves into the execution and learning phases of the cycle. This is where the iterative power of the PDSA model shines, allowing for rapid learning and refinement based on real-world evidence.

The “Do” & “Study” Phase of Cycle 1

The two-week test is carried out. During this time, Alex monitors the data stream and checks in briefly with the pilot physicians to ensure there are no major technical issues. At the end of the two weeks, Alex pulls the data and analyzes the results.

Cycle 1 – Study Phase Results:

Quantitative Analysis: “The new logic was highly effective at reducing alert volume. The pilot group received only 12 alerts in two weeks, compared to an estimated 150 they would have received with the old logic. For the pilot group, the override rate was 33% (4 overrides out of 12 firings). For the control group during the same period, the override rate remained at 97%.”
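With only 12 firings in the pilot, a quick significance check is prudent before drawing conclusions. A sketch using Fisher’s exact test, with assumed control-group counts (the case study reports only the control group’s rate):

```python
from scipy.stats import fisher_exact

pilot = (4, 8)       # (overrides, acceptances): 4 of 12 pilot firings overridden
control = (970, 30)  # assumed counts consistent with the reported ~97% rate

_, p_value = fisher_exact([list(pilot), list(control)])
print(f"Pilot override rate:    {pilot[0] / sum(pilot):.0%}")      # 33%
print(f"Control override rate:  {control[0] / sum(control):.0%}")  # 97%
print(f"Fisher's exact p-value: {p_value:.2g}")
```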

Comparison to Prediction: “Our prediction that the override rate would decrease was correct. The actual rate of 33% came in well below our predicted 60%, indicating the change was even more effective than anticipated.”

Qualitative Synthesis: “The physician feedback was extremely positive. Dr. Chen stated, ‘It was night and day. I barely saw the alert, but when I did, I knew it was important because it was for a patient I was just starting on Bactrim. I actually stopped and put in an order for daily INRs.’ One of the other pilot physicians offered a key suggestion: ‘This is great. What would make it perfect is if the alert could also check if I’ve already ordered a new INR in the last hour. Sometimes I remember to do it right before I order the antibiotic, and it’s annoying to get an alert for something I just fixed.'”

The “Act” Phase of Cycle 1 and the Plan for Cycle 2

The results are a clear success, but the qualitative feedback provides an opportunity for even greater improvement. The team’s decision is to Adapt.

Act: The core change (distinguishing new vs. chronic therapy) is validated. The team decides to incorporate the feedback from the pilot physician into the next iteration of the alert logic.

Plan for Cycle 2:

  • Change to be Tested: “We will modify the alert logic further. In addition to the ‘new order’ logic from Cycle 1, we will also add a condition to suppress the alert if the provider has placed a new order for an INR within the past 60 minutes.” (See the sketch below.)
  • Prediction: “We predict this will maintain the low alert volume while further increasing the actionability, potentially lowering the override rate to below 25%.”
  • Test Plan: “We will expand the pilot group to include the entire Hospitalist service (25 providers) and run the test for four weeks.”
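A sketch of the Cycle 2 refinement, layered on the `should_fire` function from the Cycle 1 sketch (the 60-minute suppression window comes from the plan above; the function form remains illustrative):

```python
from datetime import datetime, timedelta
from typing import Optional

INR_SUPPRESSION_WINDOW = timedelta(minutes=60)

def should_fire_v2(smx_tmp_start: datetime,
                   warfarin_start: datetime,
                   last_inr_order: Optional[datetime],
                   now: datetime) -> bool:
    # Keep the Cycle 1 new-vs-chronic logic as the gatekeeper.
    if not should_fire(smx_tmp_start, warfarin_start, now):
        return False
    # Suppress when the provider has already ordered an INR in the past hour.
    inr_just_ordered = (last_inr_order is not None
                        and now - last_inr_order <= INR_SUPPRESSION_WINDOW)
    return not inr_just_ordered
```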

This demonstrates the iterative nature of the process. Rather than immediately scaling their first success, the team uses what it has learned to make the intervention even smarter before expanding it to a larger group. This process might continue for several cycles, with each one refining the logic or expanding the test group, systematically reducing risk before a house-wide implementation.

12.5.5 Phase 4: Sustaining the Gains (Applying Dashboarding & Benchmarking)

After two more successful PDSA cycles, the team is confident in their refined alert logic. The change is approved by the P&T and informatics governance committees and is rolled out house-wide. The project is a success, but Alex’s job is not over. The final phase is to monitor the change, ensure the gains are sustained, and communicate the results to leadership. This is where dashboarding and benchmarking become essential.

Building the Alert Optimization Dashboard

Alex creates a new dashboard in the hospital’s BI tool, designed specifically for the CMIO and the P&T Committee. It follows the blueprint from Section 12.4, telling a clear story of the project’s success.

Pharmacy CDS Optimization Dashboard

KPI Tile | Current Value | Reference
Warfarin-SMX Override % | 18% | Target: < 50%; pre-intervention: 96.5%
Alerts / 1,000 Orders | 0.8 | Pre-intervention: 12.4
Provider Acceptance % | 82% | Pre-intervention: 3.5%
Est. ADEs Averted (YTD) | ~4 | Based on literature rates

Trend chart: Warfarin-SMX/TMP override rate trend shows sustained improvement post-intervention.
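Behind the tiles, the dashboard feed is the same aggregation pattern used in Phase 1, now restricted to the post-implementation window and normalized by order volume. A sketch, reusing the `alerts` DataFrame, with an illustrative go-live date and order count:

```python
import pandas as pd

rule_name = "DDI: Warfarin - SMX/TMP"  # illustrative, as before
go_live = pd.Timestamp("2025-07-01")   # illustrative house-wide go-live date

post = alerts[(alerts["rule_name"] == rule_name) & (alerts["fired_at"] >= go_live)]
total_orders = 150_000                 # from the ordering system; illustrative figure

tiles = {
    "Warfarin-SMX Override %": f"{post['overridden'].mean():.0%}",
    "Alerts / 1000 Orders":    f"{len(post) / total_orders * 1000:.1f}",
    "Provider Acceptance %":   f"{(~post['overridden']).mean():.0%}",
}
print(tiles)
```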

Analysis of the Dashboard’s Success

This dashboard works because it follows all our design principles. A leader can look at it for five seconds and understand the story: the override rate is green, well below target. The volume of alerts has plummeted. The trend chart proves that the intervention caused a dramatic and sustained improvement. This is a visual success story.

The Final Step: Benchmarking for External Validation

The CMIO is ecstatic with the dashboard. The project is a clear win. Their final question to Alex is: “This is fantastic, but how do we know if 18% is truly a good number? Where does this put us compared to other hospitals?”

Alex uses their hospital’s subscription to a clinical informatics benchmarking service. They find that, among hospitals that have also implemented advanced logic for this specific alert, top-quartile performance for override rates falls between 15% and 25%.

Alex can now add one final, powerful component to their report for the C-suite: “The Warfarin-SMX/TMP alert optimization project has successfully reduced our override rate from 96.5% to 18%. This not only represents a significant internal improvement in patient safety but also elevates our performance on this key metric into the top quartile nationally, demonstrating best-in-class clinical decision support.”

12.5.6 Conclusion: The Analyst as an Agent of Change

This case study has taken us on the complete journey of a quality improvement project, from start to finish. We have seen how a vague, frustration-based complaint (“too many alerts”) was transformed into a data-driven, scientific, and ultimately successful initiative. This process is the core work of a pharmacy informatics analyst.

Let us recap the roles our analyst, Alex, had to play at each stage:

  • As a Data Scientist (KPIs): They translated a qualitative problem into a quantitative one, using data to identify and prioritize the most critical area for intervention.
  • As a Clinical Scientist (PDSA): They applied the scientific method to operations, formulating a testable hypothesis and using small-scale, iterative cycles to test and refine a solution while minimizing risk.
  • As a Storyteller (Dashboards): They used the principles of data visualization to create a clear, compelling narrative of the project’s success, communicating the value of their work to leadership in a way that was instantly understandable.
  • As a Strategist (Benchmarking): They placed the project’s success in a national context, proving not just that they had improved, but that they had achieved a level of performance consistent with top-tier organizations.

The journey from “fixing the alerts” to “achieving top-quartile performance” was not accomplished through a single, brilliant technical fix. It was accomplished through the rigorous and disciplined application of the quality improvement frameworks covered in this module. This is your blueprint. Whether you are tackling alert fatigue, improving turnaround times, reducing ADEs, or optimizing inventory, this systematic approach—Define, Measure, Test, Learn, and Report—will be the key to your success as a Certified Pharmacy Informatics Analyst and as a true agent of change within your organization.