MODULE 7: Building & Maintaining Clinical Decision Support (CDS) Rules

Section 7.4: Performance Monitoring and Audit Logs

A guide to post-implementation surveillance. We’ll cover how to analyze audit logs to measure your rule’s impact: How often does it fire? How often do clinicians accept the guidance? Is it causing alert fatigue?

From “Go-Live” to Guideline: The Science of Post-Implementation Surveillance.

7.4.1 The “Why”: The Pharmacist’s Duty to Monitor

As a pharmacist, your responsibility to the patient does not end when you dispense a medication. In many ways, it has just begun. The period after dispensing is when you perform one of your most critical functions: therapeutic monitoring. You follow up with the patient to ask: Is the medication working? Are you experiencing any side effects? Are you taking it correctly? You check lab values to assess efficacy and toxicity. This continuous loop of action, assessment, and adjustment is the hallmark of excellent pharmaceutical care. It ensures that the therapy which was appropriate at the time of dispensing remains safe and effective over its entire lifecycle.

Activating a new Clinical Decision Support (CDS) rule—the “go-live”—is the informatics equivalent of dispensing that first dose. And just like with a medication, your professional duty does not end there. In fact, a new and equally critical responsibility begins: performance monitoring. You must now ask of your rule the same questions you would ask of a drug therapy: Is it working as intended? Is it causing any unintended side effects (like alert fatigue)? Are clinicians “adherent” to its recommendations? The practice of analyzing CDS audit logs is the direct translation of your therapeutic monitoring skills into the informatics domain.

Ignoring this post-implementation surveillance is a form of professional negligence. A CDS rule that seemed perfect in the controlled environment of testing can have unforeseen and dangerous consequences in the chaotic reality of live clinical practice. It can under-fire, missing opportunities to prevent harm. It can over-fire, drowning clinicians in noise and eroding their trust in all alerts. It can be misinterpreted, leading to confusion. Without a robust monitoring program, your digital safety net can quickly decay into a digital nuisance or, worse, a source of iatrogenic error. This section will equip you with the data analysis skills to become a vigilant digital pharmacist, one who not only builds rules but also meticulously manages their long-term therapeutic outcomes within the EHR.

Retail Pharmacist Analogy: The Therapeutic Interchange Program Audit

Imagine your pharmacy chain implements a new, system-wide therapeutic interchange program that switches patients from a more expensive ARB, olmesartan, to losartan to save costs. The logic is simple: “IF a prescription is for olmesartan, THEN prompt the pharmacist to switch to the preferred generic, losartan.” You helped design this program (the CDS rule).

A month after go-live, your Director of Pharmacy asks, “How is the new ARB program going?” You wouldn’t just say, “It seems fine.” You would conduct an audit. You would go into the pharmacy system’s transaction history (the audit log) and start pulling data to answer specific questions:

  • How often did the prompt appear? (Fire Rate) You find it fired 500 times.
  • How often did we actually switch the patient to losartan? (Acceptance Rate) You find that in 450 of those cases, the pharmacist accepted the switch. That’s a 90% acceptance rate.
  • What happened in the other 50 cases? (Override Analysis) You dig into the dispense notes for those 50 prescriptions. You find that 40 had a “DAW-1” code from the prescriber. In 5 cases, the patient’s insurance actually had a lower copay for the brand. In the final 5, the pharmacist noted “patient refused.”

You have just performed a complete CDS performance audit. You used the raw data from the audit log to calculate key performance metrics (KPIs). You analyzed the reasons for non-adherence and identified opportunities for improvement (e.g., “Perhaps our rule should automatically suppress the prompt if the Rx is coded DAW-1.”). This process of querying data, calculating metrics, and deriving actionable clinical insights is the exact workflow you will use to monitor the performance of every CDS rule you build in the hospital EHR.

7.4.2 The Source of Truth: Anatomy of a CDS Audit Log

Every time a CDS rule is evaluated by the EHR, a digital footprint is created. This collection of footprints is the audit log. It is a vast, highly structured database that records every interaction—or potential interaction—between your rule and the end-users. Learning to read and interpret this log is the foundational skill for all performance monitoring. While the exact fields may vary by EHR vendor, all robust audit logs capture the same core concepts.

Each row in an audit log typically represents a single event: a rule evaluation, a rule firing, or a user’s interaction with an alert. It is a treasure trove of raw data waiting to be refined into clinical intelligence.

Masterclass Table: Core Fields in a CDS Audit Log
| Field Name | Data Type | Example Value | Purpose & Clinical Significance |
| --- | --- | --- | --- |
| Event_ID | Unique identifier | ‘A-77B3-4C1F-9A0E’ | A unique key for each logged event, allowing you to trace a single alert’s entire lifecycle from evaluation to final user action. |
| Timestamp_Evaluation | Date/Time | ‘2025-10-18 08:32:15.120’ | The exact moment the rule’s trigger condition was met and its logic was evaluated by the system. |
| Rule_ID | String | ‘[PHA] Renal Dosing – Levofloxacin’ | The unique name of the rule that was evaluated. This is why a strict naming convention is critical for later analysis. |
| Patient_ID | String | ‘MRN1002345’ | The medical record number of the patient whose data was being evaluated. This allows you to link alert data to clinical outcomes. |
| User_ID | String | ‘SMITHJ’ (John Smith, MD) | The ID of the clinician whose action triggered the rule (e.g., the physician signing the order). |
| Evaluation_Outcome | Boolean | ‘TRUE’ | Did the rule’s logic evaluate to true or false? If ‘FALSE’, the event often ends here (the rule remained silent). If ‘TRUE’, it means the rule “fired” and an action was taken. |
| Timestamp_Fired | Date/Time | ‘2025-10-18 08:32:15.340’ | The moment the alert was actually presented to the user on their screen. The difference between this and `Timestamp_Evaluation` can indicate system latency. |
| User_Action | String / Code | ‘Accept’ | The action the user took in response to the alert. This is a critical field. Values can include ‘Accept’, ‘Override’, ‘Cancel’, ‘Acknowledge’. |
| Timestamp_Action | Date/Time | ‘2025-10-18 08:32:19.880’ | The moment the user interacted with the alert. |
| Dwell_Time_ms | Integer | 4540 | The difference in milliseconds between when the alert was fired and when the user acted on it. A direct measure of how long the user spent reading and considering the alert. |
| Override_Reason | String / Code | ‘Patient has AKI, dosing per Pharmacy consult’ | If the user’s action was ‘Override’, this field captures the reason they provided. This is arguably the most valuable data point for understanding why your CDS is not being followed. |
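
To make these fields concrete, here is a minimal sketch of a single audit log event modeled as a Python dataclass. The field names mirror the table above, and deriving `dwell_time_ms` from the two timestamps is an assumption; real vendor schemas will differ in naming and structure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AuditEvent:
    """One logged CDS event. Field names mirror the table above;
    real vendor schemas differ in naming and structure."""
    event_id: str
    timestamp_evaluation: datetime
    rule_id: str
    patient_id: str
    user_id: str
    evaluation_outcome: bool                     # True means the rule "fired"
    timestamp_fired: Optional[datetime] = None   # None if the rule stayed silent
    user_action: Optional[str] = None            # 'Accept', 'Override', 'Cancel', ...
    timestamp_action: Optional[datetime] = None
    override_reason: Optional[str] = None

    @property
    def dwell_time_ms(self) -> Optional[int]:
        """Milliseconds between alert display and user action."""
        if self.timestamp_fired and self.timestamp_action:
            delta = self.timestamp_action - self.timestamp_fired
            return int(delta.total_seconds() * 1000)
        return None
```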

7.4.3 Key Performance Indicators (KPIs) for CDS Success

The raw audit log is overwhelming. To make sense of it, you must aggregate this transactional data into meaningful metrics. These Key Performance Indicators (KPIs) are the vital signs of your CDS rule’s health. You will monitor these KPIs over time to diagnose problems, measure the impact of changes, and report on the value of your work to hospital leadership.

KPI 1: The Fire Rate

This is the most basic measure of your rule’s activity. It answers the question: “How often is my rule’s clinical scenario encountered?”

$$ \text{Fire Rate} = \frac{\text{Total Times Rule Fired}}{\text{Total Opportunities for Firing}} $$

The “opportunity” (the denominator) can be defined in different ways depending on the rule. For a medication-specific rule, it might be the total number of orders for that medication. For a hospital-acquired pneumonia rule, it might be the total number of patients admitted with a pneumonia diagnosis. A high fire rate is not necessarily bad, nor is a low one good. It is simply a baseline measure of activity that helps diagnose other issues. For instance, if a rule for a rare drug-drug interaction fires 100 times a day, your logic is almost certainly too broad and needs to be refined.
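
As a minimal sketch, assuming records shaped like the `AuditEvent` above, the fire rate reduces to a filtered count over a caller-supplied denominator:

```python
def fire_rate(events, total_opportunities):
    """KPI 1: fraction of opportunities on which the rule fired.
    `events` holds the audit records for a single Rule_ID; the caller
    defines the denominator (e.g., total levofloxacin orders)."""
    fired = sum(1 for e in events if e.evaluation_outcome)
    return fired / total_opportunities if total_opportunities else 0.0
```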

KPI 2: The Acceptance Rate

This is the single most important measure of your rule’s clinical utility and effectiveness. It answers the question: “Do clinicians agree with and follow my recommendation?”

$$ \text{Acceptance Rate (\%)} = \frac{\text{Number of Times Guidance Was Accepted}}{\text{Total Times Rule Fired}} \times 100 $$

A high acceptance rate (typically > 80-90% for well-designed rules) is the ultimate sign of success. It indicates that your rule is identifying clinically relevant scenarios, the recommendation is appropriate, and the alert is designed in a way that is easy for clinicians to act upon. A persistently low acceptance rate is a five-alarm fire; it means you are creating work and interruption for clinicians with little to no benefit, actively contributing to alert fatigue, and eroding trust in the entire CDS system. Investigating the cause of a low acceptance rate is a top priority for any informatics pharmacist.
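
The acceptance rate follows the same pattern as the fire rate. The sketch below also encodes the 80% floor from this section as a screening threshold; that floor is a rule of thumb for well-designed rules, not a universal standard.

```python
def acceptance_rate(events):
    """KPI 2: percent of fired alerts whose guidance was accepted."""
    fired = [e for e in events if e.evaluation_outcome]
    accepted = sum(1 for e in fired if e.user_action == "Accept")
    return 100.0 * accepted / len(fired) if fired else 0.0

def needs_investigation(events, floor_pct=80.0):
    """Flag a rule whose acceptance rate falls below the floor."""
    return acceptance_rate(events) < floor_pct
```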

KPI 3: Override Rate & Reason Analysis

This is the inverse of the acceptance rate, but it requires a deeper qualitative analysis. It answers the question: “When clinicians disagree with my rule, why?”

$$ \text{Override Rate (\%)} = \frac{\text{Number of Times Guidance Was Overridden}}{\text{Total Times Rule Fired}} \times 100 $$

Simply knowing the override rate isn’t enough. The gold is in the override reasons. This is where clinicians tell you, in their own words, the specific clinical context where your rule’s logic falls short. A rigorous analysis involves categorizing these free-text or pre-selected reasons into themes. Do 10% of overrides for our levofloxacin alert mention “unstable renal function”? If so, that’s a powerful signal that we need to build a specific exclusion for AKI into our logic. Are users frequently selecting “Other” and typing “Patient preference”? That might indicate a need for patient counseling resources linked from the alert. This analysis is your primary tool for the iterative refinement of your CDS rules.
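
As a sketch of that thematic analysis, the keyword-to-theme buckets below are illustrative assumptions; in practice you would derive them from your own review of the free-text reasons:

```python
from collections import Counter

# Illustrative keyword-to-theme map; real themes emerge from your own
# chart review of the free-text override reasons.
THEMES = {
    "aki": "AKI / Unstable Renal Fxn",
    "unstable renal": "AKI / Unstable Renal Fxn",
    "consult": "Consult Recommendation",
    "benefit": "Benefit > Risk",
}

def categorize(reason):
    """Bucket one free-text override reason into a theme."""
    text = reason.lower()
    for keyword, theme in THEMES.items():
        if keyword in text:
            return theme
    return "Other"

def override_themes(events):
    """KPI 3 support: tally override reasons by theme for one rule."""
    reasons = (e.override_reason for e in events
               if e.user_action == "Override" and e.override_reason)
    return Counter(categorize(r) for r in reasons)
```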

The Power of Structured Override Reasons

While allowing free-text override reasons is flexible, it makes analysis difficult. A best practice is to provide users with a pre-defined list of common, clinically valid reasons for overriding the alert, along with a final “Other (comment required)” option. For our levofloxacin alert, the options might be:

  • Dosing per ID/Pharmacy/Nephrology consult
  • Unstable renal function (AKI)
  • Benefit of higher dose outweighs risk
  • Patient receiving CRRT/Dialysis
  • Other (comment required)

This gives you structured, immediately analyzable data, making it far easier to spot trends and identify opportunities for improvement.
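
In a build, that picklist might live as a simple code-to-display mapping. The sketch below is hypothetical; real EHRs manage picklists through vendor tooling rather than Python constants.

```python
# Hypothetical code-to-display mapping for the levofloxacin alert's
# structured override picklist; codes are illustrative, not vendor syntax.
OVERRIDE_REASONS = {
    "CONSULT":  "Dosing per ID/Pharmacy/Nephrology consult",
    "AKI":      "Unstable renal function (AKI)",
    "BENEFIT":  "Benefit of higher dose outweighs risk",
    "DIALYSIS": "Patient receiving CRRT/Dialysis",
    "OTHER":    "Other (comment required)",
}
```

Because every override now lands in one of five known buckets, trend analysis reduces to a simple tally rather than a manual read of free text.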

7.4.4 Masterclass: From Raw Data to Actionable Report

Let’s walk through a realistic scenario. One month after our `[PHA] Renal Dosing – Levofloxacin` rule goes live, we are tasked with presenting its initial performance data to the P&T committee. Our first step is to query the CDS audit log for all events related to our `Rule_ID`.

Step 1: The Raw Data Pull

We receive a raw data export from the analytics team. It’s a massive spreadsheet with thousands of rows. Here is a small, representative sample:

| Event_ID | Timestamp_Fired | User_ID | User_Action | Dwell_Time_ms | Override_Reason |
| --- | --- | --- | --- | --- | --- |
| A-77B3 | 2025-11-01 09:14:22 | JONESM | Accept | 3102 | N/A |
| A-77B4 | 2025-11-01 11:56:03 | DAVISP | Override | 8455 | Patient has AKI, following Nephrology recs |
| A-77B5 | 2025-11-01 15:21:45 | CHENL | Accept | 4210 | N/A |
| A-77B6 | 2025-11-02 02:05:11 | MILLERB | Override | 6730 | Benefit of high dose for severe pseudomonal PNA outweighs risk |
| A-77B7 | 2025-11-02 08:30:59 | SMITHJ | Accept | 2980 | N/A |
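
If the export arrives as a CSV, a few lines of Python can load it for aggregation. The file name and column headers below are assumptions chosen to match the sample rows above:

```python
import csv

def load_events(path="levofloxacin_audit_export.csv"):
    """Read the analytics team's raw export; column names are assumed
    to match the sample table above."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

events = load_events()
overrides = [row for row in events if row["User_Action"] == "Override"]
```
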
Step 2: Aggregate the Data & Calculate KPIs

Using spreadsheet functions or a data analysis tool, we aggregate the raw data. We find the following totals for the first month:

  • Total Orders for Levofloxacin (Denominator/Opportunity): 1,250
  • Total Times Rule Fired (`Evaluation_Outcome` = TRUE): 150
  • Total ‘Accept’ Actions: 110
  • Total ‘Override’ Actions: 40

Now, we calculate our primary KPIs:

  • Fire Rate: (150 Fired / 1,250 Orders) = 12%
  • Acceptance Rate: (110 Accepted / 150 Fired) = 73.3%
  • Override Rate: (40 Overridden / 150 Fired) = 26.7%
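
As a quick sanity check, here is the same arithmetic in Python, using the totals above:

```python
# Totals from the first month's aggregation (Step 2).
total_orders = 1250   # all levofloxacin orders: the "opportunity"
fired        = 150    # events with Evaluation_Outcome = TRUE
accepted     = 110
overridden   = 40

print(f"Fire rate:       {100 * fired / total_orders:.1f}%")  # 12.0%
print(f"Acceptance rate: {100 * accepted / fired:.1f}%")      # 73.3%
print(f"Override rate:   {100 * overridden / fired:.1f}%")    # 26.7%
```
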
Step 3: Analyze the Qualitative Data (Override Reasons)

We now perform a thematic analysis of the 40 `Override_Reason` free-text entries.

Override Reason Analysis (n=40)

  • AKI / Unstable Renal Fxn (18)
  • Consult Recommendation (10)
  • Benefit > Risk (8)
  • Other (4)
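
To convert these counts into the percentages you will quote to the committee, a short tally over the numbers above is enough:

```python
# Theme tallies from the breakdown above (n = 40 overrides).
theme_counts = {
    "AKI / Unstable Renal Fxn": 18,
    "Consult Recommendation": 10,
    "Benefit > Risk": 8,
    "Other": 4,
}
total = sum(theme_counts.values())
for theme, count in theme_counts.items():
    print(f"{theme}: {count} ({100 * count / total:.0f}%)")  # AKI theme = 45%
```
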
Step 4: Synthesize and Present the Story

The data tells a story. Your job is to narrate it clearly and concisely for the P&T committee. You don’t just present numbers; you present insights and recommendations.

The P&T Committee Reporting Script

You: “Good morning. I’m presenting the 30-day performance review of the new levofloxacin renal dosing alert. Overall, the rule is performing moderately well but has clear opportunities for improvement.”

“The rule fired on 12% of all levofloxacin orders, which is in line with our expectations for the prevalence of renal impairment in our patient population. The overall acceptance rate for the guidance is 73%. While this is a good start and represents 110 potentially averted dosing errors, our goal is to get this above 85%.”

“The key insights come from our analysis of the 40 overrides. The data is very clear. The single largest reason for overrides—accounting for 45% of them—is in patients with Acute Kidney Injury. Clinicians are correctly noting that the Cockcroft-Gault calculation is not valid in these patients and are deferring to manual dosing or pharmacy consults. This is clinically appropriate behavior, but it’s causing unnecessary alert noise.”

“Therefore, the informatics pharmacy team recommends an immediate enhancement to the rule logic: we will add an exclusion criterion to suppress this alert for any patient whose serum creatinine has risen by more than 0.3 mg/dL in the preceding 48 hours. By silencing the alert in these predictable, clinically valid override scenarios, we project we can cut the override volume by nearly half and significantly increase the acceptance rate for the alerts that do fire. We will bring the 30-day post-enhancement data back to this committee for review.”
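
The proposed exclusion translates naturally into code. Below is a minimal sketch of the suppression check, assuming serum creatinine results arrive as (timestamp, value) pairs; production rule logic would be built in the EHR vendor's rule editor rather than in Python.

```python
from datetime import datetime, timedelta

def suppress_for_aki(scr_results, now=None, delta_mg_dl=0.3, window_hours=48):
    """Return True if the alert should be suppressed because serum creatinine
    rose by more than `delta_mg_dl` within the trailing window (the proposed
    AKI exclusion). `scr_results` is a list of (timestamp, value_mg_dl) tuples."""
    now = now or datetime.now()
    recent = sorted((t, v) for (t, v) in scr_results
                    if now - t <= timedelta(hours=window_hours))
    if len(recent) < 2:
        return False  # too little data in the window to detect a rise
    values = [v for _, v in recent]
    # Compare the most recent value against the lowest earlier value.
    return values[-1] - min(values[:-1]) > delta_mg_dl
```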

This data-driven approach transforms you from a simple rule-builder into a strategic, evidence-based manager of the hospital’s clinical logic. You have used the audit log to identify a problem, diagnose its root cause, and propose a specific, targeted solution. This is the continuous quality improvement cycle that defines a high-performing informatics team.