CPIA Module 13, Section 3: Algorithmic Detection and Exception Reporting
MODULE 13: DIVERSION PREVENTION & SAFETY ANALYTICS

Section 3: Algorithmic Detection and Exception Reporting

Moving beyond simple reports: A deep dive into the statistical models and machine learning that power modern diversion intelligence.

Translating clinical intuition into mathematical precision to find the signal in the noise.

13.3.1 The “Why”: The Limits of Human Intuition at Scale

In your pharmacy practice, you developed a powerful, finely tuned intuition for detecting risk. You could sense when a prescription felt “off,” when a patient’s story didn’t quite add up, or when a colleague’s behavior seemed unusual. This clinical intuition is an invaluable skill, born from experience and pattern recognition. However, in the context of a modern healthcare system that generates millions of medication transactions every week, this human-scale intuition has a critical limitation: it cannot see the entire picture at once. You can scrutinize one prescription, one ADC log, or one clinician’s behavior, but you cannot possibly process the data for 5,000 employees simultaneously.

This is the fundamental “why” behind algorithmic detection. A simple, static report—like a list of all ADC overrides—is the digital equivalent of looking at a single piece of the puzzle. It’s informative, but lacks context. Did the nurse with 10 overrides work in the ED during a mass casualty incident, or on a quiet med-surg floor? Was her override total high for just one day, or has it been trending upward for months? Answering these questions requires comparing that single data point to thousands of others, a task that is impossible for a human to do efficiently or objectively. Algorithms are, quite simply, clinical intuition encoded into mathematics and executed at superhuman speed and scale.

The purpose of a diversion analytics algorithm is not to replace your professional judgment, but to augment and empower it. These complex statistical models and machine learning systems are designed to sift through the overwhelming sea of “normal” transactions to find the statistically significant deviations—the subtle footprints of diversion that are invisible to manual review. They find the faint signals in the deafening noise. The output of these systems is not a verdict of guilt; it is an “exception report,” a highly qualified, data-driven suggestion that a particular pattern of behavior warrants a closer look by a clinical expert. Your role as a pharmacy informatics analyst is to become the master of this powerful new toolkit: to understand how these algorithms work, to tune their parameters, to validate their findings, and, most importantly, to apply your pharmacist’s wisdom to interpret their results.

Retail Pharmacist Analogy: The Advanced DUR as a Diversion Algorithm

Think about the evolution of the Drug Utilization Review (DUR) systems you’ve used. The first-generation systems were like simple, static reports. They might flag a single issue: “HIGH DOSE” for a lisinopril 40mg prescription. This is useful, but basic.

Now, consider a modern, sophisticated DUR system. It doesn’t just look at one thing; it runs a complex algorithm that considers multiple factors simultaneously:

  1. It sees the high dose of lisinopril.
  2. It cross-references this with the patient’s profile and sees a new prescription for spironolactone (a drug interaction risk for hyperkalemia).
  3. It then pulls in lab data and sees the patient’s most recent serum potassium is 5.2 mEq/L and their eGFR is 25 mL/min/1.73 m².
  4. Finally, it checks the dispensing history and notes that the patient has received this combination before and was hospitalized for hyperkalemia six months ago.

The system doesn’t just flash a mild warning. It triggers a hard stop, a high-severity alert that says: “CRITICAL RISK: HYPERKALEMIA. This patient has a history of hospitalization with this combination, elevated potassium, and severe renal impairment.”

This multi-factorial, context-aware analysis is precisely how a modern diversion algorithm works. It doesn’t just flag a nurse for one high-risk behavior. It identifies the nurse who has a high waste percentage, AND who consistently works with a “waste buddy,” AND whose delta time between withdrawal and administration is longer than their peers, AND whose activity spikes at the end of their shift. Like the advanced DUR, the diversion algorithm integrates multiple, disparate data points to create a single, high-fidelity signal of risk. Your job is to interpret that signal.

13.3.2 The Language of Data: Foundational Statistical Concepts for Analysts

To wield these algorithmic tools effectively, you must first speak their language. The language of data analytics is statistics. While you don’t need to be a professional statistician, a firm grasp of several core concepts is non-negotiable. These concepts are the building blocks of every report, every risk score, and every alert the system generates. We will frame them in the context of diversion monitoring.

Measures of Central Tendency: Defining “Normal”

These are the simplest, most fundamental metrics. They tell us what the “typical” or “average” behavior looks like for a given group.

  • Mean (Average): The sum of all values divided by the number of values. Example: If the mean number of hydromorphone withdrawals per shift in the ICU is 4.5, this is our initial benchmark for “normal.”
  • Median: The middle value in a sorted list of numbers. It is less affected by extreme outliers than the mean. Example: If one nurse had an exceptionally hectic shift and made 30 withdrawals, the mean might jump to 6.2, but the median might only shift from 4 to 5, giving a more realistic picture of the “typical” nurse’s activity.
  • Mode: The value that appears most frequently. Example: If the most common amount of morphine wasted is 2mg, this is the mode. A nurse who frequently wastes 4mg would be deviating from this mode.
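All three measures are available in Python’s standard library. The sketch below uses hypothetical per-shift withdrawal counts (not the figures from the examples above) to show why the median resists a single extreme outlier far better than the mean:

```python
from statistics import mean, median, mode

# Hydromorphone withdrawals per shift for ten ICU nurses (hypothetical data)
withdrawals = [3, 4, 4, 4, 5, 5, 4, 3, 5, 4]

print(mean(withdrawals))    # 4.1 — the "typical" benchmark
print(median(withdrawals))  # 4.0
print(mode(withdrawals))    # 4

# One nurse has an extreme shift: 30 withdrawals
with_outlier = withdrawals + [30]
print(mean(with_outlier))    # jumps to ~6.45, dragged up by the outlier
print(median(with_outlier))  # stays at 4 — a more stable picture of "typical"
```

Note the design implication: benchmarks built on the median (or on trimmed means) are more robust when a unit has occasional legitimate surge days.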

Standard Deviation (SD): The Most Powerful Metric in Diversion Analytics

If you master one statistical concept, make it this one. Standard deviation is a measure of dispersion or variability. In simple terms, it tells you how spread out the data points are from the mean (average). A low SD means everyone’s behavior is very similar and clustered around the average. A high SD means there is a wide range of behaviors.

The Bell Curve and the 68-95-99.7 Rule

In many clinical datasets, user behavior follows a “normal distribution,” which looks like a bell curve. Standard deviation gives us a powerful way to understand this curve:

  • Approximately 68% of all users will fall within 1 standard deviation of the mean. This is the “normal” range.
  • Approximately 95% of all users will fall within 2 standard deviations of the mean. Users outside this range are becoming statistically unusual.
  • Approximately 99.7% of all users will fall within 3 standard deviations of the mean. A user who is more than 3 SDs from the mean is a true statistical outlier and a high-priority person of interest.

Practical Application: The diversion analytics system calculates the mean and SD for a key metric (e.g., fentanyl usage per anesthesia hour) for all anesthesiologists. Dr. Smith’s average usage is 2.8 SDs above the mean. The algorithm immediately flags this. This is not a subjective judgment; it is a mathematical fact that her practice pattern is highly unusual compared to her peers. Your job is to investigate why.
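The flagging logic just described reduces to a z-score: how many SDs a clinician sits from the peer mean. A minimal sketch follows; all names and usage figures are hypothetical, and a real platform would pull these from ADC transaction data:

```python
from statistics import mean, pstdev

# Fentanyl mcg per anesthesia hour, one value per anesthesiologist (hypothetical)
usage = {
    "Dr. Adams": 48, "Dr. Baker": 52, "Dr. Chen": 50,
    "Dr. Diaz": 47, "Dr. Evans": 53, "Dr. Smith": 78,
}

mu = mean(usage.values())
sigma = pstdev(usage.values())  # population SD of the peer group

for name, x in usage.items():
    z = (x - mu) / sigma
    if z > 2:  # beyond ~95% of peers under the 68-95-99.7 rule
        print(f"FLAG: {name} is {z:.1f} SDs above the peer mean")
```

With these numbers, only Dr. Smith is flagged (roughly +2.2 SDs). The threshold of 2 is a tunable parameter: raising it to 3 trades sensitivity for fewer, higher-confidence alerts.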

Percentiles: Ranking Users Against Their Peers

Percentiles are another way to express how a user compares to a group. A percentile rank indicates the percentage of peers that the individual scored higher than.
Example: The analytics report shows that Nurse Johnson is in the 98th percentile for the total volume of morphine wasted. This means her total waste volume is greater than that of 98% of her peers in the same peer group. This is a very clear and intuitive way to communicate risk to a non-technical manager.
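A percentile rank is simple to compute by hand: count the peers a value strictly exceeds. The sketch below uses hypothetical waste volumes:

```python
def percentile_rank(value: float, peers: list[float]) -> float:
    """Percentage of the peer group this value is strictly greater than."""
    below = sum(1 for p in peers if p < value)
    return 100.0 * below / len(peers)

# Total mg of morphine wasted per nurse over 90 days (hypothetical data)
waste = [2, 3, 3, 4, 4, 5, 5, 6, 7, 30]

print(percentile_rank(30, waste))  # 90.0 — exceeds 90% of the peer group
print(percentile_rank(2, waste))   # 0.0 — the lowest waster
```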

13.3.3 A Masterclass on Diversion Detection Algorithms

Armed with a statistical vocabulary, we can now dissect the specific algorithms that form the engine of a modern diversion analytics platform. These are not mutually exclusive; a sophisticated system uses a combination of these models to create a holistic view of user behavior and risk.

Masterclass Table: The Core Diversion Analytics Models
Each model below is described by the core question it answers, an example of a “high-risk” finding, and the analyst’s role and key considerations.

Peer-to-Peer Benchmarking
  • Core question: “How does this user’s activity compare to the activity of their colleagues in the same role, on the same unit, during the same period?”
  • Example “high-risk” finding: A nurse’s ADC override rate is 3.1 standard deviations above the mean for all other nurses on her unit over the past 90 days.
  • Analyst’s role & key considerations: Define the peer group correctly! Comparing an ICU nurse to a Med-Surg nurse is an apples-to-oranges comparison that will generate useless alerts. You must work with IT to create granular, clinically relevant peer groups.

Time-Series Analysis
  • Core question: “Is this user’s behavior changing over time? Is their risk increasing?”
  • Example “high-risk” finding: A clinician’s monthly waste percentage for hydromorphone was stable at 15% for six months, but has trended steadily upward to 45% over the last three months.
  • Analyst’s role & key considerations: Look for trends, not just single events. A sustained increase in risk is far more concerning than a single anomalous day. Is the trend accelerating? This is a critical question.

Multi-Factorial Risk Scoring
  • Core question: “Considering dozens of different behaviors at once, what is this user’s overall, composite risk score compared to everyone else?”
  • Example “high-risk” finding: A user has a composite risk score of 99.7, driven by being in the 95th percentile for waste, the 98th for overrides, and the 92nd for PRN administrations at the end of a shift.
  • Analyst’s role & key considerations: Understand how the score is weighted. Does an override for fentanyl carry more weight than one for saline? You must validate that the algorithm’s weighting scheme aligns with your clinical judgment of what constitutes the highest risk.

Co-occurrence Analysis (“Waste Buddy” Detection)
  • Core question: “Are there pairs of users whose activities are suspiciously linked?”
  • Example “high-risk” finding: Nurse A and Nurse B witness waste for each other 85% of the time, while the average for other pairs on the unit is less than 10%. Both nurses also have high waste percentages.
  • Analyst’s role & key considerations: This can be a signal of collusion or a “drive-by” witness situation. The next step is to interview both parties separately and review security camera footage if available.

Machine Learning: Outlier Detection
  • Core question: “Forgetting all pre-defined rules, what behavior is simply ‘weird’ and doesn’t fit the pattern of normal activity for the entire hospital?”
  • Example “high-risk” finding: The algorithm flags a user who consistently withdraws a specific combination of drugs (e.g., morphine, promethazine, and ondansetron) together, a pattern not seen among thousands of other users.
  • Analyst’s role & key considerations: This is where AI shines. It can find patterns you didn’t even know to look for. Your job is to take the machine’s finding of “weird” and apply your clinical expertise to determine if it’s “weird and dangerous” or “weird but benign.”
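Of these models, co-occurrence (“waste buddy”) detection is the most mechanically simple: count how often each witness appears on each waster’s transactions. The sketch below uses hypothetical waste events and arbitrary thresholds (a 50% witness rate and a minimum of three wastes); a production system would tune both against real unit-level baselines:

```python
from collections import Counter

# Each tuple is (wasting_nurse, witnessing_nurse) for one waste event (hypothetical)
events = [
    ("A", "B"), ("A", "B"), ("A", "B"), ("A", "C"),
    ("B", "A"), ("B", "A"), ("B", "D"),
    ("C", "D"), ("D", "E"),
]

pair_counts = Counter(events)                  # how often each (waster, witness) pair occurs
waster_totals = Counter(w for w, _ in events)  # total wastes per nurse

for (waster, witness), n in pair_counts.items():
    rate = n / waster_totals[waster]
    # Flag concentrated witnessing, but only with enough events to be meaningful
    if rate >= 0.5 and waster_totals[waster] >= 3:
        print(f"{witness} witnessed {rate:.0%} of {waster}'s wastes")
```

With this data, the reciprocal A/B pairing is flagged in both directions, which is exactly the pattern the table row describes.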
Deep Dive: How a Composite Risk Score is Calculated

A composite risk score is not magic; it’s just math. Imagine a simplified scoring system where we look at three metrics for a nurse, comparing them to their peers:

  • Metric 1: Waste Percentage. Let’s say we assign 10 points for every standard deviation above the mean. Nurse Jane is at +2 SDs. (Score = 2 * 10 = 20 points).
  • Metric 2: ADC Overrides. This is higher risk, so we assign 20 points per SD. Nurse Jane is at +1.5 SDs. (Score = 1.5 * 20 = 30 points).
  • Metric 3: End-of-Shift PRNs. This is also a major red flag, so we assign 25 points per SD. Nurse Jane is at +1 SD. (Score = 1 * 25 = 25 points).

Total Composite Score for Nurse Jane = 20 + 30 + 25 = 75.

The system performs this calculation for every user, every day. It then ranks all users by this score. The individuals at the very top of that ranked list are the ones who appear in your daily exception report. Your job, as the analyst, is to work with your vendor or IT team to review and adjust the “weights” (the points assigned to each metric) to ensure they accurately reflect your institution’s specific risks and priorities.
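The worked example above can be expressed as a small scoring function. The metric names and point weights below simply mirror the hypothetical numbers in the text; a vendor platform would expose equivalent weights as configurable parameters:

```python
# Points awarded per standard deviation above the peer mean (tunable weights)
WEIGHTS = {
    "waste_pct": 10,
    "adc_overrides": 20,
    "end_of_shift_prns": 25,
}

def composite_score(z_scores: dict[str, float]) -> float:
    # Only above-average behavior adds risk; below-average metrics contribute zero
    return sum(WEIGHTS[m] * max(z, 0.0) for m, z in z_scores.items())

# Nurse Jane's z-scores from the example: +2, +1.5, and +1 SDs
jane = {"waste_pct": 2.0, "adc_overrides": 1.5, "end_of_shift_prns": 1.0}
print(composite_score(jane))  # 20 + 30 + 25 = 75.0
```

Reweighting is then a one-line configuration change, which is why validating the weights against your institution’s risk priorities is an analyst responsibility rather than a vendor default.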

13.3.4 From Signal to Action: Triaging and Interpreting Exception Reports

The daily output of your analytics platform is the exception report. This is your primary worklist. It is not a list of guilty parties; it is a prioritized list of statistical anomalies that require expert human review. The single most common failure point for a new diversion program is “alert fatigue”—generating so many low-quality alerts that the analyst becomes overwhelmed and starts ignoring them. Therefore, a formal, structured triage process is essential to focus your valuable time on the signals that matter most.

The Triage Funnel: A Structured Approach to Alert Review

Think of your daily review as a funnel. You start with a broad list of automated alerts and systematically apply your clinical judgment to filter them down to the small number of cases that require a full-blown investigation.

The Exception Report Triage Workflow
LEVEL 1
Initial Automated Report

The analytics system generates a daily list of all users who exceeded a predefined statistical threshold on any monitored metric (e.g., top 5% of users by composite risk score, any user >3 SDs above the mean on waste).

Action: Automated Generation. (e.g., 50 alerts/day)

LEVEL 2
Analyst’s Preliminary Review

You, the analyst, spend 1-2 hours reviewing the Level 1 list. You apply your clinical context to filter out the “noise.” You quickly dismiss alerts that have obvious, legitimate explanations.

  • “This nurse’s usage is high, but her patient acuity is off the charts. Dismiss.”
  • “This anesthesiologist’s fentanyl use looks high, but he only does cardiac cases. Benchmark against other cardiac anesthesiologists. Dismiss.”
  • “This alert is for a single override during a code blue. Legitimate. Dismiss.”

Action: Apply Clinical Context. (e.g., List reduced to 5-7 alerts/day)

LEVEL 3
Focused Chart Audit

For the remaining alerts, you conduct a focused chart audit. You review the MARs, provider orders, and nursing notes for a sample of the high-risk transactions. You are looking for a plausible clinical narrative that explains the data. Does the documentation support the high number of PRN withdrawals? Are the waste amounts appropriate for the orders?

Action: Deep-Dive Investigation. (e.g., 1-2 alerts/day proceed to this stage)

LEVEL 4
Escalation to Committee

If the chart audit does not resolve the anomaly and, in fact, strengthens the suspicion of diversion (e.g., you find documented waste of 4mg morphine for a 2mg order), you escalate. You compile your findings—the objective data from the analytics platform and the results of your chart review—into a formal case file and present it to the hospital’s multidisciplinary Diversion Committee for further action.

Action: Formal Escalation. (e.g., 1-2 cases/week)