CPIA Module 17, Section 5: Measuring Training Effectiveness and Adoption
MODULE 17: TRAINING & END-USER SUPPORT

Section 17.5: Measuring Training Effectiveness and Adoption

Move from anecdotal feedback to data-driven proof of value. Learn how to use system adoption metrics, competency assessments, and user surveys to measure the true impact of your training and prove the ROI of your educational programs.


From “Check-the-Box” to Clinical Impact: The Science of Proving Your Value.

17.5.1 The “Why”: The Most Expensive Training is the One That Doesn’t Work

In the world of hospital operations, every expenditure, every program, and every initiative is eventually held up to the light and asked a simple, brutal question: “What value did this provide?” For too long, corporate and clinical training has been allowed to exist in a protected bubble, judged not on its results, but on its activity. We proudly report metrics like “number of users trained” or “hours of instruction delivered.” But these are vanity metrics. They measure effort, not effect. They are the equivalent of a pharmacy reporting “number of prescriptions filled” without any regard to the clinical outcomes of the patients who received them. It is a hollow, and ultimately indefensible, definition of success.

The most expensive training program is not the one with the highest budget; it is the one that fails to change behavior and improve performance. A failed training program is a black hole of resources. It consumes the salary dollars of the trainers, it pulls clinicians away from patient care, and, most insidiously, it creates a false sense of security that a problem has been solved when, in fact, it has not. When users leave a training session and immediately revert to their old, inefficient, or unsafe workflows, the organization has not only wasted money, it has squandered a critical opportunity for improvement and potentially endangered patients.

As a Pharmacy Informatics Analyst, you must adopt the mindset of a clinical scientist and a business analyst. You must move beyond the comfortable world of anecdotal feedback—”I think the class went well”—and into the rigorous, data-driven world of evidence-based evaluation. The process of measuring training effectiveness and system adoption is not an optional, “nice-to-have” appendix to your training plan. It is a strategic imperative. It is the mechanism by which you:

  • Justify Your Existence: In an environment of tight budgets, the ability to demonstrate a clear return on investment (ROI) for your educational programs is the key to securing future funding and resources.
  • Drive Continuous Improvement: Measurement exposes what’s working and what isn’t. Data might reveal that your training is effective for pharmacists but failing for technicians, prompting a targeted redesign of their curriculum.
  • Identify At-Risk Users: Adoption metrics can pinpoint individuals or entire departments that are struggling to use the system correctly, allowing you to provide proactive, targeted support before errors occur.
  • Connect Your Work to Patient Care: This is the ultimate “why.” Rigorous measurement allows you to draw a credible line between the training session you led on the new sepsis order set and a tangible improvement in the hospital’s sepsis bundle compliance rate. It proves that your work is not just about technology; it is about saving lives.

This section is your guide to becoming a data-driven educational leader. We will provide you with a powerful, multi-level framework for evaluating your programs. You will learn how to gather and interpret data from user surveys, competency assessments, and, most importantly, from the EHR itself. You will learn to speak the language of metrics, to build a compelling case for your value, and to transform your training department from a perceived cost center into a proven engine of clinical and operational excellence.

Retail Pharmacist Analogy: Evidence-Based MTM vs. “Hope as a Strategy”

As a modern pharmacist, you are an expert in evidence-based practice. Your entire clinical decision-making process is built on a foundation of measurable data. Imagine two different approaches to managing a patient with uncontrolled Type 2 Diabetes.

Approach A (Anecdotal & Ineffective): You have a brief, five-minute conversation with the patient, Mr. Jones, during which you tell him to “eat better and exercise more.” You don’t check his blood glucose logs. You don’t ask about specific barriers. You don’t schedule a follow-up. When your manager asks how the MTM program is going, you say, “Great! I talked to Mr. Jones, and he seemed to get it.” Your metric of success is “a conversation was had.” This is hope as a strategy. It’s a low-value activity with no measurable impact on the patient’s health.

Approach B (Data-Driven & Effective): You conduct a comprehensive medication therapy management (MTM) session. This is a structured, evidence-based intervention.

  • Level 1 (Reaction): At the end of the session, you ask Mr. Jones, “On a scale of 1-5, how confident do you feel about managing your diabetes now?” You gauge his immediate reaction and confidence.
  • Level 2 (Learning): You use the “teach-back” method. “To make sure I was clear, can you show me how you’re going to use your new glucometer?” You directly assess his knowledge and skill.
  • Level 3 (Behavior): You schedule a follow-up call in one week, not to re-teach, but to review his blood glucose logs. You are measuring his actual behavior—is he testing as you discussed? Are the numbers improving?
  • Level 4 (Results): Three months later, you check his lab results. His HbA1c has decreased from 9.8% to 7.5%. You have just demonstrated a tangible, measurable clinical outcome. You have the data to prove that your intervention created real value, preventing the long-term complications of diabetes and reducing costs for the healthcare system.

Measuring the effectiveness of your informatics training requires the exact same discipline and rigor. A “smile sheet” (Level 1) is a good start, but it’s not enough. You must follow through to assess knowledge with competency checks (Level 2), measure actual on-the-job behavior with system adoption metrics (Level 3), and ultimately, connect that behavior to meaningful organizational results like improved patient safety or increased efficiency (Level 4). You are already an expert in evidence-based practice for patients; now you will become an expert in evidence-based education for your colleagues.

17.5.2 The Kirkpatrick Model: A Four-Level Framework for Proving Value

The most respected and widely used framework for evaluating the effectiveness of training is the Kirkpatrick Model. Developed by Dr. Don Kirkpatrick in the 1950s, its four-level structure provides a powerful and logical roadmap for moving from simple satisfaction scores to demonstrating true organizational impact. Think of it as a pyramid: each level builds upon the one below it, and as you move up the pyramid, the data becomes more valuable, but also more difficult to collect.

As an informatics analyst, adopting this framework will bring discipline and structure to your evaluation strategy. It will provide you with a common language to discuss the value of training with leadership and will ensure that you are looking at the complete picture of your program’s impact.

The Kirkpatrick Model of Training Evaluation

Level 4: Results
To what degree did targeted organizational outcomes occur as a result of the training and the subsequent change in behavior?
  • Key Question: Did we impact the mission?
  • How to Measure: Return on Investment (ROI), medication error rates, length of stay, bundle compliance rates, charge capture.

Level 3: Behavior
To what degree are participants applying what they learned back on the job?
  • Key Question: Are they doing it?
  • How to Measure: System adoption metrics from EHR reports, direct observation, supervisor feedback, quality audits.

Level 2: Learning
To what degree did participants acquire the intended knowledge, skills, and confidence?
  • Key Question: Did they learn it?
  • How to Measure: Post-training quizzes, skills-based competency assessments, simulation performance, teach-back method.

Level 1: Reaction
To what degree did participants find the training favorable, engaging, and relevant to their jobs?
  • Key Question: Did they like it?
  • How to Measure: Post-session feedback surveys (“smile sheets”), informal comments, engagement levels during class.

17.5.3 Deep Dive: Measuring Reaction & Learning (Levels 1 & 2)

Levels 1 and 2 are the foundational layers of your evaluation. They are the easiest to measure and provide immediate feedback on the quality of your training event and curriculum. While they don’t prove on-the-job impact, they are essential leading indicators. If users hate the training (Level 1) and don’t learn anything (Level 2), it is almost impossible for behavior to change (Level 3) or for results to be achieved (Level 4).

Mastering Level 1: Beyond the “Smile Sheet”

Level 1 evaluation, the “smile sheet,” is often dismissed as a superficial popularity contest. And if designed poorly, it is. A survey that only asks, “Did you enjoy this class?” provides no actionable data. However, a well-designed reaction survey can provide valuable insights into the learner’s experience, engagement, and perceived relevance of the content. The goal is to measure the right things.

Designing an Actionable Level 1 Survey

Instead of vague, subjective questions, focus on specific, measurable aspects of the training experience. Use a 5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree) for consistent data that can be averaged and trended over time. A short scoring sketch follows the question list below.

  • Relevance:
    • “The content of this training was directly relevant to my role and daily work.”
  • Clarity of Objectives:
    • “I understood the learning objectives for this session from the beginning.”
  • Instructor Effectiveness:
    • “The instructor was knowledgeable about the subject matter.”
    • “The instructor presented the information in a clear and understandable way.”
  • Materials & Environment:
    • “The training materials (e.g., handouts, QRG) were helpful and easy to follow.”
    • “The training environment was conducive to learning.”
  • Confidence & Intent (A Bridge to Level 3):
    • “As a result of this training, I feel confident in my ability to perform these workflows.”
    • “I intend to use the skills and knowledge from this training in my daily work.”
  • Qualitative Feedback (The Most Valuable Part): Always include open-ended questions to capture insights you didn’t think to ask about.
    • “What was the most valuable part of this training session?”
    • “What is one thing you would change to improve this training session?”
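
Because every item above shares the same 1-to-5 scale, scoring is straightforward to automate. The sketch below is a minimal illustration, assuming a flat export of (session, item, rating) rows; the item names and sample ratings are hypothetical, and your survey tool's export format will differ.

```python
# Minimal sketch: averaging 5-point Likert responses per survey item and per session.
# The item names and response data below are hypothetical examples.
from statistics import mean

# Each row: (session_date, item, rating 1-5)
responses = [
    ("2025-03-01", "relevance", 4), ("2025-03-01", "relevance", 5),
    ("2025-03-01", "instructor_clarity", 3), ("2025-03-01", "confidence", 4),
    ("2025-04-01", "relevance", 5), ("2025-04-01", "instructor_clarity", 4),
    ("2025-04-01", "confidence", 5),
]

def item_averages(rows):
    """Return {item: average rating} so each question can be scored on its own."""
    buckets = {}
    for _, item, rating in rows:
        buckets.setdefault(item, []).append(rating)
    return {item: round(mean(ratings), 2) for item, ratings in buckets.items()}

def session_averages(rows):
    """Return {session_date: {item: average}} so scores can be trended over time."""
    by_session = {}
    for session, item, rating in rows:
        by_session.setdefault(session, []).append((session, item, rating))
    return {session: item_averages(r) for session, r in by_session.items()}

print(item_averages(responses))
print(session_averages(responses))
```

In practice, you would load the same fields from your survey platform's export (often a CSV) and watch the per-item averages across successive sessions, reading them alongside the open-ended comments.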

Mastering Level 2: Assessing Knowledge and Skill Transfer

Level 2 is where we ask, “Did they get it?” This is the first true test of learning. It moves beyond perception to assess the actual acquisition of knowledge and skills. A solid Level 2 assessment is your proof that the educational content itself was effective. There are two primary methods for this: knowledge-based assessments and skills-based assessments.

Knowledge-Based Assessments: These are typically multiple-choice quizzes or short-answer tests that measure the learner’s understanding of facts, concepts, and principles (the “why”). These are excellent for checking understanding of policies, alert meanings, or the rationale behind a workflow.

Skills-Based Assessments: This is the more powerful method for informatics training. It measures a learner’s ability to actually *perform* a task in the system (the “how”). This is not about what they know; it’s about what they can *do*. The gold standard for a skills-based assessment is the Competency Checklist.

A competency checklist is an observational tool used by an instructor or evaluator to systematically assess a learner’s performance against a predefined set of critical steps and standards. It is an objective, evidence-based way to verify that a user can safely and correctly perform a high-risk workflow.

Masterclass Breakdown: Competency Checklist for Sterile Compounding Documentation
Learner: __________________ Evaluator: __________________ Date: ______________
Scenario: A verified STAT order for Vancomycin 1.5g in 250mL NS is in the IV queue. Please prepare this product using the IV compounding module.
Critical Step | Standard of Performance | Met / Not Met
1. Initiation | Correctly selects the STAT Vancomycin order from the IV preparation queue. | [ ] Met   [ ] Not Met
2. Barcode Scanning | Scans the barcode on the 250mL NS base solution. Scans the barcode on three separate 500mg Vancomycin vials. | [ ] Met   [ ] Not Met
3. Image Capture | Takes a clear, readable photograph of the ingredients (3 vials + bag) grouped together as prompted by the system. | [ ] Met   [ ] Not Met
4. Lot & Expiration | Correctly enters the lot number and expiration date for all three vials and the base solution. | [ ] Met   [ ] Not Met
5. Final Label | Takes a clear, readable photograph of the final prepared product with the label correctly affixed. | [ ] Met   [ ] Not Met
6. Documentation | Successfully completes the documentation, changing the order status to “Prepared” for the pharmacist to verify. | [ ] Met   [ ] Not Met
Overall Result: Pass / Needs Remediation
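
Recording checklist results in a small, consistent structure makes them easy to aggregate across learners and evaluators. The following is a minimal sketch, not a reproduction of any specific LMS or EHR competency tool; the "all critical steps must be met" pass rule is an assumed convention that your department would set.

```python
# Minimal sketch: recording a skills-based competency checklist and deciding pass vs. remediation.
# Step names mirror the sterile compounding example above; the pass rule (all critical
# steps must be met) is an assumed convention, not a mandated standard.
from dataclasses import dataclass

@dataclass
class ChecklistStep:
    name: str
    standard: str
    met: bool

def overall_result(steps):
    """Pass only if every critical step was observed and met; otherwise remediate."""
    return "Pass" if all(step.met for step in steps) else "Needs Remediation"

steps = [
    ChecklistStep("Initiation", "Selects the STAT Vancomycin order from the IV queue", True),
    ChecklistStep("Barcode Scanning", "Scans base solution and all three vials", True),
    ChecklistStep("Image Capture", "Photographs grouped ingredients as prompted", False),
    ChecklistStep("Lot & Expiration", "Enters lot/expiration for all components", True),
    ChecklistStep("Final Label", "Photographs final product with label affixed", True),
    ChecklistStep("Documentation", "Changes order status to 'Prepared'", True),
]

print(overall_result(steps))              # "Needs Remediation" (image capture step not met)
print([s.name for s in steps if not s.met])  # which steps to re-teach
```

Aggregating these records across a class quickly shows which step most often drives remediation, which is exactly the feedback loop Level 2 data should create.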

17.5.4 Deep Dive: Measuring Behavior & Adoption (Level 3)

This is the moment of truth. Level 3 evaluation moves out of the classroom and into the clinical environment. It answers the most critical question: Are people actually using the system the way we trained them to? This is arguably the most important level for an informatics analyst because you are uniquely positioned to gather this data directly from the source: the Electronic Health Record. While observation and surveys have their place, the “digital exhaust” of user activity within the EHR provides a rich, objective, and unbiased source of truth about on-the-job behavior.

Measuring Level 3 is about defining Key Performance Indicators (KPIs) related to system adoption and then building reports to track them over time. This is where your analytical skills come to the forefront. You will work with reporting teams or use tools like Epic’s Reporting Workbench to extract and analyze user activity data. Your goal is to tell a story with this data, identifying trends, celebrating successes, and pinpointing areas that need intervention. A small computational sketch follows the table below.

Masterclass Table: Pharmacy-Specific Level 3 Adoption Metrics

Efficiency & Workflow

  • Order Verification Turnaround Time (TAT)
    Why It Matters: Measures the time from when an order is placed by a provider to when it is verified by a pharmacist. A decreasing TAT can indicate growing proficiency with the verification queue and workflow.
    How to Measure (Conceptual): Report on the median time difference between `Order Placement Timestamp` and `Order Verification Timestamp`, filterable by order priority (STAT, Routine) and pharmacy location.
  • Use of Keyboard Shortcuts / Quick Actions
    Why It Matters: Indicates that users are moving beyond basic navigation and adopting “power user” features that increase speed and reduce clicks.
    How to Measure (Conceptual): Some EHRs log the specific actions users take. A report could count the usage of specific function IDs tied to shortcuts (e.g., ALT+A for Accept).
  • Average Time in Med Rec
    Why It Matters: Measures how long a user has a patient’s medication reconciliation window open. A decrease over time suggests increased comfort and efficiency with this complex process.
    How to Measure (Conceptual): Report on the time difference between the `Med Rec Window Open` and `Med Rec Window Close` audit trail events.

Quality & Safety

  • Order Set Utilization Rate
    Why It Matters: This is a crucial metric. It measures the percentage of targeted orders (e.g., for sepsis) that are placed using the standardized, evidence-based order set versus being entered as individual, free-text orders. High utilization is a huge win for safety and standardization.
    How to Measure (Conceptual): Numerator: # of Sepsis admissions with the Sepsis Order Set used. Denominator: Total # of Sepsis admissions. Report this as a percentage over time.
  • Clinical Decision Support (CDS) Alert Override Rate
    Why It Matters: Measures how often users override specific high-risk alerts (e.g., severe drug-drug interactions). A very high override rate may indicate “alert fatigue” or a poorly tuned alert that needs review. A very low rate might indicate the alert is working well.
    How to Measure (Conceptual): Report on the override rate for specific alert types, filterable by user, department, and specific drug pair. Provide the “override reason” data.
  • Use of “Avoid Abbreviation” Smart-text
    Why It Matters: If you’ve built tools to help prescribers use safe medication terminology (e.g., converting “QD” to “daily”), you can measure their use.
    How to Measure (Conceptual): Count the number of times a specific smart-text phrase or macro is triggered in medication orders or notes.

Feature Adoption

  • TPN Calculator Usage
    Why It Matters: If you’ve rolled out a new, integrated TPN calculator, you can measure its adoption versus the old method (e.g., Excel spreadsheets or paper forms).
    How to Measure (Conceptual): Count the number of TPN orders that were generated using the integrated calculator functionality.
  • Custom Report Views
    Why It Matters: If you’ve trained users on how to customize their view of the verification queue or other reports, you can often track how many have actually done so.
    How to Measure (Conceptual): Query the user settings database to count the number of users who have a saved, personalized report view.
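
As noted above, here is a minimal sketch of how two of these conceptual measures (median verification TAT and order set utilization rate) might be computed from a generic order-level extract using pandas. The column names and sample rows are hypothetical placeholders; substitute whatever fields your reporting team actually exposes from Reporting Workbench or your EHR’s analytical database.

```python
# Minimal sketch: computing Level 3 adoption metrics from a generic order-level extract.
# Column names and sample data are hypothetical; use the fields your EHR reporting
# team actually exposes.
import pandas as pd

orders = pd.DataFrame({
    "order_id":      [1, 2, 3, 4, 5],
    "priority":      ["STAT", "Routine", "STAT", "Routine", "Routine"],
    "placed_time":   pd.to_datetime(["2025-05-01 08:00", "2025-05-01 08:05",
                                     "2025-05-01 09:00", "2025-05-01 09:10",
                                     "2025-05-01 10:00"]),
    "verified_time": pd.to_datetime(["2025-05-01 08:07", "2025-05-01 08:35",
                                     "2025-05-01 09:05", "2025-05-01 09:55",
                                     "2025-05-01 10:20"]),
})

# Order verification turnaround time (TAT) in minutes, reported as a median by priority.
orders["tat_minutes"] = (orders["verified_time"] - orders["placed_time"]).dt.total_seconds() / 60
print(orders.groupby("priority")["tat_minutes"].median())

# Order set utilization rate: % of sepsis admissions where the sepsis order set was used.
sepsis_admissions = pd.DataFrame({
    "encounter_id":   [101, 102, 103, 104],
    "order_set_used": [True, True, False, True],
})
utilization_rate = sepsis_admissions["order_set_used"].mean() * 100
print(f"Sepsis order set utilization: {utilization_rate:.0f}%")
```

The same pattern covers most of the table: an override rate is just the mean of an "overridden" flag grouped by alert type, and trending any of these values by week before and after your training date is what turns a raw report into a Level 3 story.
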
The Dangers of Misinterpreting Metrics

Data provides insights, not absolute truths. You must be a critical thinker and avoid common pitfalls when interpreting Level 3 metrics. Always ask “why” before jumping to a conclusion.

  • Correlation is Not Causation: The order set utilization rate went up after your training. Did the training cause it? Or did a new hospital policy mandating its use go into effect at the same time? You must consider confounding variables.
  • Beware of Perverse Incentives (Goodhart’s Law): If you start rewarding pharmacists for the lowest “Turnaround Time,” they may be incentivized to verify orders quickly but less carefully. A metric, once it becomes a target for performance evaluation, can sometimes be gamed.
  • Context is Everything: The override rate for a severe DDI alert is 95%. Is this a failure? Maybe not. If the alert is for Warfarin and Bactrim, and the clinical plan is to monitor the INR closely, then overriding the alert is the correct clinical judgment. You must analyze the “why” (the override reasons) before judging the “what” (the rate).
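
To illustrate that last point, a quick tally of override reasons is often more informative than the rate itself. This is a minimal sketch with hypothetical alert and reason text.

```python
# Minimal sketch: looking at override *reasons* before judging an override *rate*.
# Alert names and reason text are hypothetical examples.
from collections import Counter

overrides = [
    ("Warfarin + Bactrim DDI", "Will monitor INR closely"),
    ("Warfarin + Bactrim DDI", "Will monitor INR closely"),
    ("Warfarin + Bactrim DDI", "Benefit outweighs risk"),
    ("Warfarin + Bactrim DDI", "Not clinically relevant"),
]

# Count how often each documented reason appears for this alert.
reasons = Counter(reason for _, reason in overrides)
for reason, count in reasons.most_common():
    print(f"{reason}: {count}")
```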

17.5.5 Deep Dive: Measuring Results & ROI (Level 4)

This is the pinnacle of the Kirkpatrick pyramid. Level 4 answers the ultimate question from leadership: “So what?” So you trained everyone, and they learned the skill, and they are even using the system correctly. Did any of it make a difference to the hospital’s core mission: improving patient care and maintaining financial viability? Measuring Level 4 is about connecting your Level 3 behavior data to tangible, high-level organizational results. This is often the most challenging level to measure, as it requires collaboration with other departments (like Quality, Finance, and Risk Management) and a careful approach to demonstrating causality. But it is also the most powerful.

Calculating Return on Investment (ROI)

ROI is the classic business metric for evaluating an investment. It compares the financial benefit of a project to its cost. While some benefits of training are intangible, many can be quantified and translated into dollars. As an analyst, being able to perform a basic ROI calculation for your training programs is a powerful skill that demonstrates your business acumen.

$$ \text{ROI (\%)} = \frac{\text{Financial Benefit} - \text{Cost of Training}}{\text{Cost of Training}} \times 100 $$

Masterclass Breakdown: Simple ROI Calculation for a Training Initiative

Scenario: You designed and delivered a 2-hour training session for 20 IV room technicians on a new, more efficient workflow within the IV compounding module. Your goal was to reduce the time it takes to prepare a batch of medications.

1. Calculate the Cost of Training

  • Your Time (Development): 10 hours to design materials and scenarios @ $50/hr = $500
  • Your Time (Delivery): 2 hours to teach the class @ $50/hr = $100
  • Technician Time (Attending): 20 techs * 2 hours * $25/hr (average loaded wage) = $1,000
  • Materials: Printing handouts = $50
Total Training Cost = $1,650

2. Calculate the Financial Benefit

  • Baseline Data (Level 3): Before training, your EHR data showed the average time to document a 10-item batch was 15 minutes.
  • Post-Training Data (Level 3): One month after training, the average time for the same task is now 12 minutes. This is a savings of 3 minutes per batch.
  • Quantify the Volume: The IV room prepares approximately 50 batches per day.
  • Calculate Time Saved: 3 min/batch * 50 batches/day = 150 minutes/day = 2.5 hours/day.
  • Monetize the Time: 2.5 hours/day * $25/hr (tech wage) = $62.50 saved per day.
  • Annualize the Benefit: $62.50/day * 260 working days/year = $16,250
Annual Financial Benefit = $16,250

3. Calculate the Final ROI

$$ \text{ROI (\%)} = \frac{\$16{,}250 - \$1{,}650}{\$1{,}650} \times 100 = \frac{\$14{,}600}{\$1{,}650} \times 100 \approx 885\% $$

Conclusion: For every $1 invested in this training program, the organization realized a net gain of approximately $8.85 in recovered productivity within the first year.
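
The arithmetic above is easy to script so it can be rerun as post-training data matures or assumptions change. This is a minimal sketch that simply encodes the numbers from the worked example; the wages, hours, and volumes are the scenario's assumptions, not benchmarks.

```python
# Minimal sketch: reproducing the ROI worked example above.
# All inputs (wages, hours, batch volumes) are the assumptions stated in the scenario.

def roi_percent(financial_benefit: float, training_cost: float) -> float:
    """ROI (%) = (benefit - cost) / cost * 100."""
    return (financial_benefit - training_cost) / training_cost * 100

# 1. Cost of training
development_cost = 10 * 50        # 10 hours of analyst time @ $50/hr
delivery_cost    = 2 * 50         # 2 hours of teaching @ $50/hr
attendee_cost    = 20 * 2 * 25    # 20 techs x 2 hours @ $25/hr loaded wage
materials_cost   = 50
training_cost = development_cost + delivery_cost + attendee_cost + materials_cost  # $1,650

# 2. Financial benefit (from the Level 3 data)
minutes_saved_per_batch = 15 - 12                                       # baseline vs. post-training
batches_per_day = 50
hours_saved_per_day = minutes_saved_per_batch * batches_per_day / 60    # 2.5 hours/day
annual_benefit = hours_saved_per_day * 25 * 260                         # $16,250/year

# 3. ROI
print(f"Training cost:  ${training_cost:,.0f}")
print(f"Annual benefit: ${annual_benefit:,.0f}")
print(f"ROI: {roi_percent(annual_benefit, training_cost):.0f}%")        # ~885%
```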

Connecting to Clinical Outcomes

While ROI is powerful, the ultimate goal of healthcare is to improve patient outcomes. Demonstrating a link between your training and a clinical metric is the highest form of value you can show. This requires a strong partnership with your organization’s Quality and Safety department, as they are typically the owners of this data.

The logic is a causal chain: Your Training (Level 2) → Leads to Correct System Use (Level 3) → Which Drives a Key Clinical Result (Level 4).

Examples:

  • Initiative: You train all ED providers and nurses on the new “Code Sepsis” order set and workflow.
  • Level 3 Metric: Order set utilization rate for patients with a sepsis diagnosis increases from 25% to 90%.
  • Level 4 Metric (from Quality Dept): The hospital’s compliance with the SEP-1 “Hour-1 Bundle” (a national quality metric) improves from 60% to 85% in the quarter following the training.
  • Initiative: You conduct widespread training on the new, mandatory “High-Risk Anticoagulant” patient education tools and documentation workflow in the EHR.
  • Level 3 Metric: 95% of patients discharged on warfarin or a DOAC now have the standardized education documented in their chart.
  • Level 4 Metric (from Risk Management): 30-day readmission rates for bleeding events among patients discharged on anticoagulants decrease by 15% over the next six months.