Section 2: PDSA Cycles and Continuous Improvement Models
Master the foundational methodology of quality improvement: the Plan-Do-Study-Act (PDSA) cycle. Learn how to use this iterative, scientific method to test changes on a small scale before deploying them system-wide.
From Data to Action: The Scientific Method of Improvement.
12.2.1 The “Why”: From Identifying Problems to Solving Them Scientifically
In the previous section, we mastered the art and science of selecting Key Performance Indicators. You now know how to identify and measure the vital signs of your pharmacy’s operations and its impact on patient safety. You can create a dashboard that tells your leadership, with data-driven clarity, that the median STAT medication turnaround time is 28 minutes, exceeding the target of 20 minutes. You have successfully answered the question, “Where is the problem?” This is a monumental achievement, but it is only the first step. Knowing you have a problem and knowing how to solve it are two entirely different disciplines.
The traditional, and often disastrous, approach to problem-solving in large organizations is what is known as the “big bang” or “go-live” model. A committee is formed, a solution is debated for months, a massive project plan is created, and after a year of development, a new, sweeping change is implemented all at once across the entire hospital. This approach is incredibly risky. If the solution has unforeseen flaws—and it almost always does—it causes widespread disruption, frustrates staff, compromises patient safety, and often has to be rolled back in a firestorm of emails and emergency meetings. It is the equivalent of a drug company skipping Phase I, II, and III trials and going straight to a nationwide launch of a new medication. No pharmacist would ever endorse such a reckless approach to clinical science, yet it happens every day in operations and informatics.
The Plan-Do-Study-Act (PDSA) cycle is the antidote to this chaos. It is the application of the scientific method to quality improvement. It provides a structured, rigorous, and iterative framework for testing changes on a very small scale, studying the results, and learning from each test before attempting a wider implementation. It is a philosophy of humility; it assumes that our first proposed solution may not be the best one and that the only way to know is to test it in the real world, with real staff, on a scale so small that failure is not a catastrophe, but simply a learning opportunity. This is how you move from being a data reporter who identifies problems to a true informatics leader who helps solve them. You will learn to guide teams in using data not just to look backward at performance, but to look forward and test hypotheses that can create real, sustainable change.
Retail Pharmacist Analogy: The Workflow Experiment
Imagine your retail pharmacy is consistently missing its target for prescription wait times (a KPI). Customers are complaining, and your team is stressed. The “big bang” approach would be for a corporate team that hasn’t worked in your pharmacy in years to redesign the entire workflow on paper and mandate that you implement it on Monday. The risks are enormous: the new process might not work with your pharmacy’s physical layout, it could create new bottlenecks, and it would certainly cause chaos during the first week.
Now, consider the PDSA approach. You, as the pharmacy manager, suspect that a lot of time is wasted by technicians walking back and forth between the filling counter and the label printer. You have a theory: moving the printer closer will save time.
- PLAN: You decide to test this theory. Your plan is not to permanently move the printer. It is to, for one hour on Tuesday afternoon (a non-peak time), use an extension cord to temporarily move the printer right next to the filling station. You predict this will reduce the average time to fill a prescription by 30 seconds. You will track the fill times for all prescriptions during that hour and ask the two technicians working at the station for their feedback. Your test is small, specific, and measurable.
- DO: On Tuesday, you run the experiment exactly as planned. You inform the two technicians what you’re testing and why. You stand by with a stopwatch (or use system timestamps) and record the fill times. You observe the workflow and note that while the walking is eliminated, the technicians now sometimes bump into each other.
- STUDY: After the hour is up, you analyze the data. The average fill time did indeed decrease, but only by 15 seconds, not the 30 you predicted. You talk to the technicians. They loved not having to walk but said the cramped space was awkward. One suggests moving the printer to a small cart so it can be positioned just right.
- ACT: Your hypothesis was partially correct, but the initial solution wasn’t perfect. You decide to Adapt the plan. You will abandon the “extension cord on the counter” idea. For your next cycle, you will Plan to acquire a small rolling cart, Do a two-hour test with the printer on the cart, and Study the results.
You did not disrupt the entire pharmacy. You did not spend money on a permanent change. You did not rely on theory alone. You used a small-scale, scientific approach to test an idea, learn from its real-world application, and refine your next step based on evidence. You ran a PDSA cycle. This is the exact methodology you will use to improve complex clinical workflows in the hospital.
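If you want to put numbers on that “Study” step, the arithmetic is as simple as comparing the test hour against a comparable baseline hour. The sketch below is purely illustrative: the fill times are invented, and in practice you would pull them from your dispensing system’s timestamps.

```python
# Minimal sketch of the retail "Study" step: compare average fill time
# during the one-hour printer test against a comparable baseline hour.
# All fill times below are hypothetical, illustrating the calculation only.
from statistics import mean

baseline_fill_seconds = [210, 195, 240, 225, 200, 235, 215]  # typical Tuesday hour
test_fill_seconds = [200, 180, 210, 205, 190, 220, 195]      # printer moved next to station

baseline_avg = mean(baseline_fill_seconds)
test_avg = mean(test_fill_seconds)
observed_change = baseline_avg - test_avg
predicted_change = 30  # seconds, the prediction made in the Plan phase

print(f"Baseline avg: {baseline_avg:.0f}s | Test avg: {test_avg:.0f}s")
print(f"Observed improvement: {observed_change:.0f}s vs predicted {predicted_change}s")
```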
12.2.2 A Forensic Autopsy of the PDSA Cycle
The PDSA cycle looks simple, but its power lies in the rigor and discipline applied to each of its four stages. High-performing teams don’t just “do PDSAs”; they execute each phase with the precision of a clinical trial. As an informatics analyst, you are the data integrity officer for these trials. You help ensure the plan is sound, the data is collected properly, the analysis is honest, and the actions are data-driven. Let’s dissect each phase.
The Iterative Cycle of Improvement
- Plan: State the objective, make predictions, and develop a plan to test the change. Define what you will measure and how.
- Do: Carry out the test on a small scale. Document problems and unexpected observations. Collect the data you planned to collect.
- Study: Analyze the data. Compare results to your predictions. Summarize what was learned (both expected and unexpected).
- Act: Based on what you learned, decide your next step: Adopt the change, Adapt it with modifications, or Abandon the approach. Prepare for your next cycle.
The “Plan” Phase: Architecting a Successful Test
This is the most important phase of the cycle. A poorly planned test will yield meaningless data, wasting time and resources. Your role as an analyst is to bring structure and data-centric thinking to this phase. The team provides the subject matter expertise; you provide the methodology.
The PDSA Planning Worksheet: Your Contract for the Test
A best practice is to formalize the Plan phase using a standardized worksheet. This document serves as the “protocol” for your operational experiment. It forces the team to think through every detail before starting, ensuring alignment and a clear data plan.
Masterclass Table: The PDSA Planning Worksheet Components
| Worksheet Section | Guiding Questions | Analyst’s Contribution & Example (Improving STAT TAT) |
|---|---|---|
| 1. Aim Statement | What is the SMART goal we are trying to achieve? (Reference Section 12.1.3) | Analyst: Helps the team convert a vague goal into a SMART one. Example: “Decrease the median STAT TAT for ED orders from 25 min to < 20 min by end of Q3.” |
| 2. Description of the Change | What specific change are we testing in this cycle? Be precise. | Analyst: Pushes for specificity. “Improve communication” is not a change. Example: “For one day, a dedicated pharmacy technician will be assigned to the ED to triage new STAT orders and coordinate with the central pharmacy.” |
| 3. Hypothesis & Prediction | What do we predict will happen as a result of this change? Why do we think this will work? Be quantitative if possible. | Analyst: Insists on a falsifiable prediction. This is the core of the scientific method. Example: “We predict that having a technician in the ED will reduce the time from order entry to verification by 50% (from 10 min to 5 min) because it will eliminate phone tag.” |
| 4. Plan for the Test (The “Do”) | Who will be involved? What tasks need to be done? When and where will the test occur? For how long? What could go wrong? | Analyst: Focuses on the scope. Champions the “start small” philosophy. Example: “Who: Technician Jane Doe. When: Wed, 10am-2pm (4 hours). Where: ED medication room. Risk: Central pharmacy may feel disconnected.” |
| 5. Data Collection Plan | What specific metrics (leading and lagging) will we collect? Who is responsible for collecting them? Where will the data come from? What qualitative feedback do we need? | Analyst: This is your prime responsibility. You define the data dictionary, create the collection tool (if needed), and specify the report queries (a sketch follows this table). Example: “I will pull EHR timestamps for all ED STAT orders during the test period for the TAT metric (quantitative). The ED nurse manager will conduct brief interviews with 3 nurses and Jane Doe at the end of the test to get feedback (qualitative).” |
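To make the analyst’s piece of that data collection plan concrete, here is a minimal sketch of the kind of pull and calculation involved. The file name, column names, and test date are hypothetical stand-ins for whatever your EHR reporting extract actually provides.

```python
# Sketch of the analyst's data pull for the ED tech PDSA.
# File, column names, and the test date are hypothetical; substitute your EHR extract.
import pandas as pd

orders = pd.read_csv("ed_stat_orders.csv",
                     parse_dates=["order_time", "verify_time", "admin_time"])

# Restrict to the agreed test window from the Plan phase (Wed, 10am-2pm).
test_window = orders[
    (orders["order_time"] >= "2024-06-05 10:00") &
    (orders["order_time"] < "2024-06-05 14:00") &
    (orders["priority"] == "STAT") &
    (orders["location"] == "ED")
]

# Compute each segment of the turnaround time in minutes.
tat = (test_window["admin_time"] - test_window["order_time"]).dt.total_seconds() / 60
order_to_verify = (test_window["verify_time"] - test_window["order_time"]).dt.total_seconds() / 60

print(f"Orders in test window: {len(test_window)}")
print(f"Median TAT: {tat.median():.1f} min")
print(f"Median order-to-verification: {order_to_verify.median():.1f} min")
```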
The “Do” Phase: Executing the Experiment
This phase is about disciplined execution. The goal is to run the test exactly as described in the Plan. However, the real world is messy, and deviations often occur. The key is to meticulously document everything.
Resist the Urge to “Widen the Test” Mid-Stream
A common failure mode occurs when a test starts to show early promise. A manager might say, “This is working great! Let’s extend it for the whole day!” or “Let’s have the other two shifts try it too!” This is a critical error that invalidates the test. It breaks the small-scale, controlled nature of the experiment and makes the “Study” phase impossible. Your role as the data steward is to gently but firmly hold the team to the original plan. “That’s great feedback. Let’s complete this 4-hour cycle as planned, study the results, and then our next PDSA can be to test it for a full shift.”
The analyst’s role in the “Do” phase is often one of observation and support. You are there to ensure the data is being captured correctly, answer any questions about the metrics, and help troubleshoot any technology issues that arise. This is also your opportunity to perform a “Gemba walk”—a core principle from Lean methodology where you go to the actual place where the work is done to observe the process firsthand. The insights you gain from watching the workflow in action are often more valuable than the numbers themselves.
The “Study” Phase: From Raw Data to Actionable Insight
This is where your analytical skills truly shine. The “Study” phase is a formal analysis of the results, a comparison of those results against the predictions made in the “Plan” phase, and a synthesis of what was learned. It is not enough to simply present a table of numbers. You must tell a story with the data.
Step 1: Analyze the Quantitative Data. This involves running your queries and visualizing the data. For improvement projects, one of the most powerful tools is a Run Chart. A run chart plots a metric over time, allowing you to see trends, shifts, and patterns. Annotating the run chart with the date and time of your PDSA test is a simple but incredibly effective way to visualize its impact.
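As a sketch of what that annotated run chart might look like in code (the daily median values below are fabricated purely for illustration):

```python
# Sketch of an annotated run chart for the ED STAT TAT metric.
# The daily medians and dates below are made up for illustration only.
import matplotlib.pyplot as plt
import pandas as pd

daily_median_tat = pd.Series(
    [26, 25, 27, 24, 25, 26, 25, 23, 25, 24],
    index=pd.date_range("2024-06-01", periods=10, freq="D"),
)

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(daily_median_tat.index, daily_median_tat.values, marker="o")
ax.axhline(daily_median_tat.median(), linestyle="--", label="Baseline median")

# Annotate the day of the PDSA test so reviewers can see its impact in context.
test_day = pd.Timestamp("2024-06-08")
ax.axvline(test_day, color="red", linestyle=":")
ax.annotate("PDSA #1: ED tech test",
            xy=(test_day, daily_median_tat[test_day]),
            xytext=(test_day, daily_median_tat.max() + 1),
            arrowprops=dict(arrowstyle="->"))

ax.set_ylabel("Median STAT TAT (min)")
ax.set_title("ED STAT Turnaround Time - Run Chart")
ax.legend()
plt.tight_layout()
plt.show()
```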
Step 2: Compare Results to Predictions. This is the step most often skipped, and it’s the most critical for learning. Did the results match your hypothesis? If not, why not? This is where true insight is generated.
Step 3: Synthesize Qualitative Data. What was the feedback from the staff? What were the unexpected observations? The story behind the numbers is often found in these conversations. The fact that the STAT TAT only decreased by 2 minutes might be explained by the nurse who said, “Having the tech here was great, but the tube system was down for an hour, so we had to wait for runners anyway.” This context is vital.
Example: “Study” Phase for the ED Tech PDSA
Quantitative Analysis: “The run chart of ED STAT TAT shows a baseline median of 25 minutes. During the 4-hour test period, the median TAT for the 15 STAT orders was 23 minutes, a decrease of 2 minutes. This did not meet our predicted decrease of 5 minutes. The verification-to-administration portion of the time remained unchanged, but the order-to-verification time decreased from a median of 10 minutes to 8 minutes.”
Comparison to Prediction: “Our hypothesis that an ED tech would reduce order-to-verification time was correct, but the magnitude of the effect was smaller than predicted. The overall TAT was significantly impacted by an unexpected downtime of the pneumatic tube system during the test.”
Qualitative Synthesis: “Feedback from nursing was overwhelmingly positive. They reported feeling more connected to the pharmacy and appreciated the immediate clarification of orders. The tech reported feeling effective but noted that 3 of the 15 STAT orders were for medications not stocked in the ED ADC, requiring a delay for central pharmacy delivery. This was a previously unrecognized cause of delay.”
Summary of Learning: “We learned that an ED tech is a promising intervention for improving communication and reducing initial verification delays. However, we also learned that the primary drivers of our overall STAT TAT are likely delivery logistics (tube system reliability and ADC stockouts), not verification delays alone.”
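Behind a quantitative summary like the one above sits a simple segment comparison. The sketch below assumes two hypothetical extracts (a baseline period and the test window) with the same invented columns as the earlier data-pull example, and reports the median of each TAT segment side by side so the team can see which part of the process actually moved.

```python
# Sketch: compare TAT segments between baseline and the PDSA test period.
# File and column names are hypothetical, matching the earlier extract sketch.
import pandas as pd

TIME_COLS = ["order_time", "verify_time", "admin_time"]

def tat_segments(df: pd.DataFrame) -> pd.Series:
    """Return median minutes for each segment of the STAT turnaround time."""
    minutes = lambda start, end: (df[end] - df[start]).dt.total_seconds() / 60
    return pd.Series({
        "order_to_verification": minutes("order_time", "verify_time").median(),
        "verification_to_admin": minutes("verify_time", "admin_time").median(),
        "overall_tat": minutes("order_time", "admin_time").median(),
    })

baseline = pd.read_csv("ed_stat_baseline.csv", parse_dates=TIME_COLS)
test = pd.read_csv("ed_stat_test_window.csv", parse_dates=TIME_COLS)

comparison = pd.DataFrame({"baseline": tat_segments(baseline),
                           "pdsa_test": tat_segments(test)})
comparison["change_min"] = comparison["pdsa_test"] - comparison["baseline"]
print(comparison.round(1))
```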
The “Act” Phase: Deciding What’s Next
Based on the learning from the “Study” phase, the team must make a decision. This isn’t about judging the success or failure of the test, but about deciding the most logical next step to continue the improvement journey. There are three primary paths.
Adopt
The change was successful and resulted in the desired improvement without significant drawbacks. The next step is to plan for a wider implementation (e.g., expand the ED tech to cover a full shift).
Adapt
The change showed promise but needs modification. It had some positive effects but also created new problems or the effect wasn’t as large as hoped. The next step is a new PDSA cycle testing a modified version of the change.
Abandon
The change did not result in any improvement, or it created more problems than it solved. This is not a failure; it is successful learning. The team avoids wasting resources on a bad idea and can now pivot to testing a different hypothesis.
For our ED tech example, the clear decision is to Adapt. The next PDSA cycle will not be a simple re-run. Based on the learning, the next test might be: “We will test the ED tech concept again, but for this cycle, we will also proactively review the ED ADC PAR levels and ensure the top 20 most common STAT meds are stocked to test the hypothesis that improving ADC availability in combination with the tech will have a greater impact on overall TAT.”
And thus, one cycle flows into the next, each one building on the learning of the last, creating a relentless, iterative ramp of improvement.