Section 3: Measuring Program Effectiveness
Learn how to answer the most critical question: “Is this program working?” Discover the metrics and methods used to determine if a PA program is successfully reducing costs and improving safety.
From Implementation to Impact: Quantifying the Value of Utilization Management.
26.3.1 The “Why”: The Burden of Proof in Managed Care
In the world of utilization management, implementing a new prior authorization program is only the beginning of the story. The far more important chapter is proving that the program actually works. A PA program is not free; it has direct administrative costs (the salaries of the pharmacists and technicians reviewing cases, the IT infrastructure) and indirect costs (provider abrasion, potential delays in care). Therefore, every PA program is under constant, intense scrutiny to justify its existence. The central question, asked relentlessly by clients, consultants, and regulators, is not “Are you busy?” but “Are you adding value?”
This is the “burden of proof” in managed care. It is not enough to simply state that a program promotes clinically appropriate, cost-effective therapy. You must prove it with data. Did the step-therapy program for rheumatoid arthritis actually reduce the use of expensive biologics? By how much? Did it save the client money? Did the opioid safety program actually lower the average Morphine Milligram Equivalent (MME) dose for the member population? Did it reduce the incidence of dangerous co-prescribing with benzodiazepines? Answering these questions with verifiable, objective metrics is the entire purpose of program effectiveness analysis.
For you, the pharmacist, understanding this process is a critical career differentiator. It elevates you from a clinical decision-maker on a single case to a strategic thinker who understands the broader impact of your work. It enables you to participate in conversations about program design, to identify when a PA criterion is too loose or too restrictive, and to articulate the value your department provides to the entire organization. When a client asks, “Why should I pay for your PA services?” the answer cannot be a vague promise of quality. The answer must be a dashboard of key performance indicators, a report of tangible cost savings, and a demonstration of improved patient safety metrics. This section will provide you with the framework and vocabulary to understand and contribute to that essential conversation.
Retail Pharmacist Analogy: Justifying Your New Clinical Service
Imagine you’ve convinced your pharmacy owner to invest in a new, advanced blood pressure monitoring service for high-risk patients. You’ve purchased a calibrated machine, developed patient education materials, and dedicated two hours of your time each day to the program. The owner has given you six months to prove the service is worthwhile.
How do you measure its “effectiveness” at the end of those six months? You wouldn’t just hand the owner a logbook showing you performed 200 blood pressure checks. That’s a measure of activity, not impact. That’s the equivalent of a PBM saying, “We processed 50,000 PAs.” It’s a meaningless number in isolation.
Instead, to prove your program’s value, you would present a report based on outcomes and ROI:
- Financial Metrics: You demonstrate that by identifying 15 patients with uncontrolled hypertension and working with their doctors to optimize therapy, you generated 45 new prescriptions for add-on agents, resulting in $X of new gross margin for the pharmacy. You also show that the service has increased patient loyalty, with participants transferring an average of two additional prescriptions to your store. You calculate the total new profit and compare it to the cost of your time and the machine to generate a clear Return on Investment (ROI).
- Clinical & Safety Metrics: You present anonymized, aggregated data showing that for the 30 patients enrolled in your program, the average systolic blood pressure dropped by 12 mmHg. You highlight two specific cases where you caught a dangerously high reading and instructed the patient to go to the emergency room, potentially preventing a stroke. This demonstrates a tangible improvement in health outcomes and patient safety.
You are not just showing that you were busy; you are proving that your service saved lives and made the pharmacy money. This is the exact same logic a PBM must use to justify its PA programs to its clients. The methodologies are more complex, and the scale is vastly larger, but the fundamental principle is identical: you must translate clinical activity into quantifiable financial and health outcomes.
26.3.2 The Core Domains of Program Measurement
Every analysis of a utilization management program’s effectiveness can be broken down into two primary domains. While they are interconnected, they answer two fundamentally different questions for a PBM’s client. A successful program must demonstrate value in both areas to be considered a true success.
Financial Impact
This domain answers the client’s question: “Did this program save me money?”
Metrics in this category are focused on quantifiable cost reduction and financial efficiency. They include:
- Net Cost Savings
- Drug Spend Trend Reduction
- Return on Investment (ROI)
- Cost Avoidance
Clinical & Safety Impact
This domain answers the client’s question: “Did this program improve the health and safety of my members?”
Metrics in this category are focused on quality of care, adherence to clinical guidelines, and risk reduction. They include:
- Guideline Concordance Rate
- Opioid Safety Improvements (e.g., MME reduction)
- Medical Cost Offset
- Adherence to appropriate therapy
26.3.3 Deep Dive: Measuring Financial Impact
Quantifying the financial value of a PA program is the most common request from PBM clients. The methodologies range from simple and intuitive to complex and statistical. As a pharmacist, you won’t be building the financial models, but you must understand the concepts to interpret the results and speak intelligently about your program’s value.
Masterclass Table: Financial Metrics for PA Programs
| Metric | Formula / Calculation Method | Strengths & Weaknesses | Pharmacist’s Clinical Interpretation | 
|---|---|---|---|
| Gross Savings (Cost Avoidance) | Number of Denied/Avoided Claims x Cost of Denied Drug | Strength: Simple to calculate and understand. Weakness: Highly misleading. It assumes that if the PA was denied, the patient received no alternative therapy, which is almost never true. It’s often called “fantasy savings.” | You should view this number with extreme skepticism. It’s a marketing figure, not a true measure of impact. Your clinical insight is that denying a biologic for RA doesn’t mean the patient’s RA goes untreated; it means they likely used a different, less costly biologic or DMARD. | 
| Net Savings | Gross Savings – Cost of Alternative Therapy Used | Strength: Much more accurate as it accounts for the cost of the drug that was actually dispensed. Weakness: Can be difficult to definitively link a specific alternative therapy to a specific denial without integrated data. | This is a more clinically sound metric. It reflects the real-world scenario of therapeutic substitution. For example, Net Savings = (Cost of Humira) – (Cost of generic methotrexate + sulfasalazine) used instead. | 
| Drug Trend Management | (Benchmark PMPM Trend without Program) – (Client’s PMPM Trend with Program) | Strength: The gold standard. It measures the program’s ability to “bend the cost curve” relative to the overall market. Weakness: Requires access to good benchmark data and can be influenced by other factors (e.g., changes in client demographics). | This shows the strategic value. If national specialty costs grew 15% last year, but your client’s only grew 9% after implementing your PA program, you can claim a 6 percentage point trend reduction. Your role is to explain the clinical policies that led to that reduction. | 
| Return on Investment (ROI) | $$ \frac{(\text{Net Savings})}{(\text{Program Administrative Costs})} $$ | Strength: The ultimate business metric. It directly answers the client’s question, “Is this worth the fee?” Weakness: Only as accurate as the “Net Savings” figure used in the numerator. | An ROI of 3:1 means for every $1 the client paid for the PA program, they saved $3 in drug costs. Your clinical work—making appropriate denials and recommending cost-effective alternatives—is the engine that drives this ROI. | 
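To make the ROI row concrete, here is a minimal Python sketch of that calculation; the savings and fee figures are hypothetical, chosen only to reproduce the 3:1 ratio described in the table.

```python
def roi(net_savings: float, program_admin_costs: float) -> float:
    """Dollars of net drug-cost savings per dollar the client paid for the program."""
    return net_savings / program_admin_costs

# Hypothetical inputs: $300,000 in net savings against $100,000 in program fees.
print(f"ROI = {roi(300_000, 100_000):.0f}:1")  # -> ROI = 3:1
```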
Putting It Together: A Net Savings Calculation
Let’s walk through a simplified example of how net savings would be calculated for a new PA program on PCSK9 inhibitors for hyperlipidemia.
Scenario: In one year, a PBM client receives 100 PA requests for Repatha (evolocumab).
- Cost of Repatha: ~$500 per claim.
- Program Outcome: 60 requests are approved (meet criteria). 40 requests are denied (do not meet criteria, e.g., patient hasn’t failed maximally tolerated statin therapy).
- Alternative Therapy: Of the 40 denied patients, claims analysis shows that 30 of them subsequently filled a prescription for ezetimibe. 10 received no alternative.
- Cost of Ezetimibe: ~$10 per claim.
Calculation Steps:
- Calculate Gross Savings (Cost Avoidance):
 40 denied claims x $500/claim = $20,000
- Calculate Cost of Alternative Therapy:
 30 patients x $10/claim = $300
- Calculate Net Savings:
 $20,000 (Gross Savings) – $300 (Alternative Cost) = $19,700
The PBM can now report to the client that the PA program generated $19,700 in net savings for this drug alone, a far more credible and defensible figure than the “gross savings” of $20,000.
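For readers who prefer to see the arithmetic as code, here is a minimal Python sketch that reproduces the scenario above; all figures are the hypothetical ones from the example.

```python
# Inputs mirror the simplified PCSK9 inhibitor scenario above.
denied_claims     = 40      # requests that did not meet criteria
cost_per_denied   = 500.00  # approximate cost per Repatha claim
alt_fills         = 30      # denied patients who filled ezetimibe instead
cost_per_alt_fill = 10.00   # approximate cost per ezetimibe claim

gross_savings = denied_claims * cost_per_denied   # $20,000 "cost avoidance"
alt_cost      = alt_fills * cost_per_alt_fill     # $300 spent on the alternative
net_savings   = gross_savings - alt_cost          # $19,700 reported to the client

print(f"Gross savings: ${gross_savings:,.0f}")
print(f"Net savings:   ${net_savings:,.0f}")
```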
26.3.4 Deep Dive: Measuring Clinical & Safety Impact
While financial metrics are essential for demonstrating value to the client’s finance department, clinical metrics are what resonate with their chief medical officer and are required for quality accreditation. These metrics prove that the PA program is not just a blunt cost-cutting tool but a sophisticated clinical instrument that improves the quality and safety of care for members. Your clinical expertise is central to interpreting and reporting on these outcomes.
Masterclass Table: Key Clinical & Safety Metrics
| Metric | How It’s Measured | What Success Looks Like | The Pharmacist’s Contribution | 
|---|---|---|---|
| Guideline Concordance Rate | For a specific drug, what % of PA approvals were for members whose medical claims show a diagnosis and history consistent with national guidelines (e.g., NCCN for cancer, ADA for diabetes)? | A high concordance rate (>95%) shows the PA criteria are well-designed and are successfully channeling the drug to the appropriate patient population. | As a reviewing pharmacist, your adherence to the criteria is what generates this metric. You ensure that only patients who meet guidelines are approved, directly contributing to a high concordance rate. | 
| Reduction in Average Daily MME | Using pharmacy claims, calculate the average Morphine Milligram Equivalent dose per day across all chronic opioid users. Measure this before and after implementing an opioid safety PA program. | A significant decrease in the average daily MME, indicating a reduction in high-risk, high-dose opioid prescribing. | By enforcing hard MME limits (e.g., >90 MME/day requires PA) and recommending dose tapers or non-opioid alternatives during your reviews, you directly drive this number down. | 
| Reduction in Opioid/BZD Overlap | Identify members with overlapping days’ supply for any opioid and any benzodiazepine. Measure the prevalence (% of members) of this overlap before and after a safety edit is implemented. | A sharp drop in the number of members with concurrent opioid and benzodiazepine claims, reducing the risk of respiratory depression and overdose. | When the safety edit fires at the point of sale, a pharmacist may review the case to determine if the overlap is clinically justified. Your recommendations to use alternative anxiolytics are key. | 
| Medical Cost Offset | A complex analysis comparing the total medical spend (hospitalizations, ER visits, procedures) for a cohort of patients who received a drug via a PA program vs. a control group who did not. | Demonstrating that the higher upfront cost of a specialty drug led to a greater reduction in downstream medical costs, resulting in a net negative total cost of care. | By ensuring appropriate access to high-value drugs like those for Hepatitis C or heart failure, you are making the clinical decision that is hypothesized to reduce long-term medical events. This analysis proves if that hypothesis was correct. | 
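The sketch below illustrates, in simplified form, how the MME and opioid/BZD overlap metrics in the table might be computed. The member data are invented; a real analysis would derive daily MME and days’ supply from adjudicated pharmacy claims using standard MME conversion factors.

```python
from statistics import mean

# Average daily MME per chronic opioid user, before vs. after the PA program (hypothetical).
mme_pre  = {"M001": 120, "M002": 95, "M003": 60}
mme_post = {"M001": 85,  "M002": 70, "M003": 60}
print(f"Avg daily MME: {mean(mme_pre.values()):.0f} -> {mean(mme_post.values()):.0f}")

# Opioid/benzodiazepine overlap prevalence: % of members with any shared day of supply.
def overlap_prevalence(members: dict[str, dict[str, set[int]]]) -> float:
    overlapping = sum(
        1 for days in members.values()
        if days["opioid"] & days["bzd"]  # any day covered by both drug classes
    )
    return 100 * overlapping / len(members)

members = {
    "M001": {"opioid": {1, 2, 3}, "bzd": {3, 4}},    # overlap on day 3
    "M002": {"opioid": {10, 11},  "bzd": {20, 21}},  # no overlap
}
print(f"Opioid/BZD overlap: {overlap_prevalence(members):.0f}% of members")
```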
26.3.5 The Analyst’s Playbook: Common Study Designs
To isolate the true effect of a PA program and prove that it was the program itself—and not some other market factor—that caused the observed changes in cost and utilization, analysts rely on two primary study designs. Understanding the logic of these designs is key to critically evaluating any effectiveness report.
Pre/Post Analysis
This is the most straightforward design. It involves measuring your key metric for a set period (e.g., 12 months) immediately before the program starts (the “pre” period) and comparing it to the same metric over an equal-length period immediately after the program starts (the “post” period).
[Figure: PCSK9 Inhibitor PMPM Trend over time, with the PA Program Launch marked.]
Interpretation: The PMPM cost for the drug class was rising sharply (solid line). After the PA program was launched, the trend “bent” and flattened out (dashed line). The difference between the actual post-launch trend and the projected trend is the savings generated by the program.
Limitation: How do we know the trend would have continued rising? Maybe a competitor drug launched, naturally flattening the curve. This is where control groups become essential.
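As an illustration of the pre/post logic, the sketch below extends the pre-period trend forward with a simple linear fit and measures the gap against actual post-period costs. The monthly PMPM values are hypothetical, and a real analysis would use a more robust forecast of the counterfactual trend.

```python
from statistics import linear_regression  # Python 3.10+

pre_pmpm  = [4.00, 4.20, 4.45, 4.60, 4.85, 5.05]   # 6 months before launch (hypothetical)
post_pmpm = [5.10, 5.15, 5.15, 5.20, 5.20, 5.25]   # 6 months after launch (hypothetical)

# Fit a straight line to the pre-period and project it across the post-period.
fit = linear_regression(range(len(pre_pmpm)), pre_pmpm)
projected = [fit.intercept + fit.slope * (len(pre_pmpm) + m) for m in range(len(post_pmpm))]

# Savings estimate = projected ("no program") PMPM minus actual PMPM, averaged per month.
avg_gap = sum(p - a for p, a in zip(projected, post_pmpm)) / len(post_pmpm)
print(f"Estimated savings: ${avg_gap:.2f} PMPM on average in the post period")
```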
Control Group Analysis
This is a more rigorous design that compares the outcomes of the client who implemented the program (the “test” group) to a statistically similar group of members who did not have the program (the “control” group) over the same time period.
[Figure: Change in Specialty Drug PMPM (12 Months). Control Group: +15%; Client (Test) Group: +9%.]
Interpretation: In the control group (representing the national average), specialty drug costs increased by 15%. For our client with the new, aggressive PA program, costs only increased by 9%. Therefore, the net impact of the program was a 6 percentage point trend reduction (15% – 9% = 6 points).
Strength: This method isolates the program’s effect from broad market trends, providing a much more accurate and defensible measurement of the PA program’s true impact.
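The control-group arithmetic is simple enough to sketch directly. The cohort PMPM values below are hypothetical, chosen to reproduce the +15% and +9% trends in the example.

```python
def pct_growth(start_pmpm: float, end_pmpm: float) -> float:
    """Percent change in PMPM cost over the measurement period."""
    return 100 * (end_pmpm - start_pmpm) / start_pmpm

control_growth = pct_growth(40.00, 46.00)  # control cohort: +15% without the program
client_growth  = pct_growth(40.00, 43.60)  # client cohort: +9% with the PA program

print(f"Net program impact: {control_growth - client_growth:.0f} percentage point trend reduction")
```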
