University of Michigan Health Department of Urology
Original Research — Quality Improvement Protocol

Development and Implementation of a Predictive Analytics–Driven No-Show and Late Cancellation Reduction Program in an Academic Urology Department

Tyler Hughes, MBA
Department of Urology, Michigan Medicine, University of Michigan, Ann Arbor, MI
March 2026 · Protocol v1.0 · IRB: Pending Determination
At a glance: Current rate 15% · Target rate ≤10% · 24 predictor variables · 3 intervention arms

Study Summary

Background: Outpatient no-shows and late cancellations (<24 hrs) represent a significant source of revenue loss, capacity waste, and access inequity in academic urology practices. Published no-show rates in urology range from 10–23%, with limited evidence for specialty-specific predictive interventions.1,2

Objective: To develop and implement a 24-factor predictive risk model integrated with a tiered intervention bundle (AI-powered outreach, strategic overbooking, rapid waitlist backfill) to reduce the no-show/late cancellation rate from 15% to ≤10% within 6 months across all U-M Urology outpatient sites.

Methods: Retrospective analysis of HSDW scheduling data for model development, followed by prospective deployment in a pre/post quasi-experimental design. Primary outcome: composite no-show and late cancellation rate. Secondary: slot utilization, backfill rate, revenue recovery, equity impact.

Expected Significance: One of the first urology-specific predictive model–driven no-show interventions with prospective outcome data, addressing a gap identified by Carreras-Garcia et al. (2023).3

Keywords: no-show, predictive analytics, urology, quality improvement, overbooking, waitlist management, patient access, machine learning

Background and Significance

The clinical and financial burden of outpatient no-shows is well-documented but poorly addressed in surgical subspecialties.

Outpatient no-show rates in the US range from 5–55%, with a weighted average of 15–20% across academic medical centers.3,4 In urology, Miah et al. reported rates of 10–23% with concentration among younger patients, Medicaid beneficiaries, and appointments scheduled >30 days out.1

Desai et al. demonstrated that each additional week of lead time increases no-show probability by approximately 14% in an academic urology practice.2 A 2023 JAMIA systematic review found high-certainty evidence for model-driven text reminders but very low certainty for overbooking interventions, and identified no urology-specific predictive intervention studies.3
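The ~14% per-week lead-time effect compounds multiplicatively on the odds scale. A minimal sketch of the projection, assuming the 14% figure applies to odds and using a hypothetical 15% baseline:

```python
def lead_time_risk(baseline_prob: float, extra_weeks: int,
                   weekly_odds_mult: float = 1.14) -> float:
    """Project no-show probability after added weeks of scheduling lead time,
    assuming each extra week multiplies the no-show odds by ~1.14 (ref. 2)."""
    odds = baseline_prob / (1.0 - baseline_prob)
    odds *= weekly_odds_mult ** extra_weeks
    return odds / (1.0 + odds)
```

Under these assumptions, a 15% baseline climbs to roughly 23% when an appointment is scheduled four weeks further out, which is one motivation for prioritizing lead time as a predictor.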

At the University of Michigan Department of Urology, the current 15% rate translates to approximately 30 empty slots per day and an estimated $2.7M–$4.9M in annual lost revenue, including downstream surgical and procedural revenue.

Literature Gap: No published study combines predictive risk scoring with a multi-lever intervention bundle (outreach + overbooking + backfill) in a urology-specific setting. This represents a clear opportunity for both operational impact and scholarly contribution.
Daily lost slots: ~30 (at 15% rate) · Annual revenue loss: $2.7–4.9M (estimated)

Predictive Risk Model

The model draws 24 variables from the HSDW data warehouse, surfaced through Tableau Server; every scheduled appointment receives a composite no-show probability score.

Table 1. Active (n=15) and planned (n=9) predictor variables with evidence classification.
#  | Variable                           | HSDW Source          | Evidence | Status
1  | Confirmation status                | Confirm Status       | High     | Active
2  | Day of week                        | Cal Day Name         | Mod      | Active
3  | Scheduled hour                     | APPT_SCH_TIME        | Mod      | Active
4  | Visit modality (virtual/in-person) | Visit Type Category  | High     | Active
5  | Clinic location                    | Location Long Desc   | Mod      | Active
6  | Patient zip code                   | Patient Zip          | High     | Active
7  | Patient sex                        | Patient Gender       | Low      | Active
8  | Patient race/ethnicity             | Patient Race         | Mod      | Active
9  | Patient age                        | Age In Days At Appt  | Mod      | Active
10 | Provider division                  | Dept Division Desc   | Low      | Active
11 | Provider type (MD/APP)             | Provider Type        | Mod      | Active
12 | Provider sex                       | Provider Gender Desc | Low      | Active
13 | Seasonality                        | Cal Appt Sch Date    | Mod      | Active
14 | Reschedule count                   | APPT_RESCHEDULE_NUM  | High     | Active
15 | Appointment status history         | Appt Status          | High     | Active
16 | Prior no-show count                | HSDW historical      | V.High   | Wk 1
17 | Lead time (days out)               | Created vs. Sch Date | V.High   | Wk 1
18 | Insurance type                     | Payor Plan Primary   | High     | Wk 2
19 | Distance to clinic                 | Zip-to-zip calc      | High     | Wk 2
20 | MyChart activation                 | Portal status flag   | High     | Wk 2
21 | Primary diagnosis                  | Dx Cd / Dx Group     | Mod      | Wk 3
22 | Comorbidity burden (CCI)           | Problem list         | Mod      | Wk 4
23 | Marital status                     | Marital Status Key   | Low      | Wk 4
24 | Patient–provider race concordance  | Patient + Provider   | Mod      | Wk 4

Priority note: Variables 16–17 (prior no-show history, lead time) are the two highest-yield additions. Patients with ≥2 prior no-shows exhibit 3–5× the baseline non-attendance rate.4 Each additional week of lead time increases odds by ~14%.2
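As a sketch of how these high-yield predictors might combine into a composite score, consider a logistic form. The coefficients below are hand-set for illustration only; the production model would be fit on HSDW historical data:

```python
import math

# Illustrative coefficients only; a fitted model would estimate these from HSDW data.
WEIGHTS = {
    "intercept": -2.2,          # baseline log-odds (~10% probability)
    "prior_no_shows": 0.55,     # history is the strongest single predictor (ref. 4)
    "lead_time_weeks": 0.131,   # ln(1.14): ~+14% odds per week of lead time (ref. 2)
    "unconfirmed": 0.40,        # appointment not yet confirmed
    "virtual_visit": -0.30,     # telehealth visits tend to be kept more often
}

def composite_score(features: dict) -> float:
    """Composite no-show probability from a logistic combination of predictors."""
    z = WEIGHTS["intercept"] + sum(
        WEIGHTS[name] * value for name, value in features.items()
    )
    return 1.0 / (1.0 + math.exp(-z))
```

With no risk factors present the score sits near the 10% baseline; two prior no-shows plus four weeks of lead time on an unconfirmed visit pushes it into the High-Risk band, which matches the 3–5× effect size cited above.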

Risk Stratification & Model Performance

Every appointment is scored and classified into an operational tier that determines intervention intensity.

Low Risk (0–20% probability · ~60% of scheduled appointments)
  • Standard MyChart reminder (7 days)
  • Automated text confirmation (48 hrs)
  • No staff intervention
Moderate Risk (21–45% probability · ~25% of scheduled appointments)
  • 72-hr confirm/cancel/reschedule link
  • IVR call at 48 hrs if unconfirmed
  • Live staff outreach at 24 hrs
  • Telehealth conversion offer
High Risk (≥46% probability · ~15% of scheduled appointments)
  • Personal staff call at 72 hrs
  • Second attempt + text at 48 hrs
  • Clinical staff call at 24 hrs
  • Overbooking activated
  • Waitlist pre-staged for backfill
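The tier cutoffs above translate directly to a lookup. A minimal sketch, treating probabilities between 45% and 46% as Moderate since the protocol states its bands in whole percentage points:

```python
def assign_tier(probability: float) -> str:
    """Map a composite no-show probability to its operational risk tier."""
    pct = probability * 100.0
    if pct <= 20.0:
        return "Low"       # standard automated reminders only
    if pct < 46.0:
        return "Moderate"  # enhanced cadence + staff escalation
    return "High"          # personal calls, overbooking, waitlist pre-staging
```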
Table 2. Model performance targets for clinical utility.
Metric                       | Target | Rationale
AUC-ROC                      | ≥0.75  | Standard threshold for clinical decision support5
Sensitivity (High-Risk tier) | ≥70%   | Capture majority of actual no-shows
Positive Predictive Value    | ≥40%   | High-risk flags are actionable, not noise
Calibration                  | ±5%    | Scores map to real probabilities for overbooking
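The Table 2 targets can be audited monthly from scored-appointment outcomes. A stdlib-only sketch of AUC (pairwise rank formulation) and of High-Risk-tier sensitivity and PPV, assuming `y_true` is 1 for an actual no-show:

```python
def auc_roc(y_true, scores):
    """AUC as the probability a random no-show outranks a random attender."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def high_risk_metrics(y_true, scores, threshold=0.46):
    """Sensitivity and PPV of the High-Risk flag at the tier cutoff."""
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < threshold)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= threshold)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv
```

In practice these would run against a held-out month of HSDW outcomes rather than training data, so the ≥0.75 AUC target reflects real-world discrimination.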

Intervention Protocols

Three complementary operational levers, each calibrated to appointment risk tier.

I. Risk-Stratified Patient Outreach

Outreach intensity scales by tier. Low-risk: standard automated reminders. Moderate-risk: enhanced cadence with one-tap confirm links, IVR call, and staff escalation. High-risk: personal scheduling calls at 72 hrs, second attempt at 48 hrs, and clinical staff (MA/RN) outreach at 24 hrs. All attempts documented in Epic.

Trigger: 72 → 48 → 24 hrs, escalating by tier
II. Strategic Overbooking

Applied selectively to sessions containing high-risk appointments. Restricted to follow-up/short visit types; new patient consults are never overbooked.

Half-day (8–12 slots): Add up to 1 overbooked slot
Full-day (16–24 slots): Add up to 2 slots, mid-session
Safety: Pull back if wait time >25 min or overtime >10%
Target: ≥75% overbooking conversion rate
Trigger: ≥1 high-risk or ≥2 moderate-risk in session
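The session rules above reduce to a small decision function. A sketch with the session-size bands and triggers as stated; screening out new-patient consults is assumed to happen upstream:

```python
def overbook_allowance(session_slots: int, n_high: int, n_moderate: int) -> int:
    """Maximum overbooked slots for a session, per the protocol's triggers and caps."""
    if n_high < 1 and n_moderate < 2:
        return 0  # trigger not met: no high-risk and fewer than 2 moderate-risk appts
    if 8 <= session_slots <= 12:
        return 1  # half-day session: at most 1 added slot
    if 16 <= session_slots <= 24:
        return 2  # full-day session: at most 2, placed mid-session
    return 0      # session size outside defined bands: no overbooking
```

The safety pull-back (wait time >25 min or overtime >10%) would be enforced at runtime, overriding whatever allowance this function returns.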
III. Rapid Waitlist Backfill

Cancellation within 72 hrs triggers ASAP waitlist activation within 5 minutes. Automated SMS to top 5–10 eligible patients; first confirmed response claims the slot. “Hot list” maintained for short-notice patients.

Trigger: Any cancellation within 72 hrs · Target: ≥70% backfill, <2 hr fill time
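The backfill trigger and SMS batch selection can be sketched as follows. Eligibility is reduced to a visit-type match here; the real criteria would also cover provider, slot duration, and patient preferences:

```python
import datetime as dt

def triggers_backfill(cancel_time: dt.datetime, appt_time: dt.datetime) -> bool:
    """Any cancellation inside the 72-hour window activates the ASAP waitlist."""
    lead = appt_time - cancel_time
    return dt.timedelta(0) <= lead <= dt.timedelta(hours=72)

def sms_batch(waitlist: list, visit_type: str, batch_size: int = 10) -> list:
    """Select the top eligible waitlist patients (list assumed priority-ordered)
    for the automated SMS; the first confirmed reply claims the slot."""
    eligible = [p for p in waitlist if p["visit_type"] == visit_type]
    return eligible[:batch_size]
```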

Implementation Plan

Phased rollout: operational pilot within 4 weeks, full scale within 12, publication-ready data by month 6.

Phase I: Foundation (Weeks 1–4)
  • Add no-show history + lead time
  • Establish baseline by site
  • Validate risk tiers
  • Submit IRBMED
  • Build KPI dashboard
Phase II: Pilot (Weeks 5–8)
  • Deploy at 2–3 pilot sites
  • Activate overbooking rules
  • Train scheduling staff
  • Weekly KPI tracking
Phase III: Scale (Weeks 9–12)
  • All Urology sites live
  • Add remaining variables
  • Refine thresholds
  • 3-month data collection
Phase IV: Publish (Months 4–6)
  • Analyze outcome data
  • Subgroup analyses
  • Draft manuscript
  • Submit to journal

Infrastructure: No new capital expenditure required. Built entirely on existing Epic/MiChart, HSDW data warehouse, and Tableau Server infrastructure.

Outcome Measurement Framework

All metrics tracked weekly from Day 1, serving dual purposes: operational management and prospective research data.

Table 3. Primary and secondary outcomes with targets.
Metric                            | Baseline | Target   | Frequency
No-show & late cancellation rate  | 15%      | ≤10%     | Weekly
Slot utilization rate             | ~85%     | ≥92%     | Weekly
Backfill rate                     | TBD      | ≥70%     | Weekly
Time-to-fill                      | TBD      | <2 hours | Weekly
Overbooking conversion            | N/A      | ≥75%     | Weekly
Model AUC-ROC                     | TBD      | ≥0.75    | Monthly
Revenue recovered                 | $0       | Tracked  | Monthly
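The weekly roll-up for the first two rows of Table 3 is straightforward to compute from appointment-level records. A sketch, where the status labels are illustrative rather than actual MiChart status codes:

```python
def weekly_kpis(appointments: list) -> dict:
    """Composite no-show/late-cancellation rate and slot utilization for one week."""
    scheduled = len(appointments)
    missed = sum(a["status"] in ("no-show", "late-cancel") for a in appointments)
    completed = sum(a["status"] == "completed" for a in appointments)
    return {
        "no_show_late_cancel_rate": missed / scheduled if scheduled else 0.0,
        "slot_utilization": completed / scheduled if scheduled else 0.0,
    }
```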

Balancing Metrics (Safety)

  • Wait time: ≤ +5 min vs. baseline
  • Overtime: ≤ +10% session overruns
  • Satisfaction: No decline (CG-CAHPS)
  • Equity: Monitored by race/insurance/zip
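Equity monitoring amounts to the same composite rate stratified by subgroup, so any widening gap under the intervention is visible in the weekly review. A sketch, with hypothetical field names:

```python
from collections import defaultdict

def stratified_rates(appointments: list, group_field: str) -> dict:
    """No-show/late-cancel rate by subgroup (e.g., race, insurance, zip)."""
    totals, missed = defaultdict(int), defaultdict(int)
    for a in appointments:
        group = a[group_field]
        totals[group] += 1
        missed[group] += a["missed"]  # 1 if no-show/late-cancel, else 0
    return {g: missed[g] / totals[g] for g in totals}
```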

Financial Impact Projection

Conservative estimates based on published per-visit revenue for urology outpatient encounters.

Projected annual revenue recovery: $750,000–$1,125,000 (excludes downstream surgical revenue, a 2–3× multiplier in urology).
Assumptions: 10 recovered visits/day · $300–450 revenue per visit · 250 working days/year.

Published systematic reviews report that predictive model–driven interventions achieve 3–8 percentage point reductions,3 consistent with our 5-point target. The projection excludes downstream surgical volume from converted outpatient consults, which represents the majority of revenue in a surgical specialty.
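The headline range reproduces directly from the stated assumptions:

```python
def annual_recovery(visits_per_day: int = 10, rev_low: int = 300,
                    rev_high: int = 450, working_days: int = 250):
    """Conservative annual revenue recovery range; the 2-3x downstream
    surgical multiplier is deliberately excluded."""
    return (visits_per_day * rev_low * working_days,
            visits_per_day * rev_high * working_days)

# 10 visits/day x $300-450/visit x 250 days -> ($750,000, $1,125,000)
```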

Publication Strategy

Executing this protocol simultaneously generates the dataset, methods, and results for an original research manuscript.

Proposed Title

“Development and Implementation of a Predictive Analytics–Driven No-Show Reduction Program in an Academic Urology Department: A Prospective Quality Improvement Study”

Target Journals

Primary target: Urology Practice (IF ~1.8 · AUA). Optimal fit: practice management focus, and the journal published the Desai et al. and Ziemba et al. studies.2,6

High impact: The Journal of Urology (IF ~7.5 · AUA). Strong option if results are dramatic or the sample size is large.

QI-focused: BMJ Open Quality (Open Access · SQUIRE 2.0). Rapid turnaround and well-suited to QI framing.

Informatics: JAMIA (IF ~7.9 · Oxford). Best fit if the predictive model itself is the primary contribution.

Reporting Framework

Manuscript structured per SQUIRE 2.0 guidelines. Introduction: burden + gap. Methods: 24-variable model + tiered intervention. Results: pre/post + subgroup. Discussion: equity, generalizability, limitations.

Cited Literature

  1. Miah SJ, Koh CJ, Patel DP, et al. No-Shows in Adult Urology Outpatient Clinics: Economic and Operational Implications. Urology Practice. 2020;7(2):89–95.
  2. Desai RJ, Beganovic M, Leonard KM, et al. Increased Time between Scheduling Date and Appointment Date Results in Increased No-Show Rates in the Academic Urology Practice. Urology Practice. 2021;8(5):558–564.
  3. Carreras-Garcia D, Delgado-Gomez D, Llorente-Fernandez F, Arribas-Gil A. Predictive model-based interventions to reduce outpatient no-shows: a rapid systematic review. J Am Med Inform Assoc. 2023;30(3):559–568.
  4. Dantas LF, Fleck JL, Oliveira FLC, Hamacher S. No-shows in appointment scheduling — a systematic literature review. Health Policy. 2018;122(4):412–421.
  5. Nelson SJ, Goel AN, Gould DJ, et al. Predicting patient no-shows using machine learning: A comprehensive review and future research agenda. Inform Med Unlocked. 2025;53:101628.
  6. Ziemba JB, Anger JT, Englesbe MJ, et al. Reducing Missed Appointments in a Pediatric Urology Outpatient Setting with an Automated Text Reminder System. Urology Practice. 2023;10(4):374–380.
  7. Kheirkhah P, Feng Q, Travis LM, et al. Prevalence, predictors and economic consequences of no-shows. BMC Health Serv Res. 2016;16:13.