Background: Outpatient no-shows and late cancellations (<24 hrs) represent a significant source of revenue loss, capacity waste, and access inequity in academic urology practices. Published no-show rates in urology range from 10–23%, with limited evidence for specialty-specific predictive interventions.1,2
Objective: To develop and implement a 24-factor predictive risk model integrated with a tiered intervention bundle (AI-powered outreach, strategic overbooking, rapid waitlist backfill) to reduce the no-show/late cancellation rate from 15% to ≤10% within 6 months across all U-M Urology outpatient sites.
Methods: Retrospective analysis of HSDW scheduling data for model development, followed by prospective deployment in a pre/post quasi-experimental design. Primary outcome: composite no-show and late cancellation rate. Secondary: slot utilization, backfill rate, revenue recovery, equity impact.
Expected Significance: One of the first urology-specific predictive model–driven no-show interventions with prospective outcome data, addressing a gap identified by Carreras-Garcia et al. (2023).3
The clinical and financial burden of outpatient no-shows is well-documented but poorly addressed in surgical subspecialties.
Outpatient no-show rates in the US range from 5–55%, with a weighted average of 15–20% across academic medical centers.3,4 In urology, Miah et al. reported rates of 10–23% with concentration among younger patients, Medicaid beneficiaries, and appointments scheduled >30 days out.1
Desai et al. demonstrated that each additional week of lead time increases no-show probability by approximately 14% in an academic urology practice.2 A 2023 JAMIA systematic review found high-certainty evidence for model-driven text reminders but very low certainty for overbooking interventions, and identified no urology-specific predictive intervention studies.3
At the University of Michigan Department of Urology, the current 15% rate translates to approximately 30 empty slots per day and an estimated $2.7M–$4.9M in annual lost revenue, including downstream surgical and procedural revenue.
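The loss estimate can be sanity-checked with back-of-envelope arithmetic. The clinic-day count and per-visit revenue figures below are illustrative assumptions for the check, not published U-M values:

```python
# Back-of-envelope check of the annual revenue-loss range.
# Assumptions (illustrative only): 30 empty slots/day from the abstract,
# ~250 weekday clinic days/year, and $360–$650 revenue per outpatient visit.
empty_slots_per_day = 30
clinic_days_per_year = 250
revenue_per_visit = (360, 650)  # assumed low/high per-visit revenue, $

low, high = (empty_slots_per_day * clinic_days_per_year * r for r in revenue_per_visit)
print(f"${low/1e6:.1f}M–${high/1e6:.1f}M")  # → $2.7M–$4.9M
```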
The model comprises 24 variables sourced from the HSDW data warehouse and surfaced via Tableau Server; every appointment receives a composite no-show probability score.
| # | Variable | HSDW Source | Evidence | Status |
|---|---|---|---|---|
| 1 | Confirmation status | Confirm Status | High | Active |
| 2 | Day of week | Cal Day Name | Mod | Active |
| 3 | Scheduled hour | APPT_SCH_TIME | Mod | Active |
| 4 | Visit modality (virtual/in-person) | Visit Type Category | High | Active |
| 5 | Clinic location | Location Long Desc | Mod | Active |
| 6 | Patient zip code | Patient Zip | High | Active |
| 7 | Patient sex | Patient Gender | Low | Active |
| 8 | Patient race/ethnicity | Patient Race | Mod | Active |
| 9 | Patient age | Age In Days At Appt | Mod | Active |
| 10 | Provider division | Dept Division Desc | Low | Active |
| 11 | Provider type (MD/APP) | Provider Type | Mod | Active |
| 12 | Provider sex | Provider Gender Desc | Low | Active |
| 13 | Seasonality | Cal Appt Sch Date | Mod | Active |
| 14 | Reschedule count | APPT_RESCHEDULE_NUM | High | Active |
| 15 | Appointment status history | Appt Status | High | Active |
| 16 | Prior no-show count | HSDW historical | V.High | Wk 1 |
| 17 | Lead time (days out) | Created vs. Sch Date | V.High | Wk 1 |
| 18 | Insurance type | Payor Plan Primary | High | Wk 2 |
| 19 | Distance to clinic | Zip-to-zip calc | High | Wk 2 |
| 20 | MyChart activation | Portal status flag | High | Wk 2 |
| 21 | Primary diagnosis | Dx Cd / Dx Group | Mod | Wk 3 |
| 22 | Comorbidity burden (CCI) | Problem list | Mod | Wk 4 |
| 23 | Marital status | Marital Status Key | Low | Wk 4 |
| 24 | Patient–provider race concordance | Patient + Provider | Mod | Wk 4 |
Priority note: Variables 16–17 (prior no-show history, lead time) are the two highest-yield additions. Patients with ≥2 prior no-shows exhibit 3–5× the baseline non-attendance rate.4 Each additional week of lead time increases odds by ~14%.2
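The composite score can be sketched as a logistic model over these variables. The coefficients below are hypothetical placeholders (only the lead-time term is anchored to the published ~14% odds increase per week); the real model would be fitted on retrospective HSDW data:

```python
import math

# Minimal sketch of the composite no-show score as a logistic model
# over a subset of the 24 variables. Coefficients are hypothetical,
# except lead_time_weeks, which encodes ~14% odds increase per week.
COEFS = {
    "intercept":       -2.2,
    "prior_no_shows":   0.55,             # per prior no-show (variable 16)
    "lead_time_weeks":  math.log(1.14),   # ~14%/week odds increase (variable 17)
    "unconfirmed":      0.40,             # confirmation status (variable 1)
}

def no_show_probability(prior_no_shows: int, lead_time_weeks: float,
                        unconfirmed: bool) -> float:
    """Composite no-show probability for one appointment."""
    z = (COEFS["intercept"]
         + COEFS["prior_no_shows"] * prior_no_shows
         + COEFS["lead_time_weeks"] * lead_time_weeks
         + COEFS["unconfirmed"] * unconfirmed)
    return 1 / (1 + math.exp(-z))

# A patient with 2 prior no-shows, booked 6 weeks out, unconfirmed:
print(round(no_show_probability(2, 6.0, True), 2))  # → 0.52
```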
Every appointment is scored and classified into an operational tier that determines intervention intensity.
| Metric | Target | Rationale |
|---|---|---|
| AUC-ROC | ≥0.75 | Standard threshold for clinical decision support5 |
| Sensitivity (High-Risk tier) | ≥70% | Capture majority of actual no-shows |
| Positive Predictive Value | ≥40% | High-risk flags are actionable, not noise |
| Calibration | ±5% | Scores map to real probabilities for overbooking |
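The probability-to-tier mapping can be sketched as follows; the cutoffs are illustrative assumptions, and the deployed thresholds would be tuned to meet the sensitivity and PPV targets above:

```python
def risk_tier(p: float) -> str:
    """Map a calibrated no-show probability to an operational tier.
    Cutoffs (0.30, 0.15) are illustrative, not validated thresholds."""
    if p >= 0.30:
        return "high"      # personal calls at 72/48 hrs, MA/RN outreach at 24 hrs
    if p >= 0.15:
        return "moderate"  # enhanced reminder cadence, IVR, staff escalation
    return "low"           # standard automated reminders

print(risk_tier(0.42))  # → high
```

Because overbooking decisions consume the raw probability (not just the tier), the ±5% calibration target matters as much as discrimination.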
The intervention bundle comprises three complementary operational levers, each calibrated to appointment risk tier.
Outreach intensity scales by tier. Low-risk: standard automated reminders. Moderate-risk: enhanced cadence with one-tap confirm links, IVR call, and staff escalation. High-risk: personal scheduling calls at 72 hrs, second attempt at 48 hrs, and clinical staff (MA/RN) outreach at 24 hrs. All attempts documented in Epic.
Strategic overbooking is applied selectively to sessions containing high-risk appointments. It is restricted to follow-up and short visit types; new-patient consults are never overbooked.
A cancellation within 72 hrs triggers ASAP waitlist activation within 5 minutes: automated SMS goes to the top 5–10 eligible patients, and the first confirmed response claims the slot. A “hot list” of patients available on short notice is also maintained.
Phased rollout: operational pilot within 4 weeks, full scale within 12, publication-ready data by month 6.
Infrastructure: No new capital expenditure required. Built entirely on existing Epic/MiChart, HSDW data warehouse, and Tableau Server infrastructure.
All metrics tracked weekly from Day 1, serving dual purposes: operational management and prospective research data.
| Metric | Baseline | Target | Frequency |
|---|---|---|---|
| No-show & late cancellation rate | 15% | ≤10% | Weekly |
| Slot utilization rate | ~85% | ≥92% | Weekly |
| Backfill rate | TBD | ≥70% | Weekly |
| Time-to-fill | TBD | <2 hours | Weekly |
| Overbooking conversion | N/A | ≥75% | Weekly |
| Model AUC-ROC | TBD | ≥0.75 | Monthly |
| Revenue recovered | $0 | Tracked | Monthly |
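The primary outcome in the table above is a simple composite proportion; a minimal sketch, assuming the denominator is all scheduled appointments in the reporting week:

```python
def composite_rate(no_shows: int, late_cancels: int, scheduled: int) -> float:
    """Primary outcome: (no-shows + late cancellations) / scheduled appointments."""
    return (no_shows + late_cancels) / scheduled

# Illustrative week: 30 no-shows + 15 late cancels out of 300 scheduled.
print(f"{composite_rate(30, 15, 300):.0%}")  # → 15%
```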
Revenue projections are conservative estimates based on published per-visit revenue for urology outpatient encounters.
Published systematic reviews report that predictive model–driven interventions achieve 3–8 percentage point reductions,3 consistent with our 5-point target. The projection excludes downstream surgical volume from converted outpatient consults, which represents the majority of revenue in a surgical specialty.
Executing this protocol simultaneously generates the dataset, methods, and results for an original research manuscript.
“Development and Implementation of a Predictive Analytics–Driven No-Show Reduction Program in an Academic Urology Department: A Prospective Quality Improvement Study”
Optimal fit: practice management focus, and the journal that published the Desai et al. and Ziemba et al. studies.2,6
Strong option if results are dramatic or sample size is large.
Rapid turnaround. Well-suited to QI framing.
If the predictive model is the primary contribution.
Manuscript structured per SQUIRE 2.0 guidelines. Introduction: burden + gap. Methods: 24-variable model + tiered intervention. Results: pre/post + subgroup. Discussion: equity, generalizability, limitations.