Key Takeaways
- A channel-agnostic message bus that routes engagement intents to SMS, email, push, or IVR based on patient response patterns is simpler to maintain than channel-specific notification logic scattered across services.
- Message scheduling is harder than message sending. Personalized delivery windows based on observed patient behavior require a feedback loop from engagement signals back into the scheduler.
- Streak-based behavioral nudges ("You have taken your medication 14 days in a row") outperformed loss-framed, gain-framed, and neutral reminder messages in our A/B tests, and the effect did not decay over the 6-month observation window.
- The patient portal reached 72% monthly active users by doing less, not more -- surfacing only medication tracking, lab results, and messaging on the home screen.
- Self-reported adherence via the portal overstated actual adherence (from pharmacy fill data) by about 18%. Ground-truth data sources should be integrated before building engagement logic on top of self-reports.
The Adherence Crisis in Chronic Care
Medication non-adherence in chronic conditions like diabetes, hypertension, and COPD is a well-documented problem. Adherence rates for chronic medications hover around 50-60% in most studies. For healthcare organizations operating under value-based care contracts, non-adherence directly affects quality scores, readmission rates, and reimbursement.
The typical first attempt at solving this is automated appointment reminders -- a robocall the day before a visit. This addresses one narrow slice of the problem (no-shows) and ignores everything else: medication gaps, care plan confusion, inability to navigate scheduling systems, and lack of understanding about why a particular treatment matters. We were asked to build something broader.
Scoping the Engagement Problem
Before writing code, we spent time understanding why patients disengage. The patterns were predictable: patients forget doses because the regimen is complex, they stop medications that do not produce noticeable symptoms (statins, ACE inhibitors), they cannot figure out how to schedule follow-ups, and they do not understand what their lab numbers mean. The engineering problem, then, was not "send more reminders" but "deliver the right information through the right channel at the right time, and track whether it actually worked."
The system needed to support SMS, email, push notifications, and interactive voice response (IVR), integrate with multiple EHR systems and pharmacy data sources, and provide care coordinators with a dashboard to manage the patients who fall through the automated cracks. This post focuses on the architecture of the notification system, the scheduling logic, and what we learned about engagement measurement.
Omni-Channel Communication Architecture
The core design principle was to separate the engagement intent from the delivery channel. The business logic layer produces a structured engagement event ("remind patient X about medication Y at time Z with urgency level N") and the delivery layer decides how to send it. This separation means adding a new channel (say, WhatsApp) requires only a new subscriber, not changes to the engagement logic.
Message Bus and Channel Selection
Engagement events are published to an Azure Service Bus topic. Channel-specific subscribers consume messages: Twilio for SMS, SendGrid for email, Firebase Cloud Messaging for push, and Twilio Programmable Voice for IVR. Each subscriber handles its own retry logic and delivery confirmation. Delivery receipts flow back to a central engagement tracking service that records whether the message was delivered, opened, and acted upon.
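The engagement event itself can be sketched as a small, channel-agnostic payload. The field names below are illustrative rather than our exact schema, and the Service Bus publish call is shown only as a comment since it depends on the Azure SDK.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EngagementEvent:
    """Channel-agnostic engagement intent; the delivery layer picks the channel."""
    patient_id: str
    intent: str          # e.g. "medication_reminder"
    medication: str
    send_at: str         # ISO-8601 start of the delivery window
    urgency: int         # 1 (routine) .. 3 (escalation)

def to_message_body(event: EngagementEvent) -> bytes:
    """Serialize the event for publishing to the Service Bus topic."""
    return json.dumps(asdict(event)).encode("utf-8")

# With the real SDK, publishing looks roughly like:
#   sender = client.get_topic_sender(topic_name="engagement-events")
#   sender.send_messages(ServiceBusMessage(to_message_body(event)))
```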
Channel selection starts with the patient's stated preference at onboarding, then adapts based on engagement data. The algorithm is simple: track read/response rates per channel per patient over a 30-day rolling window. If a patient consistently reads emails within 2 hours but ignores SMS, shift priority to email. If push notifications get dismissed without opening, fall back to SMS. The adaptation is not ML -- it is a weighted scoring function that re-evaluates weekly. Over time, read rates improved from around 34% to about 70% as the channel selection converged on each patient's preferred channel.
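A minimal sketch of that weighted scoring function, assuming illustrative weights (the 0.6/0.4 split and the stated-preference bias are not our production values):

```python
def pick_channel(stats, stated_preference, preference_weight=0.2):
    """Score each channel on 30-day rolling read and response rates,
    blended with a sticky bias toward the patient's stated preference.

    stats: {channel: {"sent": int, "read": int, "responded": int}}
    """
    best, best_score = stated_preference, 0.0
    for channel, s in stats.items():
        if s["sent"] == 0:
            continue  # no data for this channel yet
        read_rate = s["read"] / s["sent"]
        response_rate = s["responded"] / s["sent"]
        score = 0.6 * read_rate + 0.4 * response_rate
        if channel == stated_preference:
            score += preference_weight  # keep the stated choice sticky
        if score > best_score:
            best, best_score = channel, score
    return best
```

Re-running this weekly per patient is what gradually shifts delivery toward whichever channel the patient actually reads.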
Message Template Engine
Clinical content is managed through a template builder that supports variable injection, conditional blocks, and A/B variant assignment. Templates are authored by clinical staff, versioned, and require approval before activation. Each template is channel-aware -- an SMS version (limited to 160 characters with a deep link) differs from the email version (rich HTML with educational content).
- SMS: 160 characters max. Includes a short deep link to the patient portal for detail. Delivery rates are consistently high (around 98%) but engagement depends heavily on timing.
- Email: Rich HTML with medication images and one-click action buttons. Open rates settled around 50% once channel selection converged.
- Push notifications: Short alerts with action buttons (confirm, snooze, dismiss). Tap-through rates averaged around 35-40%.
- IVR: Used for high-priority escalations and patients without smartphones. Call completion rates were about 60%, which is lower than digital channels but reaches a population that digital channels miss entirely.
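The channel-aware template idea can be sketched with plain string templating. The template content and IDs below are illustrative, not our clinical copy; the real engine adds conditional blocks, versioning, and approval workflow on top of this.

```python
import string

TEMPLATES = {
    # One logical template, channel-aware variants (content is illustrative).
    ("med_reminder", "sms"):   "Time to take your $med $dose. Confirm: $link",
    ("med_reminder", "email"): "<p>Hi $first_name, it is time to take your $med $dose.</p>",
}

def render(template_id, channel, variables, sms_limit=160):
    """Inject variables into the channel-specific variant; enforce the SMS cap."""
    text = string.Template(TEMPLATES[(template_id, channel)]).substitute(variables)
    if channel == "sms" and len(text) > sms_limit:
        raise ValueError("SMS variant exceeds 160 characters")
    return text
```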
Behavioral Nudge Engine
Reminders are table stakes. The more interesting engineering problem is deciding when and how to frame messages. We built a configurable nudge engine based on four behavioral patterns from the literature: implementation intentions (linking the action to a specific time and context), loss aversion, social proof, and commitment devices (streaks).
Timing Optimization
Fixed-schedule reminders (e.g., send at 8:00 AM every day) are a poor fit for a patient population with varying daily routines. The scheduling service learns each patient's medication-taking window from portal check-in timestamps and adjusts the nudge delivery time to 15 minutes before the observed pattern. For patients without enough data, it defaults to the prescribed time. This is implemented as a per-patient cron schedule stored in a Redis sorted set, evaluated every minute by a scheduler worker.
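The time-learning step can be sketched as follows. The minimum observation count and the choice of median are illustrative assumptions; in production the result is what gets written into the Redis sorted set for the scheduler worker to poll.

```python
from datetime import datetime, timedelta
from statistics import median

def nudge_time(confirm_times, prescribed, lead=timedelta(minutes=15), min_obs=7):
    """Pick the nudge delivery time from observed confirmation timestamps.

    confirm_times: recent datetimes when the patient confirmed a dose.
    Falls back to the prescribed time when there is too little data.
    """
    if len(confirm_times) < min_obs:
        return prescribed
    # Median minute-of-day of observed confirmations, minus the lead time.
    minutes = median(t.hour * 60 + t.minute for t in confirm_times)
    target = timedelta(minutes=minutes) - lead
    base = prescribed.replace(hour=0, minute=0, second=0, microsecond=0)
    return base + target
```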
Personalized timing improved same-day medication confirmation rates by about 25-30% compared to a fixed 8:00 AM send. The improvement was largest for patients with non-standard schedules (shift workers, retirees with late routines), which makes sense -- a reminder at 7:00 AM is useless if the patient wakes at noon.
Message Framing Strategies
We ran A/B tests across four framing strategies over 60 days:
- Neutral: "Time to take your Metformin 500mg." Confirmation rate: ~71%.
- Loss-framed: "Missing your Metformin today could raise your A1C by next visit." Confirmation rate: ~78%.
- Gain-framed: "Taking your Metformin keeps your blood sugar in the target range." Confirmation rate: ~76%.
- Streak-based: "You have taken your Metformin 14 days in a row. Keep it going!" Confirmation rate: ~83%.
The streak-based framing won, and the effect held up over 6 months without visible fatigue. We suspect this works because it reframes adherence from a chore ("take your pill") to a game ("maintain your streak"). Importantly, when a streak breaks, the message shifts to "Start a new streak today" rather than highlighting the lapse. The engine dynamically switches framing if a patient's confirmation rate drops below their personal baseline for three consecutive days, cycling through alternatives to find what re-engages them.
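The framing-switch rule reduces to a few lines. The cycle order below is an assumption for illustration; the three-consecutive-days trigger is the one described above.

```python
FRAMINGS = ["streak", "loss", "gain", "neutral"]  # cycle order is illustrative

def next_framing(current, daily_rates, baseline, below_days=3):
    """Switch framing when the confirmation rate sits below the patient's
    personal baseline for `below_days` consecutive days; otherwise keep it."""
    recent = daily_rates[-below_days:]
    if len(recent) == below_days and all(r < baseline for r in recent):
        return FRAMINGS[(FRAMINGS.index(current) + 1) % len(FRAMINGS)]
    return current
```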
Escalation Protocols
When automated nudges fail (two consecutive missed confirmations or a missed appointment), the system escalates through a defined sequence: an additional SMS with a simplified one-tap confirmation link, then a push notification, then an IVR call. If all automated channels fail within 48 hours, a task is created in the care coordinator's dashboard for manual outreach. This layered approach is modeled as a simple state machine with configurable timeouts at each step.
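That state machine can be sketched directly. The per-step timeouts are illustrative (they sum to the 48-hour window described above); the real system stores them as configuration.

```python
ESCALATION = [
    # (step, hours before advancing) -- timeouts are illustrative
    ("sms_one_tap", 12),
    ("push", 12),
    ("ivr_call", 24),
    ("coordinator_task", None),  # terminal: manual outreach
]

def escalate(state, hours_elapsed, confirmed):
    """Advance the escalation state machine one evaluation tick."""
    if confirmed:
        return "resolved"
    steps = [s for s, _ in ESCALATION]
    idx = steps.index(state)
    timeout = ESCALATION[idx][1]
    if timeout is not None and hours_elapsed >= timeout:
        return steps[idx + 1]
    return state
```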
Compliance Tracking and Risk Scoring
Reacting to missed doses after the fact is less effective than identifying patients who are drifting toward non-adherence before they lapse. We built a scoring model that generates a daily risk score for each patient, enabling care coordinators to prioritize proactive outreach.
Feature Engineering
The model uses about 50 features across four categories: demographics (age, distance from clinic, insurance type), behavioral signals (portal login frequency, message response latency, appointment rescheduling history), clinical indicators (number of active medications, comorbidity count, recent hospitalization), and social factors (transportation access, caregiver presence). We trained a gradient-boosted tree model (XGBoost) on 18 months of historical data.
The three most predictive features were pharmacy refill gap (days since last refill vs. expected refill date), portal login recency, and the number of active medications. Scores above 70 (on a 0-100 scale) trigger proactive outreach. In validation, the model predicted non-adherence events about 2 weeks in advance with roughly 89% precision and 76% recall. The precision-recall trade-off is tunable -- we chose a threshold that favored precision because false-positive outreach wastes coordinator time and can feel intrusive to patients.
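Turning the daily scores into a coordinator worklist is deliberately simple. The daily capacity cap below is an assumption for illustration; the 70-point threshold is the one described above.

```python
def outreach_queue(scores, threshold=70, capacity=25):
    """Turn daily risk scores into a prioritized outreach list.

    scores: {patient_id: risk score on a 0-100 scale}
    Returns patients at or above the threshold, highest risk first,
    capped at what a coordinator can work in a day (capacity is illustrative).
    """
    flagged = [(pid, s) for pid, s in scores.items() if s >= threshold]
    flagged.sort(key=lambda item: -item[1])
    return [pid for pid, _ in flagged[:capacity]]
```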
Care Coordinator Dashboard
The dashboard is a React application with D3.js visualizations. It surfaces a prioritized patient list sorted by risk score, with drill-down views into each patient's engagement timeline (messages sent, messages read, confirmations, portal logins, pharmacy refill events). Care coordinators can schedule a call, send a custom message, or flag a patient for physician review directly from the dashboard. The data updates every 15 minutes from the engagement tracking service.
The population-level view lets clinical leadership spot systemic patterns. For example, we noticed that patients on more than 5 concurrent medications had roughly 3x the non-adherence risk. This was not a surprise to anyone in clinical practice, but having the data quantified prompted the organization to implement medication therapy management reviews for polypharmacy patients.
Patient Portal Design
Most patient portals have terrible engagement. Industry averages for monthly active users sit around 15-25%. The portals that fail tend to be feature-stuffed: appointment scheduling, bill pay, medical records, messaging, forms, educational content, all crammed into a navigation menu. Patients open the app, cannot find what they need, and do not come back.
Mobile-First Design Decisions
Usage data from the organization's existing portal showed 81% of logins from mobile devices. We built with React Native for iOS and Android, with a responsive web fallback. The key design decision was radical simplicity on the home screen: today's medication schedule, pending lab results, and upcoming appointments. That is it. No hamburger menu with 15 items. Everything a patient needs for their daily health management is visible in the first scroll.
Medication tracking uses a tap-to-confirm interaction. Each medication appears as a card with the drug name, dosage, and a large confirm button. Tapping records the timestamp and updates the streak counter. We added a weekly adherence heat map that makes patterns visible at a glance -- a row of green squares with a red gap on Wednesday tells the patient more than a percentage ever could. The portal reached 72% monthly active users, which we attribute primarily to the simplicity. Patients open the app because they know exactly what they are going to do: confirm their meds and check their labs.
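The streak logic behind the confirm button fits in a few lines; this is a sketch of the rule, not our exact implementation:

```python
from datetime import date, timedelta

def update_streak(streak, last_confirmed, today):
    """Update the streak counter on a tap-to-confirm event."""
    if last_confirmed == today:
        return streak                              # already confirmed today
    if last_confirmed == today - timedelta(days=1):
        return streak + 1                          # consecutive day extends it
    return 1                                       # gap: start a new streak
```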
Accessibility and Health Literacy
All patient-facing content is written at a 6th-grade reading level. Lab results include visual indicators (green/yellow/red reference ranges) with plain-language explanations. The portal supports English and Spanish. All interactive elements meet WCAG 2.1 AA, and the medication confirmation flow works with screen readers and voice control. These are not premium features -- they are baseline requirements when your user population includes elderly patients, patients with limited English proficiency, and patients with visual impairments.
Integration with EHR and Pharmacy Systems
A notification system without clinical data integration is just a messaging tool. The engagement platform's value depends on its ability to pull real-time data from EHRs and pharmacy systems, act on it, and write outcomes back to the clinical record.
EHR Integration via FHIR and HL7
We integrated with four EHR systems: Epic (FHIR R4), Athenahealth (proprietary REST API), eClinicalWorks (HL7 v2 ADT/ORM messages), and Allscripts (Unity API). Each integration pulls patient demographics, medication lists, lab results, appointments, and care plans. We built an abstraction layer that normalizes data from all four into a unified patient model. The engagement engine operates on this canonical model, which means adding a new EHR is an integration task, not a rewrite of engagement logic.
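The normalization idea looks roughly like this: one adapter per source, all emitting the same canonical shape. The FHIR field access follows the R4 Patient resource; the Athenahealth field names are an assumption for illustration, and the canonical model here is trimmed to a few fields.

```python
def normalize_epic(fhir_patient):
    """Map a FHIR R4 Patient resource to the canonical model (simplified)."""
    name = fhir_patient["name"][0]
    return {
        "patient_id": fhir_patient["id"],
        "first_name": name["given"][0],
        "last_name": name["family"],
        "birth_date": fhir_patient["birthDate"],
    }

def normalize_athena(athena_patient):
    """Map an Athenahealth REST payload to the same canonical model.
    (Field names on the Athena side are illustrative.)"""
    return {
        "patient_id": str(athena_patient["patientid"]),
        "first_name": athena_patient["firstname"],
        "last_name": athena_patient["lastname"],
        "birth_date": athena_patient["dob"],
    }
```

Because every adapter produces the same keys, the engagement engine never branches on which EHR a patient came from.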
Bidirectional sync writes engagement data back as clinical notes. When a patient confirms medication adherence through the portal, a structured note is appended to their chart. When a care coordinator completes a follow-up call, the summary is written back as an encounter note. This closed-loop approach means the care team has visibility into engagement without leaving their EHR.
Pharmacy Data as Ground Truth
We connected to pharmacy benefit managers via NCPDP SCRIPT messaging to pull real-time prescription fill data. This is the closest thing to ground truth for medication adherence -- a patient either filled their prescription or they did not. When a refill is overdue by more than 3 days, the system triggers an adherence intervention. When a fill is processed, the adherence record updates automatically.
- Fill verification: Pharmacy fill data matched self-reported portal adherence about 82% of the time. The 18% gap identified patients who were tapping "confirm" in the portal but not actually filling prescriptions. This was the most important finding of the entire project -- self-reported data was unreliable enough that we could not build trustworthy engagement logic on top of it alone.
- Refill gap detection: Time to detect a missed refill dropped from about 30 days (caught at the next office visit) to 3 days (automated pharmacy data sync).
- Prior authorization alerts: The system detects PA rejections from pharmacy data and notifies the care coordinator to initiate the PA process before the patient runs out of medication.
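The refill-gap check itself is simple date arithmetic over the fill feed; this is a sketch using the 3-day grace period described above:

```python
from datetime import date, timedelta

def refill_overdue(last_fill, days_supply, today, grace_days=3):
    """Flag a refill as overdue once it is more than `grace_days` past the
    expected refill date (last fill date + days supply)."""
    expected = last_fill + timedelta(days=days_supply)
    return today > expected + timedelta(days=grace_days)
```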
Results and Clinical Outcomes
Here is what we observed after 8 months of production use. A few caveats: this was not a randomized controlled trial. We compared post-deployment metrics to historical baselines from the same population, which means confounding factors (seasonal variation, staffing changes, other process improvements) could account for some of the change.
- Medication adherence (PDC from pharmacy data): Proportion of days covered improved from about 67% at baseline to 89% at 8 months, crossing the HEDIS 80% threshold that affects quality bonus payments. Self-reported portal data put adherence at 95% over the same period, which, as noted above, is inflated.
- No-show rate: Dropped from 23% to around 8%. Multi-channel reminders with escalation were the primary driver.
- 30-day readmissions: Declined by roughly 25-30% for heart failure and COPD patients. Attributing this directly to the engagement system is difficult because the organization also made other care management changes during this period.
- Care coordinator case load: Each coordinator managed around 320 patients, up from 180. The automated engagement handled routine touchpoints, letting coordinators focus on complex cases.
- Portal MAU: 72%, compared to the industry average of roughly 20%. We believe this is primarily due to the simplified home screen and the daily medication confirmation habit loop.
What We Would Do Differently
We should have integrated pharmacy fill data before building any engagement logic. The first two months of the project relied on self-reported adherence from the portal, which overstated actual adherence by 18%. This meant the risk model was trained on noisy labels initially, and its accuracy improved substantially once pharmacy data was available. If we were starting this project again, pharmacy integration would be sprint one.
We also underinvested in the care coordinator dashboard early on. The initial design prioritized the patient-facing features, but coordinators are the operational backbone of engagement programs. They are the fallback when automation fails. Getting their tools right earlier would have shortened the time to full adoption.
Building Patient Engagement Infrastructure?
We have built multi-channel notification systems that integrate with EHRs and pharmacy networks. If you are working on a similar problem, we are happy to share what we have learned.