Next Best Action Recommender
Accelerating revenue for a large RCM company by improving operator decisions
Generative AI
Recommendation Systems

About the Company

UnisLink is a market leader in transforming revenue cycle operations for independent physician practices. By integrating revenue cycle intelligence, state-of-the-art technology, and a high-touch approach into a unified platform, UnisLink enables healthcare providers to maximize their reimbursements. Their full-spectrum, specialty-specific services include credentialing, payer administration, eligibility verification, claims processing, denials management, patient engagement, and more.

With a strong operational team of nearly 1000 agents handling over 100,000 claims daily, UnisLink is committed to bringing efficiency, accuracy, and predictability to healthcare revenue operations.

Problems We Solved

1. Scaling Expertise for Complex Claim Resolution

UnisLink processes high volumes of claims that require understanding of:

  • 70,000+ diagnosis codes
  • 10,000+ CPT codes
  • 500+ insurance providers with varied rules
  • 300+ client-specific protocols
  • 10+ disparate data systems

Agents faced a long training curve because of the sheer number of claim permutations, a problem compounded by high attrition. This led to:

  • Delays in claim resolution
  • Inconsistent agent performance
  • Revenue loss due to suboptimal resolution
  • Wasted or duplicated agent effort

2. Unusable Agent Comments Due to Lack of Structure

Agents routinely log notes on claim actions, blockers, and next steps. These comments, however, were:

  • Unstructured and riddled with abbreviations and typos
  • Not machine-readable
  • Sometimes visible to patients, yet difficult for them to read

This hindered audit trails, automated insights, and clear communication with patients.

3. Suboptimal Allocation of Incoming Claims

UnisLink handles over 600,000 claims every week. Some claims:

  • Don’t require human intervention
  • Are similar to previously resolved claims
  • Could be fast-tracked or deprioritized based on historical data

However, in the absence of smart allocation logic, agents spent time on predictable, already-understood scenarios, leading to inefficiencies.

Approach & Technical Challenges

1. AI-Powered Claim Resolution Recommendation Engine

Solution:

An LLM-powered recommendation engine provides claim agents with context-specific action suggestions based on:
  • Diagnosis and procedure codes
  • Insurance and practice metadata
  • Patient (PHI-redacted) information
  • Historical claim activity
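To make the idea concrete, here is a minimal sketch of how such claim context might be assembled into a prompt and sent to an Azure-hosted model. The field names, the prompt wording, and the deployment name are illustrative assumptions, not UnisLink's production schema; the sketch assumes the Azure OpenAI Python SDK (openai>=1.x).

```python
# Illustrative sketch: assembling a context-specific recommendation prompt.
# Claim field names and the prompt template are hypothetical, not UnisLink's schema.
from openai import AzureOpenAI  # assumes the Azure OpenAI Python SDK (openai>=1.x)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<redacted>",
    api_version="2024-02-01",
)

def build_recommendation_prompt(claim: dict) -> str:
    """Flatten PHI-redacted claim context into a single prompt."""
    return (
        "You are an RCM claims-resolution assistant.\n"
        f"Diagnosis codes (ICD-10): {', '.join(claim['diagnosis_codes'])}\n"
        f"Procedure codes (CPT): {', '.join(claim['cpt_codes'])}\n"
        f"Payer: {claim['payer']} | Practice: {claim['practice_id']}\n"
        f"Denial reason: {claim['denial_reason']}\n"
        f"History: {claim['history_summary']}\n"
        "Recommend the next best action for the agent, citing the payer rule "
        "or client protocol that applies."
    )

def recommend_action(claim: dict) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # deployment name is an assumption
        messages=[{"role": "user", "content": build_recommendation_prompt(claim)}],
        temperature=0.1,
    )
    return response.choices[0].message.content
```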
Key Technical Challenges & Solutions:
  • Industry alignment: LLMs were enhanced with RCM-specific knowledge via prompt engineering.
  • Compliance: PHI redaction pre-LLM and secure re-insertion post-processing.
  • Azure rate limits: Managed via a queue-based throttling system.
  • Guardrails: Built to prevent hallucinations and inappropriate responses.
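One way to realize the queue-based throttling mentioned above is a worker that drains a job queue while spacing out LLM calls; the sketch below uses an in-process queue and a fixed requests-per-minute budget purely for illustration, whereas a production system would more likely sit behind a durable queue such as Azure Service Bus.

```python
# Illustrative queue-based throttling sketch (in-process; a durable queue such as
# Azure Service Bus would replace queue.Queue in production).
import queue
import threading
import time

class ThrottledLLMWorker:
    def __init__(self, call_llm, requests_per_minute: int = 60):
        self.call_llm = call_llm                  # e.g. recommend_action from the sketch above
        self.interval = 60.0 / requests_per_minute
        self.jobs: queue.Queue = queue.Queue()
        self.results: dict[str, str] = {}

    def submit(self, claim_id: str, claim: dict) -> None:
        self.jobs.put((claim_id, claim))

    def run(self) -> None:
        """Drain the queue, spacing calls to stay under the provider's rate limit."""
        while True:
            claim_id, claim = self.jobs.get()
            try:
                self.results[claim_id] = self.call_llm(claim)
            except Exception:                     # e.g. a 429: back off and re-queue
                time.sleep(self.interval * 5)
                self.jobs.put((claim_id, claim))
            finally:
                self.jobs.task_done()
            time.sleep(self.interval)

worker = ThrottledLLMWorker(call_llm=lambda claim: "...", requests_per_minute=30)
threading.Thread(target=worker.run, daemon=True).start()
```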

2. AI-Based Enhancement of Agent Comments

Solution:

A prompt-tuned LLM enhances agent-written comments to ensure:
  • Readability
  • Accurate description of the actions taken
  • Corrected grammar and spelling
  • Expanded domain-specific abbreviations
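As a rough illustration of the kind of prompt tuning involved, the sketch below builds an enhancement prompt with an abbreviation glossary and a "do not add facts" constraint. The glossary entries and instructions are assumptions, not the actual production prompt.

```python
# Illustrative enhancement prompt; the abbreviation map and instructions are
# assumptions about the kind of prompt tuning described, not the actual prompt.
ABBREVIATIONS = {
    "EOB": "Explanation of Benefits",
    "COB": "coordination of benefits",
    "TFL": "timely filing limit",
}  # sample entries only

def build_enhancement_prompt(raw_comment: str) -> str:
    glossary = "; ".join(f"{k} = {v}" for k, v in ABBREVIATIONS.items())
    return (
        "Rewrite the following claim note so it is clear and grammatical.\n"
        "Do not add facts or change the actions described.\n"
        f"Expand these abbreviations where they appear: {glossary}.\n\n"
        f"Note: {raw_comment}"
    )
```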
Key Technical Challenges & Solutions:
  • PHI Safety: PHI redacted before enhancement and restored afterwards, with observability at each step.
  • High Variance Load Handling: Multithreaded serverless architecture optimized for concurrent processing.
  • User Experience: UI messaging built to mask latency (3–4 seconds) and keep users engaged.
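A minimal sketch of the redact-then-restore pattern used for PHI safety: detected identifiers are replaced with placeholders before the LLM call and swapped back into the model's output afterwards. The regex patterns and placeholder format are simplifications; a production system would rely on a dedicated PHI/PII detection service.

```python
# Illustrative redact/restore pattern for PHI safety. The regexes below catch only
# simple patterns (SSNs, phone numbers, dates of birth) and stand in for a proper
# PHI-detection service.
import re

PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace PHI with numbered placeholders and remember the originals."""
    mapping: dict[str, str] = {}
    for label, pattern in PHI_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token, 1)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original PHI after the LLM has returned its output."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```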

3. Intelligent Claim Allocation via Historical Nearest Neighbour Matching

Solution:

We developed a categorical nearest neighbour engine to:
  • Compare new denied claims with historically paid ones
  • Score similarity based on practice, payer, diagnosis, and procedure codes
  • Auto-validate eligibility, credentialing, and POS checks
  • Enable fast-track resolution for repeat scenarios
  • Deprioritize claims likely to self-resolve
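At its core, this is a weighted match count over categorical claim attributes. The sketch below shows one way such scoring can work; the attribute list, weights, and threshold are hypothetical values chosen for illustration.

```python
# Illustrative categorical similarity scoring between a new denied claim and
# historically paid claims. Attributes and weights are assumptions for the sketch.
WEIGHTS = {"practice_id": 1.0, "payer": 2.0, "diagnosis_code": 3.0, "cpt_code": 3.0}

def similarity(denied: dict, paid: dict) -> float:
    """Weighted fraction of matching categorical attributes (0.0 to 1.0)."""
    matched = sum(w for attr, w in WEIGHTS.items() if denied.get(attr) == paid.get(attr))
    return matched / sum(WEIGHTS.values())

def best_match(denied: dict, paid_history: list[dict], threshold: float = 0.8):
    """Return the most similar historically paid claim, if any clears the threshold."""
    closest = max(paid_history, key=lambda paid: similarity(denied, paid), default=None)
    if closest is not None and similarity(denied, closest) >= threshold:
        return closest
    return None
```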
Key Technical Challenges & Solutions:
  • High-speed Processing: Reduced claim matching time from 10s to 0.25s via a vectorized matching algorithm.
  • Scale Management:
    • Async queue-based architecture
    • Parallel processing across 300+ practices
    • Optimized SQL stored procedures and batched serverless compute
  • Error Resilience:
    • Managed message lock durations
    • Parallelized DB reads/writes
    • Memory-aware batch execution
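The speed-up from roughly 10s to 0.25s came from vectorizing the comparison so that one denied claim is scored against the entire history in a single pass rather than in a Python loop. The sketch below shows what that can look like with NumPy; the column layout, integer encoding, and weights are assumptions carried over from the previous sketch.

```python
# Illustrative vectorized version of the categorical match: one denied claim is
# scored against the full history in a single NumPy pass instead of a Python loop.
import numpy as np

# Columns: practice_id, payer, diagnosis_code, cpt_code (integer-encoded categories).
WEIGHTS = np.array([1.0, 2.0, 3.0, 3.0])

def vectorized_scores(denied_row: np.ndarray, history: np.ndarray) -> np.ndarray:
    """history has shape (n_claims, 4); returns one similarity score per row."""
    matches = (history == denied_row)            # boolean matrix, broadcast over rows
    return (matches * WEIGHTS).sum(axis=1) / WEIGHTS.sum()

history = np.array([[12, 7, 250, 99213],
                    [12, 7, 250, 99214],
                    [44, 3, 401, 99212]])
denied = np.array([12, 7, 250, 99214])
scores = vectorized_scores(denied, history)
best = int(np.argmax(scores))                    # index of the closest paid claim
```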

Impact

Claim Recommendation Engine

Preliminary UAT-phase results:

  • +3% increase in collection
  • +12% improvement in liquidation
  • +3% faster month-on-month collection

Comment Enhancement Tool

  • Structured and readable logs for internal audit and patient communication
  • Reduced cognitive load on agents, improved communication quality

Claim Allocation Optimization

  • Prioritized high-impact claims
  • Avoided unnecessary agent effort on predictable claims
  • Accelerated throughput without adding headcount
