Avoiding AI-Driven Upcoding: How Providers Can Embrace Innovation Without Inviting Audits

The future of healthcare is undeniably intertwined with artificial intelligence (AI). But for all its promise, AI has become a double-edged scalpel – especially when it comes to billing in federal healthcare programs.

A recent Reuters Legal News analysis highlighted a $23 million False Claims Act (FCA) settlement by an academic medical center related to an automated coding system that improperly assigned CPT® codes to emergency department visits. The U.S. Department of Justice (DOJ) alleged that the miscoding led to overpayments from Medicare and Medicaid. The case underscores a growing regulatory theme: AI-assisted “upcoding” – when AI assigns codes that don’t match the documentation – isn’t just a software glitch. It’s a compliance risk.

“Although the focus of President Trump’s Jan. 23, 2025 AI executive order is primarily on removing barriers that inhibit AI growth,” the analysis read, “DOJ and whistleblowers can be expected to monitor at least for traditional concerns that AI is resulting in over-billing.”

AI + Coding = A Risky Romance

Let’s start with the basic tension: AI is great at speed, but not at nuance. When applied to documentation or coding, natural language processing (NLP) and machine learning tools can misinterpret a physician’s notes or assign codes based on incomplete context. In many cases, AI may infer services or diagnoses that were implied but not explicitly documented, thereby inflating claim values.

This is especially tempting in evaluation & management (E&M) coding, where subtle language shifts – like “reviewed by physician” versus “performed by physician” – can change billing levels. If an AI system “reads between the lines” and bumps the level of service without appropriate justification in the record, it becomes textbook upcoding.
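To make the failure mode concrete, consider a deliberately naive, hypothetical rule (every name in it is invented for illustration) that treats “reviewed by physician” the same as “performed by physician” and bumps the service level without documentation to support it:

    # Hypothetical illustration only: a naive rule that "reads between the lines."
    def naive_em_level(note: str) -> int:
        """Assign an E&M level from free text -- the WRONG way."""
        level = 3
        # The flaw: any "by physician" phrase is treated as the physician
        # performing the service, so "reviewed by physician" triggers the
        # same level bump as "performed by physician".
        if "by physician" in note.lower():
            level = 4
        return level

    print(naive_em_level("Labs reviewed by physician; exam performed by NP."))  # -> 4

The record supports physician review, not physician performance, yet the rule inflates the level anyway. That is exactly the pattern regulators describe as upcoding.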

Tips to Stay Compliant While Using AI

So, how can providers enjoy the efficiency of AI tools without triggering FCA liability?

1. Human-in-the-Loop Review

No matter how “smart” the system, the final coding decision should rest with a qualified professional. Providers should:

  • Require coder review of all AI-suggested codes;
  • Use audit trails to show who accepted or overrode AI decisions; and
  • Clearly log any edits or justifications made during review.

This not only prevents overreliance on automation, but also gives the organization legal defensibility during a post-payment review.
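
To show what such an audit trail might capture, here is a minimal Python sketch. The record fields (suggested_code, final_code, reviewer, rationale) are illustrative assumptions, not a standard schema or any particular vendor’s API:

    # A minimal sketch of an AI-coding audit trail; field names are
    # illustrative assumptions, not a standard schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class CodeReviewEvent:
        claim_id: str
        suggested_code: str   # what the AI proposed
        final_code: str       # what the coder actually submitted
        reviewer: str         # who made the final call
        rationale: str        # justification for any override
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        @property
        def overridden(self) -> bool:
            return self.suggested_code != self.final_code

    audit_log: list[CodeReviewEvent] = []
    audit_log.append(CodeReviewEvent(
        claim_id="CLM-1001",
        suggested_code="99285",  # AI proposed a level 5 ED visit
        final_code="99284",      # coder downgraded after reviewing the record
        reviewer="coder_jdoe",
        rationale="Documented MDM supports level 4, not level 5.",
    ))
    print(audit_log[0].overridden)  # -> True

Even a lightweight log like this answers the questions a post-payment reviewer will ask: what the system proposed, what a human submitted, who decided, and why.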

2. Train AI with Quality Data

Your AI model is only as accurate as the data it learns from. Be wary of:

  • Pre-trained vendor models based on generalized or proprietary datasets;
  • Systems that aren’t regularly updated with current CPT/ICD codes; and
  • Models that learn from your organization’s own historical coding errors.

Instead, work with vendors that allow transparent retraining, ongoing updates, and clinician review of AI logic.
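
One inexpensive guardrail against the stale-code-set risk above is to validate every AI suggestion against the current code set before it reaches a coder. A minimal sketch, assuming you maintain a code list synced to the annual CPT/ICD-10 updates (the set below is an illustrative stand-in):

    # A minimal sketch of a guardrail that rejects AI-suggested codes missing
    # from the current code set. CURRENT_CPT_CODES is an illustrative stand-in;
    # load yours from a feed synced to the annual CPT/ICD-10 updates.
    CURRENT_CPT_CODES = {"99283", "99284", "99285"}  # tiny illustrative subset

    def validate_suggestions(suggested: list[str]) -> tuple[list[str], list[str]]:
        """Split AI suggestions into recognized codes and ones needing review."""
        valid = [c for c in suggested if c in CURRENT_CPT_CODES]
        rejected = [c for c in suggested if c not in CURRENT_CPT_CODES]
        return valid, rejected

    valid, rejected = validate_suggestions(["99284", "99999"])
    print(valid, rejected)  # -> ['99284'] ['99999']  (stale/invalid codes flagged)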

3. Ensure Documentation Integrity

AI should never guess. If a note doesn’t document a procedure or diagnosis, AI shouldn’t invent one. Train staff to recognize that coding must match exactly what’s in the medical record – and nothing more.

Create internal policies that require:

  • Verifiable linkage between documentation and each CPT/HCPCS/ICD-10 code;
  • Periodic compliance audits comparing AI-generated coding to source notes (a sketch follows this list); and
  • Provider sign-off on AI-generated electronic health record (EHR) text before it becomes billable.
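
For the audit requirement above, a minimal sketch of the core check follows. The evidence_links mapping is hypothetical; in practice it would come from your EHR or coding platform:

    # A minimal sketch of the linkage check; the evidence_links mapping is
    # hypothetical and would come from your EHR or coding platform in practice.
    def audit_claim(billed_codes: set[str], evidence_links: dict[str, str]) -> list[str]:
        """Return billed codes with no documented support in the record."""
        return sorted(code for code in billed_codes if code not in evidence_links)

    evidence_links = {"99284": "ED note 2025-03-14, MDM section"}
    unsupported = audit_claim({"99284", "93000"}, evidence_links)
    print(unsupported)  # -> ['93000'], an EKG code with no linked documentation

Any code that surfaces without a documentation link becomes a review item, not a claim line.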

4. Limit Use of Predictive Prompts

Some AI systems now offer predictive prompts, suggesting diagnoses or codes based on phrasing. These can be useful, but they become a trap if providers start confirming suggested codes that don’t align with their clinical assessment.

Establish boundaries around prompt use, and train providers to ignore or remove inaccurate suggestions before finalizing documentation.
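
One enforceable boundary is “discard by default”: a suggestion never enters the billable record unless a provider affirmatively accepts it. A minimal sketch under that assumption (the Suggestion type and acceptance flow are invented, not a real EHR API):

    # A minimal sketch of "discard by default": suggestions never become
    # billable unless explicitly accepted. The Suggestion type and accept
    # flow are invented for illustration, not a real EHR API.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        code: str
        accepted: bool = False  # default: rejected

    def billable_codes(suggestions: list[Suggestion]) -> list[str]:
        """Only explicitly accepted suggestions survive to billing."""
        return [s.code for s in suggestions if s.accepted]

    prompts = [Suggestion("99285"), Suggestion("99284", accepted=True)]
    print(billable_codes(prompts))  # -> ['99284']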

5. Monitor AI Impact on Billing Patterns

If your AI system leads to a measurable increase in your average case mix index (CMI) or E&M levels, that may attract payor scrutiny. Use dashboards to:

  • Monitor coding distribution trends before and after AI implementation (see the sketch after this list);
  • Flag outliers for peer review; and
  • Compare AI-influenced results to manual benchmarks.
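
Here is a minimal sketch of the before-and-after comparison, using made-up E&M level data and an illustrative five-percentage-point trigger rather than any regulatory benchmark:

    # A minimal sketch of a before/after distribution check on E&M levels,
    # with made-up data and an illustrative 5-percentage-point trigger.
    from collections import Counter

    def level_shares(levels: list[int]) -> dict[int, float]:
        counts = Counter(levels)
        total = sum(counts.values())
        return {lvl: n / total for lvl, n in counts.items()}

    pre_ai  = [3, 3, 4, 3, 4, 3, 5, 4, 3, 3]  # manual-coding baseline
    post_ai = [4, 4, 5, 4, 5, 4, 5, 4, 3, 4]  # after AI rollout

    pre, post = level_shares(pre_ai), level_shares(post_ai)
    for lvl in sorted(set(pre) | set(post)):
        shift = post.get(lvl, 0) - pre.get(lvl, 0)
        if abs(shift) > 0.05:  # flag swings above 5 points for peer review
            print(f"Level {lvl}: {pre.get(lvl, 0):.0%} -> {post.get(lvl, 0):.0%}")

In this toy data, the level 4 and level 5 shares jump sharply after rollout, which is precisely the kind of shift that merits peer review before a payor notices it first.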

Final Thoughts: Compliance Can Still Be Cool

AI is here to stay. But its use in coding – particularly in federal programs like Medicare and Medicaid – comes with an evolving set of expectations. As regulators, whistleblowers, and plaintiffs’ attorneys begin to scrutinize algorithmic influence, providers need a clear strategy to ensure AI remains a support tool, not a scapegoat.

EDITOR’S NOTE:

The opinions expressed in this article are solely those of the author and do not necessarily represent the views or opinions of MedLearn Media. We provide a platform for diverse perspectives, but the content and opinions expressed herein are the author’s own. MedLearn Media does not endorse or guarantee the accuracy of the information presented. Readers are encouraged to critically evaluate the content and conduct their own research. Any actions taken based on this article are at the reader’s own discretion.
