National Coding Contest Indicates Outpatient Coding Is Getting Worse, Not Better

More than 4,000 cases were coded in the 2019 contest focused on outpatient coding.

ICD-10 is well-established, and we are already discussing and planning for ICD-11. However, where are the long-anticipated and promised gains in accuracy and more definitive diagnoses? At least one coding contest found that outpatient coding in 2019 had the worst accuracy to date. How can that be true? Is suboptimal coding creating a trickle-down effect seen in audits, compliance, payments, and payor findings?

ICD-10 has enormous specificity that apparently has not been captured as anticipated. Prior to implementation, it was believed to be the pathway to better and more accurate coding, disease management, epidemiology, causes of morbidity and mortality, and robust healthcare statistics. Tremendous amounts of time are devoted to clinical documentation improvement. Coders receive intensive training. So why have the expected improvements not been realized?

Central Learning has held an annual coding contest since 2016. Overall accuracy results, in chronological order, were 38 percent, 41 percent, 42.5 percent, and 40.4 percent. Coders self-designate their area(s) of expertise and years of experience. Actual, redacted medical records were used for the contests.

Due to the significant increases in outpatient services and shifts in payment methodologies, the 2019 contest focused on outpatient coding. More than 4,000 cases were coded. A total of 81 percent of the contestants were American Health Information Management Association (AHIMA) and/or AAPC certified, and 82 percent had more than five years of coding experience. The surprising finding was an accuracy rate of 60.5 percent for primary diagnoses and only 38.6 percent for secondary diagnoses. Analysis indicated a lack of specificity in primary diagnoses and a failure to follow coding principles and guidelines in secondary diagnoses.

As I pondered these findings, many questions arose. Are the coding tools relied upon by contestants faulty, a Herculean task to navigate, or lacking the coding guidelines? In our business, we see significant volumes of incorrect coding in data feeds every single day. Were records incomplete, missing information, or lacking sufficient detail? We know many often are. Are bad habits ingrained when coders are pressured to code what they have and not “bother” the physicians for clarification? Are production benchmarks too high to allow time to maximize coding accuracy? Have coders become accustomed to relying (or been trained to rely) upon electronic health record (EHR) code assignment, a method fraught with errors? Whether or not any of these issues actually applied to the contest, they are certainly realities in everyday billing and coding.

Much has been published about “surprise billing,” and new state laws are becoming the norm while federal legislation remains pending. While the focus has been on out-of-network physicians, the fact is that outpatient diagnosis coding has a direct bearing on insurance coverage. Is one of the identified errors, the lack of accurate secondary diagnoses capturing the etiology or reason for the visit, also a cause of surprise bills?

We know most entities and providers are experiencing significant increases in medical record requests. Could the errors reflected in the coding contest findings be a major contributor to that trend? Incomplete accident or injury information, lack of specificity, and apparent mismatches between CPT and diagnosis codes can all contribute to record requests. Likewise, diagnoses that do not appear to support the level of a visit are always a red flag. As more payors use proprietary analytics to evaluate medical necessity, the most accurate and complete diagnosis coding is of the utmost concern.

The Central Learning annual contest continues to raise serious questions that should be given thoughtful consideration. How and why are certified, experienced coders with specialty expertise unable to knock it out of the park?

In addition, this raises questions that are more global. Coding does not occur in a vacuum. Is provider documentation that bad, in spite of all the clinical documentation improvement efforts? Are production benchmarks too high to allow maximum coding specificity? Are coding tools antiquated, or scaled down to the point that they no longer accurately represent the authoritative coding guidelines and conventions? Have we fallen into the trap of relying too heavily on technology, rather than thinking humans, to do the heavy lifting? Are our coders divorced from denials and the outcomes of medical record review requests? Are patient complaints about surprise bills resulting from poor documentation and/or suboptimal coding openly shared with those involved?

In my humble opinion, this is not just a coding contest accuracy question. That is one piece of a very big picture. Let us look at the whole picture, because something is wrong if this is the best we can do, four years after ICD-10 implementation.

Programming Note: Listen to Holly Louie report this story live today during Talk Ten Tuesday, 10-10:30 a.m. EST.

https://www.centrallearning.com/wp-content/uploads/2019/09/2019-CL-results.pdf
