Leveraging Technology in Clinical Documentation Integrity

Happy New Year.

This holiday season provided me with the opportunity to catch up with
some of my former colleagues who helped shape my career in Clinical
Documentation Integrity (CDI). The nostalgia of our time together made me
reflect upon the ways in which CDI and coding have changed thanks to
technology.

One of the most significant changes since I started in CDI was the
transition to the electronic medical record (EMR). The EMR fixed some
problems, like legibility and access to the health record (I mean physically
tracking down the health record to see if the provider had written their note),
but created a lot of other issues, like copy-and-paste documentation and record bloat.
Additionally, the focus of the electronic record was patient care, not hospital
business functions, so many of our processes remained manual or suffered
from a lack of integration with other revenue cycle departments and,
sometimes, even the clinical record. Some would argue that EMRs created
more problems than they fixed, as they resulted in the overuse of problem lists and an
overwhelming amount of data. They have also made it more difficult to know what is and is not
provider documentation that can be used for the purpose of code
assignment. Context is more important than ever. Highlighted words that
can be mapped to a diagnosis code do not automatically equal a reportable
diagnosis.

Another notable change since I started in CDI is the proliferation of digital
tools like artificial intelligence (AI). An article published in HFMA reported that 63
percent of healthcare organizations have adopted AI tools to support
revenue cycle functions. Another article, from Newswire, summarized
findings from the recent Black Book survey, which found that 62 percent of
respondents planned to automate CDI and coding functions. Additionally,
64 percent of respondents expect AI to reduce staffing needs by at least 25
percent in these workstreams. Before anyone begins to panic about the
future of the CDI and coding professions, most of these cuts are expected to
be absorbed by reducing dependence on outsourced revenue cycle
management (RCM) functions, not by staff reductions within the facility itself. It
is also worth noting that we are facing a shortage of both qualified inpatient
coders and CDI professionals. This shortage is expected to grow, as many of us are
nearing retirement or already retired and working part-time.

Although surveys continue to demonstrate optimism among healthcare
leadership about the potential impact of digital tools, there remains limited
objective data beyond surveys. The same Black Book survey found that 76
percent of organizations piloting AI report equal or better “coder-level
quality” versus human-only baselines. I have to believe this statistic is
based on outpatient coding rather than inpatient coding, but it is not
qualified within the article. I agree that hospitals are having more success
implementing automation in hospital outpatient and professional coding
than in inpatient coding. AI tools can be a great resource for reducing or
eliminating manual tasks, as long as humans can influence the parameters to
ensure the desired outcome is achieved.

In contrast, Fierce Healthcare published an article with a different
perspective, one also shared by an increasing number of healthcare
leaders who directly interact with teams that use AI tools. Their perspective
is that current tools create an “experience that looks slick but isn’t actionable,”
resulting in inefficient workflows, redundancy, staff frustration, low
conversion rates, and unused capacity. As an industry, we need objective
data and better feedback systems, with guarantees from AI vendors for
continuous improvement. If you did not know, most technology companies
release products when they reach minimum viability; in other words, when
they are good enough, but flaws are likely to still be present.

Historically, hospitals have failed to independently define and track
measures of success when it comes to implementing AI tools. However,
that may change in 2026. A couple of articles recently published by
Becker’s explore how healthcare leaders are raising their expectations
when it comes to digital revenue cycle tools. These tools are so expensive
that they must deliver value to the organization (a return on investment).
Healthcare leaders should be more disciplined about defining success
when evaluating digital tools. They should create metrics that reflect
problem resolution. In other words, does the tool address the problem? If
the problem is staffing, is the same quantity and quality of work being
produced with fewer staff, or is more work of the same quality being
produced since implementation of the tool? Of course, the organization
would also need to define quality. Successful digital tools will produce
actionable data that allows the organization to pivot as solutions to the
problem are identified.

Let us consider case prioritization. Technology has definitely made it easier
to create work assignments, since it can integrate the real-time census.
However, once the worklist is developed, is the review process itself
efficient? Do the CDI staff believe the technology is prioritizing the right
cases? What determines which cases are the “right” ones? How flexible are
the settings for determining which cases to review? As organizations demand
more and more from CDI departments, AI tools must be flexible enough to
meet these ever-changing demands. Successful digital tools should reduce
human touches (the number of clicks it takes to perform a task) and rework,
together referred to as workflow friction. However, my experience has been that the
introduction of AI tools has led to more “backend” CDI processes.
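To make the flexibility question concrete: if a prioritization tool exposed its scoring weights as configuration, CDI leaders could tune which signals drive the worklist rather than accepting a vendor's fixed logic. Here is a minimal sketch in Python; the case fields and weights are hypothetical illustrations, not any vendor's actual algorithm.

```python
# Hypothetical sketch of a configurable case-prioritization score.
# All field names and weights below are illustrative assumptions.

def priority_score(case, weights):
    """Combine weighted signals from a case into a single worklist score."""
    return sum(weights.get(signal, 0) * case.get(signal, 0)
               for signal in weights)

def build_worklist(census, weights, limit=10):
    """Rank the current census so staff review the highest-impact cases first."""
    return sorted(census, key=lambda c: priority_score(c, weights),
                  reverse=True)[:limit]

# Example: two admitted cases with different signal profiles.
census = [
    {"id": "A", "query_opportunity": 1, "drg_shift_potential": 0, "length_of_stay": 2},
    {"id": "B", "query_opportunity": 0, "drg_shift_potential": 1, "length_of_stay": 10},
]
# The department, not the vendor, decides how much each signal matters.
weights = {"query_opportunity": 5.0, "drg_shift_potential": 3.0, "length_of_stay": 0.1}

ranked = build_worklist(census, weights)
```

The point of the sketch is the `weights` dictionary: when a department's priorities change, the worklist logic changes with a configuration edit, not a vendor ticket.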

Healthcare executives are realizing that they should demand AI revenue
cycle workflows that successfully produce clean claims with lower denial
rates, higher revenue, and improved quality metrics. Is that too much to
ask? And this should all occur without requiring additional backend
workflows to fill “gaps” missed by the tool. The reality is that most hospital
revenue cycle departments have inefficient workflows. The business of
healthcare is often an afterthought as hospitals focus on delivering high
quality patient care. For example, the business health record often lacks
key functionality to support integrated RCM functions. Many RCM teams
rely upon standalone solutions that do not integrate with other tools to
provide a holistic perspective that would allow leadership to better identify
opportunities for improvement.

Although there is much excitement around generative AI, I remain skeptical
about its effectiveness as a CDI and coding tool. One of the topics that
came up as I chatted with my former colleagues was whether I thought AI would
replace CDI and coding professionals. No, I do not. Generative AI is able to
create original content and simulate human-like creativity, but it lacks
context. It can string data together but without understanding the context of
the healthcare encounter it may arrive at an incorrect or incomplete
conclusion. Additionally, AI can only reference existing data. There is a lag
between when new codes are implemented or when new coding advice
becomes available and when it can be included in a training set and
integrated into the model, which can lead to errors.

Generative AI, like all other types of AI, requires large sets of data for
training. The amount of data needed is so large that it is impossible for a
company to validate the accuracy of the data being used to train the model.
Hence, garbage in, garbage out. Most models are also dependent on
patterns; however, health records have a lot of variability across providers,
across hospitals, across regions, etc. A simple example: as an
industry, we have few definitive, agreed-upon diagnosis definitions. Each
patient will have a variable presentation, and each provider will have a
different threshold of how much data they need before diagnosing or
intervening, so it may be difficult for a distinct pattern to be discerned by the
tool.

Human intervention is necessary with RCM digital tools because of the
nuances associated with our field. As CDI professionals and coders interact with digital
tools, they should be determining whether the output is congruent with the totality
of the medical record. AI looks at a point of data in isolation, without
context; we, as professionals, need to add the context to determine whether the
recommendation is accurate. I believe this ensures our job security.
