Emerging Legal Defenses Against AI-Assisted Medicare Audits: Part I

As we know, artificial intelligence (AI) is in the spotlight these days. It is being adopted in almost every sector of the economy. It is also a commercial product, and its salespeople argue that it promises nearly incredible improvements in efficiency and accuracy in fields as diverse as supply chain management, screening job candidates for HR departments, driving vehicles, and even selecting human targets for bombing.

AI has already stimulated much thought about how it will change the economics of healthcare. In Yu et al., Table 3 and Figure 3 show how AI can be inserted into the space where a provider's human evaluations now take place.[1] The goal is a fully automated clinical system in which no clinician is present.

In such a situation, who is responsible if a mistake is made? Another question: how could one even determine that a mistake has been made? After all, we can be sure that the software agreement between the company that wrote the software and the user will contain a great deal of language meant to insulate the software writer from any liability. If that is the case, who could be held responsible for a mistake? Who would be the defendant in a medical malpractice suit?

If the defendant is a machine or a piece of software, then who is going to foot the bill for the damages? And what happens if, as a prerequisite for receiving treatment, the patient has signed an agreement granting immunity for any decisions made by a piece of software? That is, what if the patient has assumed the risk by using the software?

AI in Medicare Claim Auditing

This technology is also being applied in auditing. The use of AI to audit Medicare, Medicaid, and other insurance claims submitted by healthcare providers promises several things. First, it can greatly increase the amount of information subjected to analysis. Second, it can greatly increase the speed at which auditing takes place. Third, it should radically reduce the cost per audit.

AI Will End Sampling and Extrapolation

What about sampling and extrapolation, one of the glittering icons of unfairness and corruption in auditing?

Of course, the entire idea of statistical sampling rests on striking a balance between the cost of auditing and the benefit the auditor might obtain. The logic goes that auditing is expensive because experts are employed to do the work; the number of claims that can be audited is therefore limited by the inherent cost of deploying humans with those capabilities.

As a result, auditors take samples of claims, do their analysis, and then make a statistical extrapolation to figure out how much money to demand in the recoupment. It’s an old story, and an entire industry has been built around this approach.
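
To make the mechanics concrete, here is a minimal sketch in Python of the kind of arithmetic an extrapolated overpayment demand involves. The claim counts, dollar figures, and the use of a one-sided 90 percent lower confidence bound with a normal approximation are all illustrative assumptions for this example, not the formulas or parameters any particular contractor actually uses.

```python
import statistics

# A minimal sketch of overpayment extrapolation. All figures are invented
# for illustration; they do not come from any real audit or contractor.
universe_size = 10_000  # total claims paid to the provider in the audit period

# Overpayment found on each sampled claim (zero = claim upheld on review)
sample_overpayments = [
    0.0, 125.50, 0.0, 310.00, 42.75, 0.0, 560.00, 0.0,
    89.25, 0.0, 0.0, 230.00, 0.0, 0.0, 175.40,
]
n = len(sample_overpayments)

# Point estimate: mean overpayment per sampled claim, scaled to the universe.
mean_overpayment = statistics.mean(sample_overpayments)
point_estimate = mean_overpayment * universe_size

# Demands are often framed around a lower confidence bound rather than the
# point estimate. A one-sided 90% bound using a normal approximation is
# assumed here; a real audit would typically add small-sample adjustments
# and a finite-population correction.
std_error = statistics.stdev(sample_overpayments) / (n ** 0.5)
z = statistics.NormalDist().inv_cdf(0.90)  # about 1.28
lower_bound = max(0.0, mean_overpayment - z * std_error) * universe_size

print(f"Sample mean overpayment per claim: ${mean_overpayment:,.2f}")
print(f"Point estimate for the universe:   ${point_estimate:,.2f}")
print(f"Demand at the lower 90% bound:     ${lower_bound:,.2f}")
```

Real extrapolations layer sample design, stratification, and distributional assumptions on top of this simple arithmetic, and that added machinery is precisely what reviewing every claim would render unnecessary.
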

But if AI can increase the number of claims analyzed by thousands, tens of thousands, or even hundreds of thousands of times, then there is no longer any need for sampling at all. It is just as simple, and probably cheaper, to analyze every single claim with a reasonable level of accuracy than to spend time picking the right sample size, determining the right distribution of the variable being estimated, gauging the overpayments, and then applying the right set of formulas. That cumbersome and error-prone work, in common use today, is notorious for unreasonable shortcuts and unreliable results.

So, a logical result of using AI will be the complete elimination of statistical sampling and extrapolation from the auditing process. The entire machinery of statistical and legal experts working on behalf of the tortured provider, who make their living by pointing out the deficiencies of the statistical work, will be made completely obsolete. They will be out of a job.

The Current Transitional Period to AI Auditing

Nevertheless, we can assume that such a transformation, a so-called "AI revolution in auditing," will not come quickly; in fact, it will take years. As a cynic might say, "Foolproof Medicare auditing is the industry of the future, and always will be." In the meantime, we will be stuck with a dual system in which the entire machinery of the past remains in place and AI is added as an additional layer on top of it, extra cost included.

How will this come about?

It will likely mean that AI will be used as a supplementary tool for the time being, aiding incremental improvements to our legacy auditing system. In this connection, we can expect AI to generate the initial results of an audit and then hand them, as a form of computer-generated fiction, to the human auditors still at their desks, who in turn will review the AI's work and sign off on it.

This signature certification will lend credibility to the AI's work. In other words, humans will be in the background, signing off on and validating the AI's work, when in fact they are far less capable than the AI itself and cannot possibly understand how it reached its conclusions.

Human Certification of AI Audits is a Fiction

During this transitional period, the practice of having humans sign off on, validate, or certify the work of AI is a fiction. Here is the reality: an auditor who signs off on the work of AI in an audit is committing a type of fraud, because it is impossible for a human to review all of the information the AI has analyzed.

Why? Because if the human were seriously to audit the AI's work, doing so would be time- and cost-prohibitive and would destroy the entire economic rationale for using AI in the first place. In fact, it likely is not even possible to obtain a record of what information the AI actually considered when making its decision. And what would happen if the human disagreed with the AI?

The research thus far indicates that having auditors sign off on the results of an AI audit presents a number of problems. For example, one study found that "paraprofessional auditors lack specific expertise and credentials to conduct data-driven audits, apply judgment in deference to technology, and disregard the impact of AI-driven decisions on the public interest."[2]

Of course, for every claim reviewed, there will be some type of written comment made by the AI. But much of it is going to look like simple cloning, unless the AI is instructed to introduce small changes into the write-up of each claim so that the cloned decision-making is disguised and it appears that a unique line of reasoning was applied to each claim analyzed. That is another layer of complexity in this intricate fiction.

It has always been amazing to me, on a personal level, that auditors who have very little training, particularly compared to the years of training a physician must endure, are nevertheless able to zoom in and rapidly make decisions about claims and their validity without seeing the patient or being present when the services were delivered. They base their work on merely supposing what happened from the records, which are little more than a brief artifact of the healthcare experience.

And the same people are able to reject the medical recommendations of healthcare professionals who have spent years in the business, and many more years and no small amount of lucre obtaining the requisite training and learning their trade.

In Part II of this series, we will examine machine learning, AI, and the use of algorithms in auditing.


[1] Yu, Kun-Hsing, Andrew L. Beam, and Isaac S. Kohane. "Artificial Intelligence in Healthcare." Nature Biomedical Engineering 2 (October 2018): 719-731. https://drive.google.com/file/d/10U3KnmNY8lgQk3GXn14sMv7Tp4wsxk-n/view

[2] Koreff, Jared, Lisa Baudot, and Steve G. Sutton. “Exploring the Impact of Technology Dominance on Audit Professionalism through Data Analytic-Driven Healthcare Audits.” Journal of Information Systems 37, no. 3 (2023): 59-80. https://digitalcommons.trinity.edu/cgi/viewcontent.cgi?article=1185&context=busadmin_faculty
