Artificial Intelligence in Medicare Audits: Part I

CMS launches healthcare outcomes challenge.

Expect more artificial intelligence (AI) in healthcare in 2020. We will see AI used primarily in diagnostics and auditing. In each of these areas, AI promises to impose drastic changes on society. As these changes reverberate through organizations, old work patterns will be disrupted. In addition, legal and regulatory protocols will be forced into constant revision, as responsibility for decision-making and the liability that comes with it shifts from people to machines.

This will raise two important issues. First, who will take responsibility when harmful mistakes are made? Second, how can organizations victimized by machine-driven auditing defend themselves against faceless algorithms?

The AI Market is Growing Rapidly
A market worth only around $16 billion in 2017 should reach almost $200 billion by 2025; this year, it should pass the $80 billion mark. Some have placed the compound annual growth rate (CAGR) at almost 40 percent. Capital investment is pouring in at a rate of more than $2 billion per quarter, and new AI chipsets are entering the market.

We know that exciting growth projections such as these should be taken with a grain of salt. Differing definitions of what constitutes the “market,” and of what type of programming truly counts as AI, blur the precision of the numbers. But even accounting for the errors that inevitably accompany any forecast, we can see that AI is one of the major technology forces that will shape this decade. In the 21st century, we might say the smartphone shaped the first decade, social media the second, and AI will shape the third.

AI is Providing Proven Benefits in Healthcare
In the healthcare sector, it long has been a dream of researchers to replace doctors with computers. The drive behind this was not so much anti-doctor, but rather the desire to harness technology, especially telecommunications, to disperse the benefits of medical knowledge throughout the world, to places where there were no doctors available. This thinking started in the 1980s, and the term used was “telemedicine.” 

Progress has been rapid. Google’s DeepMind recently published a paper in Nature indicating that AI can be used to predict acute kidney injury (AKI) before it happens.

The Food and Drug Administration (FDA) has been supervising the testing of AI systems in healthcare, and has been approving systems that meet its tough standards. AI systems now can read and interpret medical images. For example, there are currently AI systems that can help doctors interpret MRIs of the heart, CT scans of the head, and photos of the back of the eye. These systems are not replacing doctors, but are speeding up their work by making initial interpretations and recommendations that then are ratified by a real doctor. Other AI systems have been approved that can read mammograms and identify breast cancer.

FDA approvals have increased steadily. One algorithm was approved in 2014, four in 2016, and five in 2017. In 2018, eleven were approved, including the first AI for medical diagnosis that requires no input from a human clinician.

Most of the FDA sign-offs have been for use of AI in radiology, where 22 systems have been approved. Cardiology follows with 12 approvals. Other leading areas include oncology with five, endocrinology with six, and psychiatry with four. The rest of the approvals granted by the FDA are distributed across geriatrics, neurology, orthopedics, emergency medicine, ophthalmology, and pathology.

AI groupies chirp about a bright future. They paint a rosy future in which mundane medical chores are done by algorithm. Doctors will get more time to spend with patients. Life will be good.

Is there a Dark Side to AI?
Some predictions are rosy, others are cautionary. Some worry that when one takes the human out of decision-making, one also takes out the humanity. Other observers worry about massive unemployment effects, particularly amongst the ranks of white-collar employees. Anyone visiting that remarkable beacon of innovation, the Amazon Go store, can see the future. There are no cashiers. The only persons allowed to enter are those with a smartphone running the Amazon app linked to a credit card. Only one or two persons are present, stocking the shelves. It is not too far-fetched to assume they too soon will be replaced by robots.

The United Nations First Committee on Disarmament is worried about the development and proliferation of AI-powered “autonomous weapons systems,” which use algorithms to decide whether or not to pull the trigger.

AI sometimes makes mistakes. A diagnosis might change depending on irrelevant factors, such as the brand of the MRI machine.

What to Expect with AI in 2020?
Look for more government regulation of AI in 2020. We can safely predict that some state governments will restrict how businesses may use AI. Recent complaints about some healthcare algorithms have charged that they are biased. We can detect an emerging consensus that AI must be “fair” to everyone. We can safely predict that this standard will lead to a new wave of litigation, including class-action suits against algorithms.

Another AI trend to watch is the Centers for Medicare & Medicaid Services (CMS) AI Health Outcomes Challenge. Given the alleged losses of $21 billion per year in waste and fraud in Medicare, CMS has been instructed through an executive order to investigate the use of AI for auditing. Late last year, CMS issued an RFI asking for help. The goal is to change the “pay and chase” model of auditing to a predictive system. Rather than employing the old system of using algorithms to comb through data ferreting out fraud and abuse, CMS wants AI to prevent making bad payments in the first place.
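The difference between “pay and chase” and predictive auditing can be illustrated with a toy sketch: instead of paying every claim and auditing afterward, each claim is scored against the provider’s historical billing patterns before payment is released, and high-risk claims are held for review. Everything below — field names, signals, and thresholds — is hypothetical and for illustration only; it is not CMS’s actual model or criteria.

```python
# Toy sketch of pre-payment claim screening (hypothetical fields and
# thresholds, not CMS's actual model). Each claim receives a risk score;
# high-risk claims are held for human review instead of being paid and
# chased afterward.

def risk_score(claim, provider_history):
    """Return a 0-1 risk score built from simple, illustrative signals."""
    score = 0.0
    # Signal 1: billed amount far above the provider's historical average.
    if claim["billed"] > 3 * provider_history["avg_billed"]:
        score += 0.5
    # Signal 2: unusually high claim volume from this provider today.
    if provider_history["claims_today"] > 50:
        score += 0.3
    # Signal 3: procedure code this provider rarely bills.
    if claim["procedure"] not in provider_history["usual_procedures"]:
        score += 0.2
    return min(score, 1.0)

def screen(claim, provider_history, threshold=0.5):
    """Pay the claim automatically, or hold it for review before payment."""
    if risk_score(claim, provider_history) >= threshold:
        return "hold_for_review"
    return "pay"

history = {"avg_billed": 200.0, "claims_today": 12,
           "usual_procedures": {"99213", "99214"}}
print(screen({"billed": 180.0, "procedure": "99213"}, history))  # pay
print(screen({"billed": 900.0, "procedure": "99499"}, history))  # hold_for_review
```

A real system would replace these hand-written rules with a trained model — the deep learning and neural-network approaches the Challenge participants are pursuing — but the structural change is the same: the decision point moves to before the money goes out the door.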

As part of the Challenge, CMS recently selected 25 participants. Some are from the entrenched Washington Beltway aristocracy: Booz Allen Hamilton, IBM, Northrop Grumman, etc. These are major systems integrators with legendary technical skills. The participants also include a few from academia, including the University of Virginia, Northwestern, and Columbia University. If the technology is developed, then deep learning and neural networks will be used to predict adverse events and unplanned admissions to skilled nursing facilities and hospitals. According to the schedule, seven finalists will be announced in April. In September, a winner will get $1 million in seed money. Stay tuned in 2020 for more news on these developments.

AI is the future of auditing, healthcare, and many other aspects of society. You may wish to pull out that old copy of 2001: A Space Odyssey, and reacquaint yourself with HAL 9000.

Programming Note: Listen to Edward Roche report on use of AI in Medicare audits during the next live edition of Monitor Mondays, Feb. 10, 10-10:30 a.m. EST.
