Artificial vs True Intelligence

As Artificial Intelligence (AI) tools flood the market, physician efficiency and document clarity have been sacrificed in exchange for EHR big data input and a workflow designed to prompt revenue cycle triggers. Between under-documented cases caused by time-consuming document creation workflows and note bloat caused by overuse of copy and paste and generic content generation tools, too many of today's records do not deliver clear, condition-specific details for optimum patient care, let alone accurate reimbursements.

Earlier this year, HHS and CMS hosted a symposium in Washington, DC to address this growing problem. Promoted as a public meeting on Reducing Clinical Burden, the meeting focused on SEC. 4001 of the 21st Century Cures Act, which calls for assisting doctors and hospitals in improving quality of care for patients and for addressing the growing problem of documentation overload.

Highlighting these concerns, a September 2017 study by the University of Wisconsin and the American Medical Association (AMA) noted that physicians work an average of 11.4 hours per day when on duty, with 5.9 hours spent directly engaged with their EHR. The negative impact of this data-driven workload has been qualitatively voiced in numerous articles, including a May 16, 2018 New York Times Magazine editorial titled ‘How Tech Can Turn Doctors Into Clerical Workers,’ among others.

Although there will always be a balance between technology and human effort to create and process medical records, to date there have been few efforts to quantify the impact of automation on quality. There has also been no effort to quantify the complete financial impact of AI tools that are often cost-justified with labor-eliminating bullet points focused on extremely limited process steps.

Defining the Problem

 

The AHIMA standard on document quality is titled ‘Clinical Documentation Quality Assessment and Management Best Practices.’ It applies equally to reports generated through traditional means (dictation/transcription), fully physician-centric means (front-end speech, self-typing, and/or point and click), or some combination (partial dictation).

Two independent peer-reviewed studies have analyzed the quality impact of physician-centric workflows, and both reflect parallel results. ‘Error rates in physician dictation: quality assurance and medical record production’ appeared in the International Journal of Health Care Quality Assurance in November 2014, and ‘Analysis of Errors in Dictated Clinical Documents Assisted by Speech Recognition Software and Professional Transcriptionists’ was published in JAMA Network Open in July 2018. Both show that when transcriptionists or speech recognition editors are involved, the signed document error rate is 0.3%. Without them, error rates averaging over 7% are pushed into the EHR.

Although front-end speech vendors tout the AI improvements to their products, such as warnings when gender or other patient reference inconsistencies are recognized during document creation, the cause of most physician-centric workflow errors is simply time: specifically, the time needed to simultaneously dictate, read, and edit documents before signing, which too often results in physicians not proofing their own work before electronically signing. Shifting from dictating a note from start to finish and later reviewing the complete document to simultaneously speaking, reading, and editing while interacting with a structured user interface inherently alters the content physicians contribute.

Looking beyond the personal financial loss of physicians, who in some specialties have reported losing as much as $100,000 per year due to the increased time spent with the computer instead of patients (imagine how much that affects the per-provider gross revenue potential of healthcare organizations), the real damage comes from the abbreviated, bloated, or simply erroneous patient encounter details generated on the front end of the documentation/revenue cycle process.

As physicians are overwhelmed with data entry tasks and pressed for time, the comprehensiveness of the encounter details that support ongoing care suffers in exchange for EHR big data content capture and a workflow focused on revenue cycle triggers. The elephant in the room with that shift is the lost patient encounter nuances that directly impact patient care, downstream coding efforts, and ultimately, truly justifiable billing amounts per claim.

Of course, vendors argue that CDI or note reader programs are the fail-safe backstops that ensure content integrity, but those programs are inherently restricted by the minimized, generic input they receive. Garbage in, garbage out, even if it's well organized and measurable garbage. Furthermore, CDI notifications only add to alert fatigue, training requirements, and the HIM support effort needed to resolve issues (in addition to physician burnout), whereas better content on the front end minimizes those issues and restores the richer content that positively impacts patient care and justifiable revenue.

How We Got Here


Certified EHRs fundamentally transformed the document creation process from a somewhat free-form narrative effort (there were work-type templates to ensure consistent and comprehensive content) into spreadsheet generation tools. This shift was required to meet the congressionally mandated objective of demonstrating Meaningful Use of captured big data in hopes of improving overall population health. This is also when physicians and HIM professionals lost their pragmatic voice of reason in favor of IT-designed workflows.

Given the federal EHR mandate, per-provider reimbursement incentives, vendor assurances of preprogrammed quality, and HIM labor reductions in favor of automation tools, it was inevitable that CFOs and CEOs would shift documentation workflow responsibilities away from HIM and to the CIO.

 

Of course, once systems were installed, HIM was called upon to fix (or blamed for) the resulting revenue cycle gaps. Yes, there was extensive testing to make certain that documents created through the EHR environment would generate equivalent revenue if the same cases were processed the old-fashioned way. But all those tests began with the previously content-rich document samples, not the abbreviated entries targeting measured data requirements that physicians inevitably (and predictably) settled into to minimally comply with system expectations and contain their exploding documentation time.

Certainly, physicians were then blamed for not providing enough revenue-generating details, and tech vendors did their best to respond with even more automation tools and AI offerings. Unfortunately, the effort required to fully document anything other than generic cases still far exceeds that of earlier documentation methods. Now, when more complicated cases occur, physicians are left with a choice: spend more time in the EHR, or surrender the diminishing incremental bump in personal income that more time on the keyboard may deliver in order to see other patients for another full encounter's base pay.

Of course, every doctor, patient encounter, and EHR is different. But such personal time/personal revenue balancing acts are happening every day with every physician, because, after all, time is money, even for them. Consciously or unconsciously, they decide daily where to draw the line between being a documentation specialist and revenue cycle management clerk or being a doctor.


Righting the Ship


Admittedly, AI demonstrations can be awe-inspiring. Unfortunately, by definition, they are conducted in controlled environments with limited variables, specifically staged to maximize the wow factor of the problems they claim to solve. They are not real-life daily operations. Accordingly, the objective must be to allow physicians flexibility and be sensitive to their concerns; they know what works best for them. Continuously throwing more training and technology at them inevitably wears them down until they follow the undesirable but prescribed path, even if it means surrendering even more of their time and income.


The reality is that we have passed the saturation point for physician technology tools to generate clinical documentation. With each added structured input step or potential AI warning, providers are forced to figure out how to game the system so they can get back to their patients, and the quality of documentation for ongoing patient care and RCM suffers. As an industry, it's time to acknowledge that true physician efficiency leads to richer content, better patient care, and improved revenue results.

 

To quantify the true financial impact of AI, HIM can review organizational and per-physician historical case mix financial results and compare those numbers against national and regional averages. Those numbers should be quite telling. If you have experienced a negative slide, or even if you need more doctors to generate the same amount of gross revenue as in years past, your applied AI is more artificial than true.
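
As a rough illustration of that comparison, the sketch below uses hypothetical per-physician case mix index (CMI) figures and a hypothetical regional benchmark. The names, values, and benchmark are placeholders only; an actual review would draw on your own case mix reports and published national or regional averages.

    # Illustrative sketch only: hypothetical per-physician case mix index (CMI)
    # figures compared against a hypothetical benchmark. A real analysis would
    # pull these values from your own case mix reports and published averages.
    physician_cmi_history = {
        "Dr. A": {"2016": 1.42, "2019": 1.31},
        "Dr. B": {"2016": 1.18, "2019": 1.19},
        "Dr. C": {"2016": 1.55, "2019": 1.37},
    }
    regional_benchmark_cmi = 1.45  # placeholder regional average

    for physician, cmi in physician_cmi_history.items():
        slide = cmi["2019"] - cmi["2016"]  # negative = declining documented complexity
        gap_to_benchmark = cmi["2019"] - regional_benchmark_cmi
        print(f"{physician}: CMI change {slide:+.2f}, gap to benchmark {gap_to_benchmark:+.2f}")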

 

To restore your per-physician case complexity and overall revenue results, the focus must be the same as that found in SEC. 4001 of the 21st Century Cures Act: improving quality of care for patients by addressing the growing problem of documentation overload. The age of Meaningful Use and its per-physician incentives is over. Now is the time to maximize physician efficiency at the point of document creation to ensure optimal clarity for ongoing patient care and justifiable revenue. AI tools should be focused on supporting revenue cycle management (RCM) efforts under coding and CDI, where they do improve detailed charge capture, especially when, like the original EHR parallel-track RCM testing, you begin with more comprehensive and accurate document input on the front end.


Ensuring Accountability


When certified EHRs were first mandated, oversight of clinical documentation creation was reassigned from HIM to the CIO, as the mandate was clearly an IT-driven initiative. Now that the dust has settled, however, the damage done to per-encounter revenue has many organizations thinking RCM executives will have to be the ones to clean up the mess and right the ship.


The problem with that approach, however, is that RCM teams work backward from KPI financial results, while the problems that need fixing occur at the front end of the workflow. To prove the point, all that needs to be done is a traditional quality assessment in compliance with the AHIMA standard ‘Clinical Documentation Quality Assessment and Management Best Practices,’ which applies equally regardless of the document generation method. And although this will only identify the problems generated under your current document creation workflow (versus identifying all that is potentially missing), it offers a good gauge of the quality of your process output.

 

Any vendor who suggests transcription will or should disappear due to improvements in their front-end speech AI offerings is telling you about their business model and how proud they are of their technology, which is not necessarily the best business decision once you consider physician time and the volume of errors that get pushed downstream. Obviously, comparative time and error volumes will vary by physician, but you would be well served to check the math for your physician roster: quality on the front end of the document creation process and the added costs of coding and CDI efforts on the back end. When you add the impact of diluted document details to your reduced per-physician revenue, the true cost of your AI solutions for some physicians could be quite scary.
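
For anyone who wants to check that math, here is a back-of-the-envelope sketch. Only the 0.3% and 7% signed document error rates come from the studies cited earlier; the note volume, rework minutes, and labor rate are placeholders to be replaced with your own figures.

    # Back-of-the-envelope sketch with placeholder figures; only the 0.3% and 7%
    # signed document error rates come from the studies cited above.
    notes_per_physician_per_year = 4000      # placeholder annual note volume
    error_rate_with_editor = 0.003           # 0.3% with transcriptionists/SR editors (cited studies)
    error_rate_without_editor = 0.07         # 7%+ without them (cited studies)
    minutes_to_resolve_downstream = 6        # placeholder coding/CDI rework time per flagged note
    blended_him_rate_per_hour = 40.0         # placeholder fully loaded labor rate

    def downstream_rework_cost(error_rate: float) -> float:
        """Estimated annual per-physician cost of resolving errors pushed downstream."""
        flagged_notes = notes_per_physician_per_year * error_rate
        hours = flagged_notes * minutes_to_resolve_downstream / 60
        return hours * blended_him_rate_per_hour

    print(f"With an editor:    ${downstream_rework_cost(error_rate_with_editor):,.0f} per physician per year")
    print(f"Without an editor: ${downstream_rework_cost(error_rate_without_editor):,.0f} per physician per year")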

 

Of course, the most obvious admission that speech recognition (SR) is not the magic pill for document creation is found in how the industry's major vendors calculate quality scores for documents generated with their technology. Even though both SR industry giants participated in the creation of the AHIMA documentation quality standard, neither uses that scoring method when reporting their own quality performance to clients. Specifically, they ignore repeated or missing text in their SR draft outputs, which are by far the most common errors of such technology. Instead, they apply their own assessment methodology designed to produce scores that look great against the AHIMA threshold, knowing they would consistently fail if AHIMA scoring were applied. Why else would they ignore standards they helped create?
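
To see how much that omission matters, consider a simple, hypothetical illustration. This is not the AHIMA scoring method; the word and error counts below are placeholders, and the only point is that excluding the most common error categories inflates the reported accuracy.

    # Hypothetical error counts for a batch of SR drafts (placeholders, not the
    # AHIMA formula): excluding repeated/missing text inflates the reported score.
    total_words = 50_000
    errors = {
        "wrong word": 90,
        "repeated text": 220,
        "missing text": 310,
        "formatting": 40,
    }

    all_errors = sum(errors.values())
    vendor_counted = all_errors - errors["repeated text"] - errors["missing text"]

    print(f"Accuracy counting all errors:            {1 - all_errors / total_words:.2%}")
    print(f"Accuracy ignoring repeated/missing text: {1 - vendor_counted / total_words:.2%}")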

 

That being said, SR and AI have their place, work well for some providers, and will certainly continue to improve as technology advances. Yet the most effective way to accurately and efficiently capture the full health story will continue to vary based on individual providers and the specific technology products being used. At the same time, ignoring the gap between where AI tools help and where they hurt is financially irresponsible, regardless of the claims made by SR vendors touting their wares.

 

True intelligence is knowing when and where to use the tools at your disposal. It is backwards, expensive, and ineffective to try to cure bad document creation simply through increased coding and CDI. Consequently, even with growing examples of real success, for many providers the suggested financial and quality gains delivered by AI are totally artificial, both for the providers and for the healthcare organizations they serve. It's time for HIM to document, by provider, the true cost of AI on your business. Those are the numbers the C-suite needs for truly intelligent business decisions.

Dale Kivi, MBA
Senior Director of Communications, AQuity
