Artificial intelligence already plays a part in Kansas City health care, without much regulation


Feb 26, 2024 | 9:44 am ET
By Suzanne King
At Children's Mercy Hospital, an AI-powered Patient Progression Hub is helping reduce paperwork and curb employee burnout (Courtesy photo/Children's Mercy).

In September, the mom of a 4-year-old boy made national news when she used ChatGPT to identify the cause of her son’s pain, teeth grinding and leg dragging, a diagnosis that had eluded 17 doctors over three years.

It’s an increasingly common scenario: Patients are looking to artificial intelligence for health care answers.

And so are medical pros.

AI plays a growing role in health care encounters, whether or not a patient is actively typing symptoms into a chatbot.

At Kansas City-area hospitals, for example, AI predicts and manages the availability of hospital beds, charts staffing levels, reads mammograms, writes notes after a patient examination and helps respond to messages from patients.

But as AI’s tentacles reach deeper into medical practices every day, the fast-evolving technology faces almost no regulation. Arguably, the medical industry has little incentive to push for regulation on a trend that could transform the business of health care. And right now, nothing stands in its way. Hospitals aren’t even required to tell patients when they’re using AI.

“Because, unfortunately,” said Lindsey Jarrett, vice president of ethical AI at Kansas City’s Center for Practical Bioethics, “no one’s really telling them they have to.”

Experts worry that biases baked into AI could cause real harm to patients. And even as the industry gives a nod to helping write regulations, it is busy trying to exploit all the cost-saving and staff-trimming possibilities the technology has to offer. Meanwhile, patients remain largely skeptical that it’s a good thing.

Why AI?

Facing increasing costs, shrinking reimbursements from insurers and a growing shortage of nurses and doctors, hospitals see many reasons to roll out technology that has the potential to save time and, as a result, boost profits. Proponents also say AI brings the possibility of better care and could supercharge research into diseases and treatments.

Put simply, AI is a computer system imitating human behavior, only with vastly greater capacity for taking in and making sense of information. Potential uses, said Tony Jenkins, assistant director of IT initiatives at the University of Kansas Health System, stretch from billing offices to human resource departments to patient care.

“It looks through and parses all that data, and helps us find patterns that are actually beyond human recognition,” Jenkins said.

Computer processing power only recently became robust enough to sift through years’ worth of hospital data or patient records and come up with something useful. Now that it can, the possibilities could be vast, Jenkins said.

Hospitals and medical clinics constantly feel pressure to do more with less, he said. To cope with the problems of an aging population. And to make better use of the roughly 17% of the economy devoured by health care costs.

“There are a million problems that exist in the industry,” Jenkins said. “Using technology to augment (staff) is always going to be at the forefront of any sort of transformation.”

At the same time, hospitals must be vigilant about security.

How Kansas City hospitals are using it

KU has been trying out AI in many parts of its business, including in patient exam rooms. About a year ago, a handful of providers began a trial of Abridge, technology that records patient visits and then transcribes notes based on the recordings. Almost a year in, the pilot program has grown to include several hundred doctors.

“That’s allowing our clinicians to remain more focused on the interaction with the patients in the room,” Jenkins said. “They’re no longer having to be heads down, fingers on a keyboard.”

In the end, chart notes can be more accurate, too, because they come directly from a recording rather than being pieced together hours after an exam, when a doctor finally has time to sit down and do paperwork.

“The system is pulling out the medical things (to include) … and allowing the physician to confirm, validate that what the system provided was accurate and add anything extra if they would like,” Jenkins said.

It gives doctors back hours in their days, Jenkins said, something other AI applications can do as well.

At Children’s Mercy, a NASA-like operations center that relies on AI and predictive analytics cuts the time spent on paperwork. The hospital’s Patient Progression Hub helps anticipate bed capacity, speeds up patient discharges and monitors many aspects of patient care and hospital operations.

Jennifer Watts, who oversees the hospital hub, said referrals of children from other hospitals improved dramatically with AI help.

“It was a pen-and-paper game and multiple people were involved,” she said. “We had to tell our referral, ‘Let’s see if we have something available, and we’ll let you know as soon as we can.’ There were multiple phone calls in the background. … Nine times out of 10 we were still able to say yes. But the workload on our end was pretty big … Now we can say yes faster.”

Children’s Mercy’s hub has also reduced the wait time for patients ready to be discharged. Once a doctor has signed off on a patient going home, it used to take hours or even days to complete the process. But now, Watts said, most areas of the hospital have cut that to well under two hours, largely because the AI-driven system can pinpoint where the holdups are and nudge staff to clear the path.

“This was a big accomplishment,” Watts said.

Now hospital workers spend less time buried in paperwork or tied to the telephone, she said.

“The goal is for our front-line staff to do our front-line care,” Watts said. “That’s what they’re trained to do — our nurses, our doctors, our techs.”

And with some of the administrative burdens removed from doctors’ and nurses’ to-do lists, Watts said, she’s noticing fewer symptoms of burnout.

Working behind the scenes

While the health care industry is quick to tout the potential advantages of using AI, patients are generally skeptical. A Pew Research Center survey last November found that 60% of Americans said they would feel uncomfortable if their health care provider relied on AI when providing care. Meanwhile, 33% said AI would lead to worse health outcomes, 38% said it could lead to better outcomes and 27% doubted it would make any difference.

Some aspects of AI used in health care are obvious to patients. For example, when a KU doctor uses Abridge to record patient visits, the patient is informed and gives verbal consent before it’s turned on. But other AI uses in health care aren’t so transparent.

Medical technology companies embed AI into systems that providers already use to treat patients. The technology may be so seamless that a doctor may not even realize AI is involved. Neither would patients.

“They don’t have full awareness of how AI is actually embedded already so deeply into our decision-making,” Jarrett said.

That’s a major problem when it comes to AI in health care, because so many factors that could affect care are in play, she said. Patients, and importantly doctors, should know when AI is being used so they can dig deeper and find out how the AI was developed and if there are potential biases that could affect care.

If AI-enabled products are “trained” with information that is biased in some way, they have the potential to do real harm in a health care setting. If a tool was built with data gathered from white patients, for example, it might not accurately inform doctors how to treat Black or Asian patients.

Jarrett said the pandemic, which became so entwined with discussions about embedded discrimination in the country, has broadened the conversations people are having about AI in health care. It’s made people think more about how the populations involved in training models could affect care.

“Had the pandemic not happened,” Jarrett said, “we would have continued to have conversations about AI in regards to privacy and security … but we wouldn’t have started to have conversations around bias and patient impact.”

Industry stepping in to set standards

In the absence of governmental regulation, health care providers have no reason to assume that technology was developed with patient safety or medical ethics in mind. Increasingly, the industry is taking on the role of establishing standards for how health care providers can safely and ethically use AI.

Since 2021, the Center for Practical Bioethics, through its Ethical AI Advisory Council, has been mapping out AI standards it hopes all Kansas City-area health care providers will adopt. The council includes leaders from major health systems in the area; representatives from health care technology companies; community advocates; leaders in diversity, equity and inclusion; social workers; and nurses and doctors.

The Center for Practical Bioethics is also involved in directly helping implement AI at KU and Children’s Mercy, with hopes of working with other hospitals. The group helps hospitals develop standards for when AI is appropriate, and it teaches both technical and clinical staff to understand the ethics behind using the technology.

The group also wants health care providers to know what questions to ask when they consider using AI technology. For example, doctors need to ask: What has the technology developer done to mitigate bias? What algorithms were used to create it? At the same time, Jarrett said, patients need to ask who the technology was developed for and whether it would work on people of their age, race or gender.

“Developers creating these things in health care should be able to answer those questions,” Jarrett said. “But they’re not required to answer them today.”

Eventually, Jarrett wants a rating system that would easily tell patients whether a hospital has put the right guardrails around its use of AI.

“We have to be able to have these organizational policies where providers can say, ‘Oh, yeah, that is an AI product. And I know how to explain that to you and why I’m using it,’” Jarrett said.

Beginnings of regulations

As recognition grows about the potential dangers of AI, government agencies, Congress and state legislators are starting to look at how to regulate it.

Late last year, the White House issued an executive order about AI, which began by saying that the technology “holds extraordinary potential for both promise and peril.” The order specifically called out health care as an area “where mistakes could harm patients.”

Also late last year, the U.S. Food and Drug Administration updated its list of approved AI medical devices to include 692 devices, 170 more than the year before. The list includes machine learning devices, which have the potential to gain insight from vast amounts of data accumulated in hospitals and doctors’ offices every day.

The vast majority of approved devices — 87% — involve radiology, while 7% involve cardiology. A tiny percentage are used in neurology, hematology, gastroenterology/urology, ophthalmology, clinical chemistry and ear, nose and throat medicine.

The agency has not yet approved any devices that rely on generative AI, like the technology used in ChatGPT. But that doesn’t mean the technology isn’t in use in health care. Many AI trials proceed without approval because regulations are still nascent.

The U.S. Department of Health and Human Services established an AI office in 2021, and the agency has both encouraged the adoption of AI and called for establishing the HHS AI Council to work on governing it.

Meanwhile, a branch of HHS that regulates health IT published a rule last year that would require electronic health record vendors to display basic information about how a model was trained or developed.

“What were the patients that the model was trained on?” said Brian Anderson, a founder of the Coalition for Health AI (CHAI). “That would inform its accuracy on how it might perform on patients once it’s deployed.”

But while the rule calls for transparency, it does not specify standards. The rule defers to the industry on what the specific standards should be. And while many in the industry see a need for regulations, they also want a part in creating them.

That’s something CHAI is working on. The group, which includes industry leaders and regulators, last year issued a Blueprint for Trustworthy AI in Healthcare and is working on developing industry standards for the responsible use of AI.

“It’s gonna be a hard needle to thread,” Anderson said. “Part of the hope is that bringing the regulators and the innovators together, that whatever approach the regulators take, it’s not going to stifle the innovation that’s already happening.”

Right now, industry is making progress establishing guardrails. But ultimately that won’t be enough, Anderson said.

“Guardrails by their very nature are voluntary,” he said. “You could crash into the guardrails or jump over them if you wanted to.”

This article first appeared on The Beacon and is republished here under a Creative Commons license.
