Question for your doctor? Artificial intelligence can help.

FAN Editor

Health systems are turning to artificial intelligence to solve a major challenge for doctors: seeing a steady flow of patients while also responding promptly to people’s messages with questions about their care.

Physicians at three health care systems across the U.S. are testing a “generative” AI tool based on ChatGPT that automatically drafts responses to patients’ queries about their symptoms, medications and other medical issues. The goal is to cut down on the time doctors spend on written communications, freeing them up to see more patients in person and to focus on more medically complex tasks.

UC San Diego Health and UW Health have been piloting the tool since April. Stanford Health Care, considered one of the country’s leading hospitals, expects to make its AI tool available to some physicians beginning next week. About a dozen physicians are already using it regularly as part of the trial.

“Patient messages in and of themselves aren’t a burden — it’s more of a demand-capacity mismatch,” Dr. Patricia Garcia, a gastroenterologist at Stanford who is leading the pilot, told CBS MoneyWatch. “Care teams don’t have the capacity to address the volume of patient messages they receive in a timely way.”

The tool, a HIPAA-compliant version of OpenAI’s GPT language model, is integrated into physicians’ inboxes through medical software company Epic’s “MyChart” patient portal, which lets patients send messages to their health care providers.

“It could be a great opportunity to support patient care and open up clinicians for more complex interactions,” Dr. Garcia said. “Maybe large language models could be the tool that changes the ‘InBasket’ from burden to opportunity.”

The hope is that the tool will lead to less administrative work for doctors, while at the same time improving patient engagement and satisfaction. “If it works as predicted, it’s a win across the board,” she added. 

Can AI show empathy?

Although corresponding with the new generation of AI is no substitute for interacting with a doctor, research suggests the technology is now sophisticated enough to engage with patients — a vital aspect of care that can be overlooked given America’s fragmented and bureaucratic health care system.

Indeed, a recent study published in the journal JAMA Internal Medicine found that evaluators preferred ChatGPT’s responses over doctors’ answers to nearly 200 queries posted in an online forum. The chatbot’s responses were rated higher for both quality and empathy, the authors found.

Dr. Christopher Longhurst, an author of the study, said this shows that tools like ChatGPT offer enormous promise for use in health care.

“I think we’re going to see this move the needle more than anything has in the past,” said Longhurst, chief medical officer and chief digital officer at UC San Diego Health, as well as an associate dean at the UC San Diego School of Medicine. “Doctors receive a high volume of messages. That is typical of a primary care doctor, and that’s the problem we are trying to help solve.”

Notably, using technology to help doctors work more efficiently and intelligently isn’t revolutionary. 

“There’s a lot of things we use in health care that help our doctors. We have alerts in electronic health records that say, ‘Hey, this prescription might overdose a patient.’ We have alarms and all sorts of decision support tools, but only a doctor practices medicine,” Longhurst said.


A preview of the dashboard used in the UC San Diego Health pilot, shared with CBS MoneyWatch, illustrates how doctors interact with the AI. When they open a patient message inquiring about blood test results, for example, a suggested reply drafted by the AI pops up. The responding physician can choose to use, edit or discard it.

GPT is capable of producing what Longhurst called a “useful response” to queries such as “I have a sore throat.” But no message will be sent to a patient without first being reviewed by a live member of their care team.

Meanwhile, all responses that rely on AI for help also come with a disclaimer.

“We say something like, ‘Part of this message was automatically generated in a secure environment and reviewed and edited by your care team,'” Longhurst said. “Our intent is to be fully transparent with our patients.”
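For readers who want a concrete picture of the workflow described above, here is a minimal, purely illustrative Python sketch of the human-in-the-loop pattern: a model drafts a reply, a clinician may use, edit or discard it, and the disclaimer is appended only to a clinician-approved message. All of the names below, including the draft function, are hypothetical stand-ins rather than the actual Epic or OpenAI integration.

# Purely illustrative sketch -- not Epic's or OpenAI's actual integration.
# Every name here (generate_draft, prepare_reply, OutgoingReply) is
# hypothetical and exists only to show the human-in-the-loop pattern.

from dataclasses import dataclass
from typing import Optional

DISCLAIMER = ("Part of this message was automatically generated in a secure "
              "environment and reviewed and edited by your care team.")


@dataclass
class OutgoingReply:
    text: str
    approved_by_clinician: bool


def generate_draft(patient_message: str) -> str:
    """Stand-in for a call to a HIPAA-compliant language model.

    A real deployment would run inside the health system's secure
    environment; here it simply returns canned text for demonstration.
    """
    return f"Thank you for your message about {patient_message!r}. ..."


def prepare_reply(patient_message: str,
                  clinician_text: Optional[str]) -> Optional[OutgoingReply]:
    """Draft a reply, but release nothing without clinician sign-off.

    The clinician sees the draft and can use, edit or discard it. Only
    text they approve is sent, with the transparency disclaimer appended.
    """
    _draft = generate_draft(patient_message)  # shown to the clinician in the inbox
    if clinician_text is None:
        return None  # no review yet, so nothing goes to the patient
    return OutgoingReply(text=f"{clinician_text}\n\n{DISCLAIMER}",
                         approved_by_clinician=True)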

So far, patients seem to think it’s working. 

“We’re getting the sense that patients appreciate that we’ve tried to help our doctors with responses,” he said. “They also appreciate they’re not getting an automated message from the chatbot, that it’s an edited response.”

“We need to be careful”

Despite AI’s potential for improving how clinicians communicate with patients, there are a range of concerns and limitations around using chatbots in health care settings. 

First, for now even the most advanced forms of the technology can malfunction or “hallucinate,” providing random and even erroneous answers to people’s questions — a potentially serious risk in offering care. 

“I do think it has the potential to be so impactful, but at the same time we need to be careful,” said Dr. Garcia of Stanford. “We are dealing with real patients with real medical concerns, and there are concerns about [large language models] confabulating or hallucinating. So it’s really important that the first users nationally are doing so with a really careful and conservative eye.”

Second, it remains unclear if chatbots are suitable to answer the many different kinds of questions a patient might have, including those related to their prognosis and treatment, test results, insurance and payment considerations, and many more issues that often come up in seeking care.

A third concern centers on how current and future AI products ensure patient privacy. With the number of cyberattacks on health care facilities on the rise, the growing use of the technology in health care could lead to a vast surge in digital data containing sensitive medical information. That raises urgent questions about how such data will be stored and protected, as well as what rights patients have in interacting with chatbots about their care.

“[U]sing AI assistants in health care poses a range of ethical concerns that need to be addressed prior to implementation of these technologies, including the need for human review of AI-generated content for accuracy and potential false or fabricated information,” the JAMA study notes.
