How AI could change patient care, not replace it
As artificial intelligence (AI) tools like ChatGPT Health put health information directly in patients' hands, new questions are emerging about the patient-clinician relationship. Maraccini leads work in this area.
She argues that AI itself is not what weakens the patient-clinician relationship; it is poor implementation. Done well, she says, these tools should support and strengthen that relationship, not bypass it.
The following transcript was edited for style and clarity.
With ChatGPT Health recently launched, what’s different about these tools, compared with earlier generations of online symptom checkers or general-purpose chatbots?
I think the biggest thing is it is not just a symptom checker. When we think about the function of ChatGPT Health, it actually becomes a narrator of health data.
Think about somebody who is able to talk you through the results that you are seeing, which is actually really beneficial to the overall experience and how you are connecting to the information.
ChatGPT Health is not only able to deliver information; it is being used to interpret results and integrate across multiple data sources, which is huge.
The most critical component when we think about trust and relationship building in health care is that it is able to do that in natural language. Instead of just numbers — when you are seeing lab results and it is like “0.0045” — that is not all that helpful. As a matter of fact, it can be more stress-inducing.
When we think about natural language, it has this authoritative way of communicating that previous tools did not, because people are more connected to the information they are getting.
For the first time, patients are able to encounter meaning before they actually encounter their clinician. This is a really big deal. It is powerful, but it is also risky, because there is an emotional reality that we have to keep in mind with the information.
These AI platforms are in front of patients more than ever before. Can you walk through an example of how these tools could realistically prepare a patient pre-visit?
Most adults have likely been through a scenario where they are receiving health information or diagnostic test results, or have been a caregiver or family member of someone who receives that information.
Oftentimes, lab results get released at the most inconvenient times. You get an alert on your phone, typically in the evening, you see the result, anxiety peaks, and you do not know what is going to happen or how to interpret it. You immediately start Googling worst-case scenarios, and your fear starts building before you even have any context.
That’s what happens without AI. That’s what happens before the use of a tool like this.
With more thoughtfully designed AI, and the way ChatGPT Health and some of these other technologies are being leveraged, results can be translated into plain language, in the same way people often interact with ChatGPT today. Being able to translate into a patient's preferred language is a big piece of it too.
This allows more common, non-alarming explanations to be provided. As a result, when patients walk into their visit to discuss the results with the clinician, the conversation shifts from decoding acronyms to asking thoughtful, meaningful questions about the patient’s life, values and goals.
This is huge. Being able to give yourself that distance between the information and the reaction is critical.
ChatGPT Health and these other technologies can also prompt and suggest: consider asking this question at your visit, or be mindful of these things and how they relate elsewhere. It is a great way of helping to create space for the human judgment and empathy that should support these conversations.
Physicians may worry about an influx of patients bringing AI-generated “plans” or conclusions into visits. How should physicians approach that conversation so the encounter is productive, instead of combative or time-consuming?
I love this question. The reality is this is not a new practice. Patients have always brought their own information into appointments because they are trying to be resourceful and trying to show up with good intentions to educate themselves, whether it is printouts, apps or Google searches. AI just happens to be the newest version that has a component of translation and is trying to take in as much context as possible.
What I would recommend to clinicians is that the most productive path is not to dismiss the effort. Instead, look at the information and say, “Let’s look at this together. How can I help digest this?”
There is no way for ChatGPT to have the entire context. It only has the context of the information you are feeding it. The clinician is going to have a broader range of information to help paint that additional piece of the picture.
When you as a clinician are showing that you are not just an interpreter of data, it actually adds to your credibility and trustworthiness as a clinician to be able to say, “I am also an interpreter of the larger context. Let me help paint the larger picture for you.”
That is a huge piece: for clinicians to understand which parts resonated with the patient and which were most concerning, so they can bring in the rest of the context and make sure they are oriented around the information that matters most.
It’s no secret that AI can make mistakes. Are these tools truly safe for patients to use today? And what should clinicians be doing to put safeguards in place?
It’s not that AI can make mistakes — it will make mistakes. It’s not just a hypothetical, it’s a reality.
The best thing we can do to empower patients is educate them on those disclaimers, making sure we are talking about the tools people are already exploring, whether or not we are promoting them.
Let us instead enable and create education and have conversations about it proactively.
As a clinician, what I would be talking with my patients about is: How are you receiving information? What is the best way for that to be effective and resonate with you?
If AI is one of the topics that they are focused on, I would share with them: Here are things that you should consider when you are looking at that, and how to best prep for our conversations together when we digest the information together.
Instead of clinicians trying to shy away from or avoid those conversations, they should lean in proactively and teach their patients how to best use the technology.
What are some red flags you see when AI starts to displace clinicians in patients’ minds? How should health systems or practices design guardrails to prevent that?
A red flag is when the patient is saying, “It is easier to ask AI than my doctor. It is easier to interact with AI than my doctor,” or “AI is more empathetic than my doctor.”
To me, that is a red flag about the relationship and the provider’s communication skills. There is a human element around those care interactions – particularly anything happening at the bedside – around relationship-centered communication that has to be kept at the forefront.
As we see patients leaning into technology and AI capabilities more, it heightens the expectation for physicians and clinicians across the spectrum to educate themselves and lean in as the human connection point in patients’ health care.
When patients have difficulty communicating, being understood or feeling valued, it is going to be much easier for them to pivot and lean more firmly into an AI conversation.
For a practice that’s already overwhelmed, where does a tool like ChatGPT Health realistically sit in the patient journey?
There are some key moments where it would be beneficial to leverage the information that is out there, particularly around education: what to expect from an appointment and how to prep for it, so that the time the patient spends in the office is most meaningful.
A lot of the time, when a patient hears information for the first time in the office, they are stuck processing it before they can even formulate the questions they might have about it.
If patients can do their prep in ChatGPT Health or these other applications we have talked about, it can be a great way to prepare for the visit, so that when they are there, they are having a conversation with their provider instead of just receiving information. I think that is a huge piece.
Any type of post-visit reinforcement is also important. If you have some key takeaways about how to act on the information that your provider is recommending – based on your care plan and overall history – AI tools can help reinforce those, because again, clinicians can provide that full context.
These tools can also help with navigation or next steps: what to do from there and how to best connect with resources throughout the health system.
Looking ahead a few years, what should a skeptical physician watch for in their own practice to know AI is actually helping relationships and outcomes?
A key takeaway is that if AI is working in an intentional way, clinicians will not feel replaced. Instead, they are going to feel more present.
What will happen is that in the actual appointment, when you are face-to-face with your clinician — whether that is telehealth or in person — you are going to start seeing better questions, not longer visits. Visits will be more efficient, with quality time replacing the sheer quantity of time needed to address concerns, and less time spent explaining basics.
Ideally, if this is being used correctly, those basic questions will be sorted out before the visit or during those in-between interactions. That is exciting for patients, who will be calmer and therefore better prepared to have a more meaningful interaction with their provider.
Is there anything else you would add that we haven’t covered?
The last note I have to share is that when we think about the future, we should not think of it as AI versus clinicians. The question is how we use AI to create more space for meaningful human connection and trust; that, ultimately, will be the greatest outcome of those moments with providers.
Using this technology in an intentional, meaningful way will ultimately create more empowered patients and, on the provider side, less burnout and higher-quality interactions that clinicians themselves gain value from.
They will be able to provide that additional context, that more holistic picture and, ultimately, the greatest human connection back to the patients they are trying to serve and drive health outcomes for.
