The New York Times recently ran an article titled “Empathetic, Available, Cheap: When A.I. Offers What Doctors Don’t”, which should be very concerning to the medical profession, as it emphasizes three things that doctors often are not. But it probably won’t concern the real decision makers in healthcare – the corporate owners, “health systems”, insurance companies, and private equity. After all, their concern is solely making money, and they are doing just fine, thank you.
The article indicates that AI seems to be responsive and nice to people, and seems to show respect, concern, and empathy; “seems to” is important, because these are computer programs, not people, and they don’t have any feelings. Nonetheless, people feel better when they are addressed with respect, concern, and compassion, even if that is programmed and not real. The truth is that doctors and other actual people do not always address them that way, for a variety of reasons. And they don’t even have the chance to if the patient cannot contact them, which is so common these days as to be routine.
For many years, I told medical students that, while they had worked very hard to master the language of medicine, learning idioms, jargon, eponyms, and acronyms so they could fit in and impress their seniors, residents, and attending physicians, regular people would not understand them if they spoke like that. They had to be able to translate that back into their first language, English (or whatever their vernacular was). This is an important skill, for without it people (“patients”) won’t understand what you are saying, and won’t know what is going on with them. And that is important. It takes effort, and it takes intentionality – you must want the person to understand what you are saying. That is true even if what you are telling them is bad news, something that will make them upset or unhappy.
I thought about this after a recent conversation with a couple of current medical students. I made the points above about the importance of communicating in a way people can understand, and observed that, in fact, people often did not understand. This was based on, among other things, the number of times I had to try to explain to my patients, as a family doctor, what their specialist was saying. And the number of times I had to try to figure out, as a family member or friend, what my family member’s or friend’s doctor had been telling them that led them to come away with what seemed to be an incorrect understanding of the situation. I have even said, “If you assume that no one ever understands anything their doctor tells them, you will be correct a distressing percentage of the time.”
The students agreed, but when they gave examples from their experience, I became more concerned.
“A surgeon I worked with was unable to get all of the cancer out, but when telling the patient used all kinds of technical and unfamiliar terms, like ‘clean margins’. It was like they were trying not to lie, but to obfuscate what they were saying by talking in words and phrases that were technically true but not meaningful to the patient. I was left, after the surgeon had gone, to try to respond to the patient, who asked me, ‘What did they just say?’”
Obviously, this should not be the job of the medical student, but of the surgeon. And while it is tempting to say, “Well, they’re surgeons; communication is not their strength” (and while, as a family doctor, I like to think we are better at it), most or all doctors are guilty of this sometimes. (It is also true that communicating is even harder when you have to acknowledge that the bad news may, in fact, be the result of something you did wrong, but that is a separate issue.)
I have recently had experience with close family members who had complications during procedures. One, during an endoscopy, had their blood oxygen level drop and had to have a breathing treatment afterwards, receiving a new diagnosis of asthma. This was upsetting, but at least they were told everything. Another, in a much more concerning episode, had major lung surgery. After the surgery, they had terrible, persistent pain which was not adequately treated. Several months later, visiting another doctor (not the surgeon), they were told that their oxygen level had also dropped severely, as a result of a pneumothorax, a serious, potentially dangerous condition in which air gets into the chest cavity and can partially collapse the lung. More relevant here, it can be terribly painful, which might explain why the pain was so severe and why the nurses, following their pain-management algorithms, did not give the patient enough medication to control it. It is still not clear whether the nurses were told their patient had a pneumothorax, but it is definitely clear that the patient, my family member, was not. They should, of course, have been.
There are a lot of potential problems with AI providing people medical information, some of which are discussed in the Times article. For one thing, it could be wrong. It doesn’t really know you, and part of the reason that you are consulting the medical AI (or a real clinician) is that you don’t actually know exactly what is wrong with you, or how to put your question in terms that will get you the correct answer, even if the AI is capable of providing one. Of course, sadly, the same can be true of real doctors, especially when you don’t actually speak to them; the article leads with the story of a person who wanted advice on how to increase the protein in their diet, and received generic – and unhelpful – answers from the physician online (presumably through a “patient portal”). For all we know, those answers could have been AI-produced.
It would be much better – some of us would say essential – for doctors to communicate fully and honestly with their patients, using language that they can understand, even when the news is not good. And for them to be there, being, well, patient, while their patient tries to formulate questions, and then to answer those questions. But there are a lot of reasons that they don’t, or can’t.
Part of it may be that they are poor communicators, or uninterested in having their patients understand everything, especially if it could be embarrassing or take a lot of time. But AI doesn’t have that problem. It is not paid by the patient, and it has no set number of people it has to see in a given amount of time the way real clinicians do. Those actual clinicians often work in hamster-wheel conditions (time spent not only seeing patients but doing electronic charting aimed at maximizing revenue by upcoding as much as possible) which are not the fault of the doctor but of their employers, who are interested in “throughput” to make as much money as possible. Saliently, procedures are relatively well reimbursed, but spending the time necessary to talk to a person to be sure they completely understand what is going on is not. Of course, this is also part of the reason that fewer students are entering primary care and more are entering better-paid, procedure-based specialties.
Having a health care system that valued, and paid for, communication would be good. It would have to start with a system designed to maximize the health of our people, not corporate profit. Yes, there would still be some doctors who communicated poorly, and even made poor medical decisions, but they could be dealt with as individuals, rather than having poor communication intrinsically encouraged by the system.
Doctors could and should do better, and maybe there is a place for AI. But there is no place for profit in healthcare.