Clinical Scenario
You are delighted to see your first patient of the day, an 80-year-old female with Sjögren’s syndrome and various immunological deficiencies for whom you have cared for many years.
Miss Vivian, as she wishes to be called, always comes to her appointments “dressed to the nines.” Before retirement, Miss Vivian was a science teacher at the secondary school level and maintains a keen curiosity about the topic, including current advances that intrigue her intellect.
Part of your joy in caring for Miss Vivian is the time for light conversation and catching up with events in each other’s lives. Today, however, Miss Vivian seems too troubled for small talk and inquires if you will discuss a concern that is currently bothering her.
You: “Of course, Miss Vivian, let’s talk about what is bothering you.”
Miss Vivian: “Doctor, you have cared for me for many, many years, and I have greatly appreciated our relationship, the time you take to listen to me, and the excellent care you have provided. You are a very caring, courteous, and honest person.”
You: “Well, thank you. That is always my intent. Please, continue with your concern.”
Miss Vivian: “Well, I have been reading so much about this artificial intelligence in medicine, and how it is going to take over patient care. The information I read indicates that there are risks for information security, incorrect diagnoses, overreliance on artificial intelligence, and poor regulations on the procedures that guide what artificial intelligence does with its diagnosis and treatment guidelines. I am really worried about all of that, especially at my age.”
You: “Miss Vivian, I well understand your concerns, and, honestly, I have some of the same concerns you have. This AI movement in medicine is moving so fast, many physicians cannot understand or keep up with it.”
Miss Vivian: “Exactly, Doctor. But, even given all of that, it is not my main anxiety. What really bothers me most is how much will you have to rely on the computer and artificial intelligence to care for me, and to be honest, I worry that our patient–physician relationship, which means so much to me, will be negatively affected. I am so pleased with what we have now and do not want to be cared for by a computer robot. Will you still be my ‘real’ doctor?”
Discussion
This is not merely a fictional narrative to introduce the topic of patient concerns about AI in their healthcare. Colleagues report that conversations like this one are arising with increasing frequency across the full range of their patient populations. It cannot be assumed that awareness of, and apprehension about, the rapidly expanding introduction of AI into healthcare are limited to patients with a particular interest in science or current events. Patient concern is real, and physicians must understand it so that it can be addressed and mitigated.
As Miss Vivian points out, patient concerns include, but are not limited to, the security of personal health data; threats to autonomy and choice; potential increases in healthcare costs; the opaque "black box" processes by which AI surveys data and reaches conclusions; the validity of AI interpretations of data and its recommendations; accountability for errors and misinformation; and the impact of AI-generated guidelines on the sanctity of the patient–physician relationship. There are also alarming and misleading speculations about physicians being replaced by avatars or virtual robots. It is clear to this author that physicians have an ethical obligation to consider these misgivings and to address them with patients who present such concerns.
The use of AI in healthcare is not new, but it is currently far from universal. Pathology and radiology have been the foremost focus for the application of AI to diagnosis, although programming and user interface design are under study for many other specialties. Both large and small language models (LLMs and SLMs) are machine learning systems with many potential applications in otolaryngology–head and neck surgery. Those applications have not yet been fully realized but will likely expand rapidly, given the velocity and trajectory of AI research in healthcare. While AI may hold great promise for efficiency and effectiveness in patient care, both obvious and potentially hidden harms and ethical concerns need to be addressed, sooner rather than later.
It is predicted that AI and AI-supported applications will be able to generate summaries of pertinent data far more comprehensive than any individual physician's literature search could provide. There is then the question of how AI-derived data will bear on clinical decision-making and patient choice. While large-scale data analyses can provide population-based treatment guidance, only the physician can discern whether the recommendations accurately apply to a given patient. That is where clinical judgment and open communication with patients matter.
Other patient concerns may include the risk of therapeutic recommendations based on incorrect data, particularly if physicians begin to rely exclusively on AI for guidance; flawed data could lead directly to patient harm. Where will responsibility and accountability be placed for medical liability arising from incorrect or misleading AI-generated therapeutic recommendations? How will responsibility and accountability be determined in the event of a missed diagnosis or therapeutic mishap? These are concerns for both the patient and the physician.
Patients may rightly be concerned about the security of their personal health information, particularly in this age of hacking and stolen data. How will the design and implementation of AI systems account for these security risks? We now tend to believe that no data are safe from criminal intent and theft, so we must be transparent, both as physicians and patients ourselves and with the patients we serve, about risks and security breaches that exploit the inadequacies of algorithms. Could AI-supported information be used for nefarious purposes? One's imagination can run rampant on that question. Patients rightfully expect their most personal health information to remain private, and they must be assured that this is the case.
Another issue of both interest and concern is whether AI-generated diagnostic determinations and therapeutic recommendations should require informed consent, as is currently the case for procedural and other clinical interventions. If so, a patient could opt in to or out of the use of AI in their care through signed consent. The consent could be part of a patient education document explaining how and when AI information would be used in their care, and, as with directives to physicians, it could offer multiple choices. Recognizing the benefits, risks, and alternatives of AI-supported healthcare decision-making is both timely and necessary for the well-being of the patient.
The final issue of importance to the ethical care of patients lies in the proper conduct, development, and maintenance of the patient–physician relationship, which is sacrosanct to medical professionals. Physicians are empowered to care for patients by acquiring knowledge, garnering clinical experience, and developing acumen and judgment. The virtues of honesty, transparency, trustworthiness, compassion, understanding, kindness, duty, and wisdom, in the context of the four ethical principles of patient care—autonomy (self-determination), beneficence, nonmaleficence, and social justice—must be at the forefront of all of our clinical endeavors.
As we consider how to respond to Miss Vivian's concerns and anxiety, we must realize that our first duty is to listen: to let Miss Vivian know that we want to understand her concerns and will not brush them aside as unimportant or trivial. To Miss Vivian they are real, and many physicians share them. This is an opportunity to reassure her of the importance you place on the patient–physician relationship, and to let her know that you will protect, as far as is humanly possible, the integrity and security of her personal health information. You can then share what you know about the trustworthiness of any AI-derived information you would apply to her care, emphasizing that AI will never supplant your responsibility to personalize her care, to respect her self-determination, and to apply AI data with integrity and discernment through clinical judgment and joint decision-making. A physician's wisdom and bedside manner are neither machine generated nor machine learnable. The "art of medicine" is as important as the science of medicine, and we must live that responsibility with each and every patient.
Dr. Holt is professor emeritus and clinical professor in the department of otolaryngology–head and neck surgery at the University of Texas Health Science Center in San Antonio.