AI, health care and the realities of being human

There exists something in medicine called the “doorknob phenomenon.” It’s when a doctor, just about to leave the company of a patient (with a doorknob possibly in grasp), finally hears them divulge what has been weighing most heavily on their mind. Recently, a form of this phenomenon happened to me.

In the midst of a busy outpatient clinic, I was handed a chart. Attached to it was a referral note that read, “lymphadenopathy NYD,” which meant swollen lymph nodes that were “not yet diagnosed.” My patient was a middle-aged man, and quiet. I gleaned from talking with him that he lived an uneventful life. Pea-shaped lumps dotted his neck and armpits, and as I rolled them between my fingers, I flipped through my mental checklist of potential diagnoses: the effects of medications, infectious exposures, the possible spread of a cancer. I wasn’t yet sure what caused them, I told him. I tried to be reassuring.

But as the exam went on, I noticed a change in his expression. As I started to explain the tests I would order, his feet began to shift, and he tugged nervously at his sleeves. “I get the sense something else is bothering you,” I said. His eyes grew glossy, and he began to cry.

In moments like these, I stumble. As a physician, I keep my focus roped tightly around a very narrow notion of healing: I follow a blueprint of tabulating signs and symptoms, rendering diagnoses, and setting forth treatments. But in practice, this script is prone to unravel, because at its end lie the larger, fraught forces that truly affect my patients’ health.

I spend a significant amount of time learning about my patients’ plights: the lack of money and food, the troubles with finding a job or a house, or the addictive substances that can hold a person tight in their grip. These situations are so often messy and convoluted, and so rarely follow a straight line to a solution, that I increasingly wonder: How do they fit into a future of health care that strives to be so tidily packaged, helmed by the algorithms of artificial intelligence, or AI, that many are so eager to adopt?

Physicians are generally loath to part with the ways passed down to them. But many now share a belief that AI holds a worthwhile promise of making doctors more clinically astute and more efficient. For an overworked lot, the technology glitters with the appeal of a paradigm shift.

Yet how patients will receive the insights of these algorithms is far less clear. For instance, I can imagine growing more distant and less empathetic toward the sincere concerns of my patients if AI were to do the emotional heavy lifting for me, drafting email replies “infused with empathy,” as suggested by a study recently published in JAMA Network Open. Algorithms, we also know, are coded with, and can perpetuate, health biases based on factors such as race and ethnicity. They can opaquely label patients who need opioids as people with potential addictions, for example. And although AI might reduce our workload, in some cases by a sizable amount, we must be careful about the risk of overdiagnosing patients.

Adopting a more formulaic approach to medicine, it seems, is part of a natural evolution. Clinical scoring systems, which I and many other physicians regularly employ, help predict aspects of our patients’ health that we cannot reasonably foresee. Feeding in certain parameters, such as heart rate, age, or a measure of liver function, lets us retrieve various likelihoods: the chance of a blood clot in the lungs, of a heart attack in the next decade, or of a favorable outcome from steroids for someone whose liver is inflamed by alcohol.
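To make that concrete, here is a minimal sketch, in Python, of how one such tool works: the Wells score for estimating the probability of a pulmonary embolism, the blood clot in the lungs mentioned above. The point weights follow the commonly published criteria, but the code, its names, and its layout are illustrative only, not a clinical instrument.

```python
# A minimal, illustrative sketch of a clinical scoring system:
# the Wells score for pulmonary embolism. Point weights follow
# the commonly published criteria; this is a teaching example,
# not a clinical tool.

WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,          # leg swelling, tenderness
    "pe_most_likely_diagnosis": 3.0,       # no better alternative explanation
    "heart_rate_over_100": 1.5,
    "recent_immobilization_or_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,                     # coughing up blood
    "active_malignancy": 1.0,
}

def wells_score(findings):
    """Sum the points for every criterion the patient meets."""
    return sum(points for name, points in WELLS_CRITERIA.items()
               if findings.get(name, False))

def risk_category(score):
    """Map a raw score onto the conventional three-tier risk bands."""
    if score > 6:
        return "high probability"
    if score >= 2:
        return "moderate probability"
    return "low probability"

# Example: a patient with a racing heart and a prior blood clot.
score = wells_score({"heart_rate_over_100": True, "previous_dvt_or_pe": True})
print(score, risk_category(score))  # 3.0 moderate probability
```

A handful of bedside findings go in; a tidy risk band comes out. That tidiness is exactly what the rest of this essay weighs against the messier realities of a patient’s life.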

Yet the more familiar we become with these methods, the less certain we know those truths to be. A person’s health follows a multipronged course set as much by the mysteries of their biology as by the realities of where they happen to live, grow, and work.

Discrete outputs don’t always reflect discrete inputs. And as we steer the understanding of our health toward the very minutiae of our genetic and molecular makeup, what falls out of focus is the bigger picture of how fundamental social systems support our existence. The market for AI in health care was valued at more than $11 billion in 2021, according to the global data platform Statista. By the end of the decade, depending on which projection you read, that number is expected to grow tenfold. Meanwhile, state and local health departments get short shrift. Their efforts to address gaps in critical areas like housing, education, and mental health remain chronically underfunded and subject to stifling budgetary cuts.

I often think about what gives meaning to the help we offer patients, and how that might be shaped by the precision and calculations of a looming technological revolution, or by greater investments in the social safety nets that undergird our patients’ lives. But, in the end, I come to the same conclusion: that neither is of any use without the uniquely personal connections that sustain us.

I do not see my patients as an assemblage of data points, just as they, perhaps, do not see me as merely a central processing unit in the flesh. Understanding them more deeply means inhabiting these human moments together and bringing my focus beyond the neat tracks an algorithm purports to lay out. Endeavoring to put what ails us in a greater context requires maintaining a curiosity for all of life’s wrinkles. A context, ultimately, that must recognize my patients for what they are, and will always be, as I am: distinctly, imperfectly, human.

An AI, one day, might diagnose the cause of my patient’s swollen lymph nodes, parsing a database through a sequence of responses to the software’s questions, all before I have even met him. (It would take me until a second visit a few weeks later to reach a diagnosis.) But speaking to my own uncertainty and human limits, I believe, brought him an acceptance of his. He had moved from the city to a rural area to help his aging parents, and they soon passed away. This had left him in despair: jobless, depressed, and addicted, he told me.

“We can help you,” I said. “We’ll take it slowly. One step at a time.”


Arjun V.K. Sharma is a physician whose writing has appeared in the Washington Post, the L.A. Times, and the Boston Globe, among other outlets.

This article was originally published on Undark.
