Arif Kamal, MD MBA MHS FASCO FAAHPM
AAHPM President
Since its founding, palliative care has been characterized by therapeutic presence, humanistic warmth, and multidisciplinary support. These practices distinguish the field, and artificial intelligence (AI) may feel like a threat to the ethos of the specialty. The rapid incorporation of AI into healthcare delivery has brought worry that care may become sterile, impersonal, or less human. For decades, we have counseled patients and caregivers about the risks and benefits of technology-based interventions—not because of our predilection for being Luddites, but because sometimes a person’s values and wishes do not align with the newest and shiniest technologies. As clinicians in other specialties incorporate AI more deeply into their work, how do we manage its integration into the world of serious illness care without losing the human touch that differentiates our work?
Some people may perceive AI as a threat to the “high touch” delivery of palliative care, but I see an opportunity and an ally, particularly if the “A” in “AI” is reimagined as “assistive”: helping humans be smarter, more efficient, and more resilient at their jobs. In that view, AI is a complement to human-delivered palliative care, assisting us much as opioid conversion tables and prognostic calculators do, and serving a critical adjunctive purpose: helping us be more productive and better at our work.
At its essence, AI harnesses the mathematical skills of computers at speeds and with complexity beyond human capabilities. Just like humans, given too much power and too little guidance, computers can get a bit wonky (sometimes referred to as hallucinating), tell you what you want to hear (even if it’s not true), or aim to be more helpful than factually possible. Like my 7-year-old son Elias, AI responds to guidance, training, and feedback—but it also cannot be trusted on its own to answer every question or solve every problem. I’m reminded of the old adage “trust but verify”—there will always be a role for human oversight and direction in the delivery of palliative care. And, when things go differently than planned, compassionate clinicians will always be needed to provide context where it’s absent and clarity when situations are mystifying.
Standing at the precipice of enormous possibilities and several unknowns, I see three areas where palliative care could be revolutionized through the assistive intelligence brought by computers.
Finding Who Needs to Be Found
There is a common saying in the world of healthcare quality improvement: “First do it the same, then worry about doing it right.” This statement reminds us to first eliminate unnecessary variation in a process and then give attention to the best, right way to conduct that process. Applied to the evolution of palliative care delivery, we have spent the last 2 decades giving considerable attention to standardizing our processes. Referral criteria, note templates, and training programs have contributed to smoothing out site-by-site variation in practice. Most consultation services use some kind of referral criteria (eg, the surprise question, prognostic threshold, acute care utilization threshold), and the composition of teams and the issues addressed are roughly similar across the country.
For the next step, “doing it right,” how do we achieve truly person-centric care? Ideally, we identify which people could benefit from consultation, time the initiation of our specialty services upstream of extremis or decline, and focus on the right issues with the right team members. Such an approach would move us away from a one-size-fits-all model and toward the Institute for Healthcare Improvement’s ideal: the right care, at the right time, to the right person.
This is where AI comes in. The benefit of accelerated computation is the ability to collate, analyze, and interpret millions of data points to produce a recommendation. Imagine a hospitalist rounding on 20 patients in a given day. Typically, they would be required to collate and analyze current and historical electronic record data to determine whether each patient could benefit from palliative care consultation. What if AI assisted by analyzing data from the electronic health record alongside other contributing factors, such as information about the patient’s neighborhood, caregiver health history, and pharmacy history, as well as passively collected smartphone data on activity level and communication habits? Working backward from a bad outcome (eg, an unexpected emergency room visit), the AI assistant could rank order relative risk among the patients on the census, enabling the hospitalist to refer those who need specialty palliative care the most. Furthermore, while AI could identify what factors are causing distress today, it could also predict what will worsen tomorrow, suggesting the urgency of the consult request and which team members (eg, chaplain vs nurse) should attend the visit. Using this approach, AI suggests how palliative care services can be tailored to patient risk and need, ensuring patients get the right care to address both the distress we can see and the distress on the other side of the next hill.
From One to Many
Zooming out, we can also see the applications of AI assistance at the population health level. As primary palliative care skills are learned and strengthened among all clinicians, specialty palliative care services will increasingly be needed for those with the most complex, time-intensive, or urgent needs. In the current wait-and-see approach, healthcare has been designed to be reactive, requiring patients to face a crisis and visit the emergency department or be admitted to the hospital before wrap-around systems are activated. In a more proactive approach, AI could serve as a continuously updated radar system, with population health clinicians (eg, navigators) sitting in the seat of air traffic control. Watching over a large population, AI could “see” worsening trends and provide guidance to intervene before a patient faces a crisis.
Imagine an older woman with several serious illnesses, including debility. A continuously monitored radar system collating data points (activity level, mood, social connectedness, disease trajectory, symptoms) provides a real-time tracker of her risk of serious injury (eg, a fall) or an unpreferred outcome (eg, hospitalization) and suggests deploying home-based palliative care services when that risk is elevated. In another scenario, a man with chemotherapy-induced neuropathy is predicted to have worsening symptoms when the AI-assisted system detects upcoming changes in temperature and relative humidity, notes partial fills of prescribed anticonvulsants for symptom management, and predicts that his daughter, his primary caregiver, will not be able to check on him because of travel difficulties from living across town. Based on a calculated “elevated risk” of poor outcomes, the system prompts a navigation phone call to the patient to encourage certain behavior changes (eg, warm socks) and communicates with his pharmacy to dispatch a delivery person to bring additional medication to his home.
Importantly, these patient cases rely on comprehensive and accurate data collection. Because many people with serious illness use a smartphone, and because passive voice/video/sensor data collection mechanisms are ubiquitous (eg, home security, digital assistants like Alexa, smart appliances and TVs), the challenge we increasingly face is one of collation, analysis, and interpretation. Then, with that information, a prediction of the next best step can be formulated. AI is well poised to perform these steps, faster and more efficiently than any human.
Compassion Preservation in the Workforce
Many of us are in palliative care for personal reasons: an experience with a loved one, a strong connection to humankind, a knack for empathy. While we embrace our calling, it’s not without its challenges. We’re no strangers to what it feels like to be understaffed or overworked, or to the heartache and energy drain that may come from witnessing someone else’s suffering as a daily experience. While these experiences define what it is to be therapeutically present, there are ways AI can make the tough work we do more sustainable.
First is the issue of productivity. Most experts in the field agree that up to 80% of clinician burnout is secondary to systems issues, whereas only 20% is related to personal factors. While much attention has focused on individuals and their wellness, there is little dispute that the complexity of being a clinician extends far beyond face-to-face time with patients and direct clinical care. Clinician productivity, and overall satisfaction with work, could be measurably increased if administrative tasks were offloaded to AI assistants. Tasks related to patient messaging, clinical documentation, email response, prescription refills, and clinical decision support have already been made easier through AI. Using such digital assistants would free up clinicians to be present with the patient.
Next is the issue of scalability. For some, interactions with an online digital agent are commonplace, from changing airline reservations to ordering groceries. Imagine how the breadth of palliative care assessments and activities could expand if digital agents could converse with patients and caregivers about their hopes, worries, and plans. For example, an AI-powered goals-of-care chatbot could initiate an advance care plan and associated documents using the results of a two-way text- or audio-based conversation. The clinical team could then follow up on the drafted plan to further explore context and nuance while providing compassionate commentary. In this interplay, AI begins the process, and humans insert themselves when it’s time for their highest and best use.
Lastly is, admittedly, a coolness factor. Palliative care has always been a bit counterculture, zigging when others zag. Early in my career, when I explained prognostication, the art and science of interpersonal communication and negotiation, and the boundary-pushing culture of symptom management, learners in my palliative care clinic would react with wide eyes and a sense of wonder. We professionalize compassion, taking something that seems obvious and easy and delivering it with advanced skillfulness, meticulous precision, and heart. We have also embraced tools and aids before others did, and in new and expanded ways. Why not be at the forefront of AI and health care, both to facilitate its use to scale compassion and to ensure its use always puts people first?
Our Chance to Lean in and Shape AI
If one thing is certain, it’s that AI is here to stay. I encourage you to learn with an open mind and to lean into opportunities that could impact our day-to-day work with colleagues and patients. If there is an AI committee or task force at your workplace, get a seat at the table. If other departments are leading AI efforts, they may not consider, or fully understand, how it relates to palliative care. We need representation in these rooms. This is our chance to use our voices, our expertise, and our compassion to shape AI for good.