AI in Healthcare: Are We Expecting Too Much?

By Lindsey Jarrett, PhD
Vice President for Ethical AI, Center for Practical Bioethics

Dr. Jarrett was the keynote speaker at the University of Missouri Data Science Week in Columbia with Dr. Timothy Haithcoat.

This month marks five years since the onset of the COVID-19 pandemic and 10 years since I began my career in digital health. As I reflect on this past decade, I think about all the problems I’ve tried to solve, and all the problems that still exist today, waiting to be solved.

I came into technology a bit by accident in 2015, as I was on my way out of my doctoral program. I knew I needed something different from a traditional academic path, and I was excited about all the new tools scientists would be able to use and how those tools might accelerate the pace of discovery.

People were relying more on smartphones, no longer needing their computers for a reliable connection to the internet. This opened a whole new world of app development and faster communication.

We were also living in the world of “big data,” with huge datasets to work with and hopes of making broader and better decisions. The healthcare technology space, now more commonly known as the digital health industry, was learning from companies like IBM, Netflix, Amazon and Google how to develop the best predictive tools with as much data as possible.

However, amid the excitement, data and technology regulations loomed on the horizon. Regardless of what regulations might make their way into the intersection of healthcare and technology, there was a desire across the globe to have all our fancy new devices in our pockets, in our homes and on our wrists to connect, to share information and, inevitably, for healthcare companies to use to understand us better. It was an incredible time to enter tech and to grow as a clinical researcher.

An Ethical Dilemma

The digital health industry has grown significantly over the last 10 years, and society has been on a path toward some level of acceptance of technology in healthcare. At the same time, there has been warranted hesitancy, often rooted in privacy concerns and a lack of trust in machines used for human decision-making. These concerns remind us that there are risks, and that maybe we should take time to consider them.

When we talk about risk, we think about what could go wrong. If something does go wrong, will it hurt someone? Will it take something from someone who doesn’t deserve it? Will we lose money by taking this chance?

These are all valid human reactions, especially to innovation; depending on where you sit, you may evaluate the risk differently, see its benefits differently, and determine how little or how much risk you are willing to take.

This is what it feels like to be in the middle of an ethical dilemma, one that asks (and often forces) us to weigh our desire for innovation in hopes of a better life against our fear of giving technology too much power. This dilemma is not new to healthcare, as the industry has leveraged many past technological advances, such as medical devices, pharmaceuticals, pharmacogenomics and robotics. These advances have shown promise but have not been without their pitfalls, which now provide us with a baseline for how we might think about the development and adoption of AI-enabled tools.

Synthesize and Summarize

Recently I read an essay in The New York Times, “The Robot Doctor Will See You Now,” by Pranav Rajpurkar and Eric J. Topol. The piece framed AI as a positive tool, one with the potential to do more good than harm. But I’m not sure the authors knew, from the first word to the last, that they had proven my point.

I connected with their idea that we can “find a role [for AI] that doctors can trust,” but I was confused by the argument that these tools could lessen the workload burden for providers, offering promise “for underserved areas.” The authors promoted awareness of regulation, clinical education and liabilities, while also advocating for the promise of “fewer bottlenecks, shorter waits, and potentially better outcomes.” These concerns and promises surrounding AI are not linear, not black and white, and probably not solved with the types of tools we deploy in healthcare today.


We see success in radiology and in tools that help us synthesize and summarize information, like ChatGPT. I understand this and do not disagree. However, taking that success and leaping forward, expecting our current AI products, which are trained for identification and summarization, to predict diagnoses and connect patients to appropriate treatment options the way a human does, is currently irresponsible.

Do I want this? Sure. I would love to bring back my friends who died from cancer before the age of 40 because they caught their cancer sooner and precision medicine worked. I would love for my dad not to have debilitating symptoms of Parkinson’s disease because we used technology to create medicine that targets the disease more effectively. I would also like my mom to have received care accurate enough to predict her heart disease, so she didn’t require an emergency quadruple bypass. All these things show up in someone’s sales pitch for the newest AI tool, but I argue that we are expecting too much from one thing.

Who Gets to Decide?

This is where I think my thoughts align with the authors. They briefly argue that an “effective approach is to let AI operate independently on a suitable task so that physicians can focus their expertise where it matters most.”

However, they leave out the most important piece: Who gets to decide? My work in this space raises this very question with everyone I speak to. Who gets to decide? This question has been at the center of digital health and technology development, especially over the last 10 years, and we have a chance to put humans in that role. If the authors of this piece, and I would argue anyone invested in digital health advancement, want people to trust these products, then the work must start with the people.

This piece is among the hundreds written each day about the promise of AI, yet I’m sure it will catch the attention of those like me who sit in this ethical dilemma every day, wanting so badly to see AI help people rather than harm them, but knowing that it is not that simple, and that we must, as humans, keep doing the work to keep the risk low and the benefit high. Do the work. I know I will.

