Yale study shows how AI bias worsens healthcare disparities

A new research report from Yale School of Medicine offers an up-close look at how biased artificial intelligence can affect clinical outcomes. The study focuses specifically on the different stages of AI model development, and shows how data integrity issues can impact health equity and care quality.

WHY IT MATTERS

Published earlier this month in PLOS Digital Health, the research gives both real-world and hypothetical illustrations of how AI bias adversely affects healthcare delivery – not just at the point of care, but at every stage of medical AI development: training data, model development, publication and implementation.

"Bias in; bias out," said the study’s senior author, John Onofrey, assistant professor of radiology & biomedical imaging and of urology at Yale School of Medicine, in a press statement.

"Having worked in the machine learning/AI field for many years now, the idea that bias exists in algorithms is not surprising," he said. "However, listing all the potential ways bias can enter the AI learning process is incredible. This makes bias mitigation seem like a daunting task."

As the study notes, bias can crop up almost anywhere in the algorithm development pipeline.

It can occur in "data features and labels, model development and evaluation, deployment, and publication," researchers say. "Insufficient sample sizes for certain patient groups can result in suboptimal performance, algorithm underestimation, and clinically unmeaningful predictions. Missing patient findings can also produce biased model behavior, including capturable but nonrandomly missing data, such as diagnosis codes, and data that is not usually or not easily captured, such as social determinants of health."

Meanwhile, "expertly annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard care practices. Overreliance on performance metrics during model development may obscure bias and diminish a model’s clinical utility. When applied to data outside the training cohort, model performance can deteriorate from previous validation and can do so differentially across subgroups." 

And, of course, the manner in which clinical end users interact with AI models can introduce bias of its own.

Ultimately, "here AI models are "developed and published, and by whom, impacts the trajectories and priorities of future medical AI development," Yale researchers say.

And they note that any efforts to mitigate that bias – "collection of large and diverse data sets, statistical debiasing methods, thorough model evaluation, emphasis on model interpretability, and standardized bias reporting and transparency requirements" – must be implemented carefully, with a keen eye for how those guardrails will work to prevent adverse effects on patient care.
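One statistical debiasing method commonly cited in the machine learning literature is reweighing, which adjusts training-example weights so that group membership and outcome are statistically independent. The sketch below is a generic illustration with hypothetical column names – not a pipeline prescribed by the study:

```python
# A rough sketch of "reweighing" (Kamiran & Calders), one example of the
# statistical debiasing methods the researchers mention. Column names and
# usage are hypothetical; this is not the study's own pipeline.
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Return a sample weight for each row so that, after weighting,
    the group column and the label column are statistically independent."""
    weights = pd.Series(1.0, index=df.index)
    for g, p_g in df[group_col].value_counts(normalize=True).items():
        for y, p_y in df[label_col].value_counts(normalize=True).items():
            mask = (df[group_col] == g) & (df[label_col] == y)
            p_gy = mask.mean()  # observed joint probability
            if p_gy > 0:
                # expected joint probability under independence / observed
                weights[mask] = (p_g * p_y) / p_gy
    return weights

# Hypothetical usage: pass the weights to any model that accepts
# sample_weight, then re-check subgroup performance before clinical use.
# model.fit(X, y, sample_weight=reweigh(df, "group", "outcome"))
```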

"Prior to real-world implementation in clinical settings, rigorous validation through clinical trials is critical to demonstrate unbiased application," they said. "Addressing biases across model development stages is crucial for ensuring all patients benefit equitably from the future of medical AI."

But the report, "Bias in medical AI: Implications for clinical decision-making," offers some suggestions for mitigating that bias, toward the goal of improving health equity.

For instance, previous research has found that using race as a factor in estimating kidney function can sometimes lead to longer wait times for Black patients to get onto transplant waitlists. Yale researchers offer several recommendations to help future AI algorithms use more precise measures, such as ZIP code and other socioeconomic factors.
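A hypothetical sketch of what that recommendation could look like in code – with illustrative feature names and a made-up ZIP-code lookup table, not the study's own model – joins area-level socioeconomic measures onto patient records rather than encoding race as a model input:

```python
# Hypothetical illustration: feed a clinical risk model ZIP-code-linked
# socioeconomic measures (e.g., an area deprivation index) instead of a
# race-based adjustment. All data and feature names are made up.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical table mapping ZIP codes to area-level deprivation scores.
adi_by_zip = pd.DataFrame(
    {"zip": ["06510", "06511", "06519"], "area_deprivation_index": [32.0, 58.0, 81.0]}
)

patients = pd.DataFrame(
    {
        "zip": ["06511", "06519", "06510"],
        "age": [54, 61, 47],
        "serum_creatinine": [1.1, 1.4, 0.9],
        "outcome": [0, 1, 0],  # placeholder labels, for illustration only
    }
)

# Join socioeconomic context onto the patient record.
data = patients.merge(adi_by_zip, on="zip")
X = data[["age", "serum_creatinine", "area_deprivation_index"]]
y = data["outcome"]

model = LogisticRegression().fit(X, y)
```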

ON THE RECORD

"Greater capture and use of social determinants of health in medical AI models for clinical risk prediction will be paramount," said James L. Cross, a first-year medical student at Yale School of Medicine and the study’s first author, in a statement.

"Bias is a human problem," added Yale associate professor adjunct of radiology & biomedical imaging and study coauthor Dr. Michael Choma. "When we talk about ‘bias in AI,’we must remember that computers learn from us."

Mike Miliard is executive editor of Healthcare IT News.

Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.
