Microsoft And Federal Agencies Launch Nonprofit Supergroup To Wrangle Health AI’s Wild West

Brian Anderson, executive director, Coalition for Health AI.

As Microsoft’s chief scientific officer, Eric Horvitz spends a lot of time thinking about how to balance the risks and benefits of new technologies. Explosive advancements in artificial intelligence have made this even more imperative, especially in life-or-death sectors like healthcare. “The stakes are so high,” Horvitz, who is also a doctor, told Forbes. “We have this combination of incredible excitement and then also mature caution.”

That’s why Microsoft has joined a nonprofit supergroup composed of private companies and public entities that aims to figure out how AI can be deployed in healthcare in a way that benefits and protects patients.

AI represents a tremendous opportunity to help patients get better care faster, but there’s also significant risk if an AI model makes up information or suggests wrong answers because it hasn’t been trained on patients of different genders, ages, races or ethnicities. The problem is that the current regulatory structure is piecemeal: electronic health records companies follow different rules from medical device makers, AI tools developed in-house at hospitals sit in a gray area, and insurers fall under yet another agency.

But the one thing all the different players agree on is that there needs to be a better structure around standardizing, testing, validating and tracking the use of AI in healthcare — and that’s where the Coalition for Health AI (or CHAI for short) comes in.

On Monday, the newly incorporated nonprofit announced its board of directors, which includes Horvitz, and representatives from hospitals, startups, academia, venture capital, patient groups and the federal government. “We can’t over regulate in some ways and suppress innovation. We have to also protect patients and inform physicians,” said Horvitz. “CHAI will help with a balanced path forward.”

CHAI’s goal is to be something of a standards organization to certify health AI tools: in its first year, the group plans to begin establishing standards, testing metrics, a network of health AI assurance labs and a national registry of validated health AI tools.

“If you’re going to have credibility in AI, what you need is transparency,” John Halamka, CHAI’s board chair and president of the Mayo Clinic Platform, told Forbes. “You need the notion of repeatability. You’re going to get a good answer today, a good answer tomorrow. It’s got to be consistent.”

The group, which started as an all-volunteer organization in 2021, felt a sense of urgency to incorporate following President Biden’s AI executive order in October 2023, which specified the crucial need for safety in healthcare applications. It registered as a nonprofit membership organization known as a 501(c)(6) in January 2024. Mayo Clinic, Stanford Health Care, Johns Hopkins and Duke Health split the initial legal fees, which totaled around $100,000, Halamka told Forbes.

More than 1,300 organizations have joined CHAI since 2021, and the group will implement a membership dues structure starting this year. Halamka estimates the year-one budget to be around $1 million, which will be split among around two dozen founding member organizations.

Halamka said CHAI does “not plan to be a lobbying organization.” The goal is to bring industry and government together to create best practices for testing and deploying health AI models, but whatever framework CHAI establishes will ultimately be voluntary. Drafting and finalizing federal regulations often takes years, so CHAI is essentially filling the void. “As we look to 2025, this could be something more formally instantiated and regulated,” said Halamka. “But I don’t think 2024 is going to have a lot of regulation around this.”
CHAI also announced partnerships with the National Health Council, which represents 160 million patients, and HL7, a national health information technology standards organization. Microsoft represents Big Tech, while Bessemer vice president Morgan Cheatham represents venture capital and the startup ecosystem. There are two federal liaisons on the CHAI board: Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, and Micky Tripathi, the national coordinator for health information technology in the Department of Health and Human Services.

Both Tazbaz and Tripathi signed on to the idea of creating a national network of health assurance labs to test and validate health AI, along with a national registry, in an article published in JAMA at the end of 2023.

“We need to have a consistent vocabulary of what constitutes responsible AI,” Tripathi told Forbes. He theorized about the two most extreme directions: the first a world in which the government regulates everything through a central database, and the second a total laissez faire model where industry does whatever it wants. “The right point is somewhere in the middle. You want to be able to have public-private consensus around this stuff.”

To that end, CHAI also announced a government advisory board with several officials from the Biden Administration. That includes Jonathan Blum, principal deputy administrator of the Centers for Medicare and Medicaid Services; Gil Alterovitz, chief AI officer of the Veterans Health Administration; and Susan Coller Monarez, deputy director of the Advanced Research Projects Agency for Health.

Brian Anderson, former chief digital health physician at Mitre, a nonprofit company that consults for the government on research and development projects, will lead CHAI’s day-to-day operations as executive director. “We’re intentionally trying to make this an effort that can be as inclusive as possible,” Anderson told Forbes. “Anybody that wants to participate will have the ability to contribute their voice to it. This is not going to be a pay-to-play sort of effort.”

Starting in late March, CHAI will create working groups to develop standards and testing and evaluation frameworks, which Anderson hopes will be published by early fall. Simultaneously, health systems will establish assurance labs in the third and fourth quarters. Some of these labs already exist, such as at Mayo Clinic, but each will be asked to get the equivalent of a CHAI “seal of approval” through a certification process. “I’m anticipating north of 30 or so health systems that will be part of this broader network of networks,” said Anderson. Participation is voluntary, and CHAI has no enforcement power, but Anderson said he has already heard from many healthcare organizations interested in spinning up these labs.

The idea is that health AI developers will bring their models to these labs to be validated as fair and accurate across different types of patients. The hope is that developers will want to get their models validated in several different health systems across the country, which could help ensure the models aren’t biased. An AI model that works on white patients at the Mayo Clinic in suburban Minnesota may not work the same way on Black patients at Duke Health in rural North Carolina.

All the information related to the testing of models in CHAI labs will be available in a public registry so anyone can understand how the models perform, Anderson said: “All that will lead to greater trustworthiness and dependability in AI.”

Katie Jennings, Forbes Staff
Source: Forbes – Innovation
