For nearly 20 years, the worldwide healthcare safety organization ECRI has published its list of Top 10 Health Technology Hazards. This year’s edition is topped by a fast-changing technology with huge promise for healthcare, but no shortage of downside risk: artificial intelligence.
WHY IT MATTERS
"While AI has the potential to improve efficiency and outcomes, it poses significant risks to patients if not properly assessed and managed," said ECRI in announcing the 2025 report.
"AI has expanded from its early applications in medical imaging to influence virtually every area of healthcare including diagnosis, documentation and appointment scheduling. Even AI applications involving ancillary systems, which are not regulated as medical devices, can have a profound impact on patient care," according to ECRI.
Of particular concern, of course, are AI hallucinations and other misleading outputs from improperly calibrated algorithms. Machine learning model output – especially when those models are trained on biased data – can vary widely across patient populations, researchers note, posing risks to health equity and patient safety for underrepresented or underserved communities.
ECRI’s Top 10 Health Technology Hazards for 2025:
1. Risks with AI-enabled health technologies
2. Unmet technology support needs for home care patients
3. Vulnerable technology vendors and cybersecurity threats
4. Substandard or fraudulent medical devices and supplies
5. Fire risk from supplemental oxygen
6. Dangerously low default alarm limits on anesthesia units
7. Mishandled temporary holds on medication orders
8. Poorly managed infusion lines
9. Harmful medical adhesive products
10. Incomplete investigations of infusion system incidents
ECRI defines a health tech hazard as any "device or system fault, design feature or method of use that might, under certain circumstances, place patients or users at risk."
The group takes what it calls a "Total Systems Approach to Safety," aiming to help healthcare professionals, administrators, device manufacturers, policymakers, researchers, and patients themselves reduce incidents of preventable harm during care delivery.
Its teams focus on various aspects – human factors engineering, device safety, medication safety, infection control – that both influence and stem from technology deployments in healthcare settings, and work to recommend system-wide safety solutions.
ECRI notes that the topics and technologies listed each year "are not necessarily the most frequently reported problems or the ones associated with the most severe consequences," although those factors are a core consideration.
Instead, the annual report reflects its judgment about "which risks should be given attention now to help care providers, device manufacturers, and others prioritize their patient safety efforts."
The full Top 10 Health Technology Hazards report, available to ECRI members, offers detailed steps health systems, vendors and other IT leaders can take to reduce risks to patient safety. An executive brief is also available from ECRI.
ON THE RECORD
"The promise of artificial intelligence’s capabilities must not distract us from its risks or its ability to harm patients and providers," said Dr. Marcus Schabacker, president and CEO of ECRI, in a statement. "Balancing innovation in AI with privacy and safety will be one of the most difficult, and most defining, endeavors of modern medicine.
"AI is only as good as the data it is given and the guardrails that govern its use," he added. "Healthcare stakeholders at all levels must think critically about the integration of AI, as they would with any new technology."
Mike Miliard is executive editor of Healthcare IT News.
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.