
Stanford's AMIE AI Wins 47% of Cardiology Cases Over Top Doctors

Gemini-powered AMIE roughly halved major clinical errors, and blinded reviewers preferred AI-assisted answers 47% vs. 33% in an RCT. Healthcare AI just went clinical.

What if an AI assistant could help your cardiologist catch genetic cardiomyopathies hiding in ECGs? A new RCT says it already can.

Stanford and Google ran a randomized trial: nine cardiologists worked through 107 complex cases with and without AMIE (built on Gemini 2.0 Flash). AMIE ingested ECGs, echocardiograms, and cardiac MRIs to draft diagnoses and management plans. Blinded subspecialists preferred the AI-assisted responses 46.7% of the time vs. 32.7% for unaided (21% ties), and major errors fell from 13.1% to roughly half that.[2]
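
For a rough sanity check on those numbers, here's a minimal Python sketch that maps the reported percentages back onto the 107 cases and runs a simple sign test on the non-tied preferences. The rounding and the choice of test are my assumptions, not the study's actual analysis.

```python
# Back-of-the-envelope check of the reported preference split
# (hypothetical re-derivation; the study's own statistics may differ).
from scipy.stats import binomtest

total_cases = 107                               # complex cases in the RCT
pref_assisted = round(0.467 * total_cases)      # ~50 cases preferred AI-assisted
pref_unaided = round(0.327 * total_cases)       # ~35 cases preferred unaided
ties = total_cases - pref_assisted - pref_unaided  # ~22 ties

# Simple sign test on the decisive (non-tied) cases: is the 46.7% vs 32.7%
# split distinguishable from a coin flip?
decisive = pref_assisted + pref_unaided
result = binomtest(pref_assisted, decisive, p=0.5, alternative="greater")

print(f"assisted={pref_assisted}, unaided={pref_unaided}, ties={ties}")
print(f"one-sided sign-test p-value: {result.pvalue:.3f}")
```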

This matters for med-tech devs: it's evidence that LLMs working under expert supervision can lift professionals in high-stakes domains. It builds on OpenScholar's research wins: specialization beats generalization. Expect more wet-lab/dry-lab integrations like Nvidia and Lilly's $1B setup.[2]

AMIE leans on Gemini's multimodal edge over text-only rivals. Against generalists like GPT and Claude, domain tuning shines, echoing DeepSeek's clinical wins.[2][1] Leaderboards can be fragile, but a gold-standard RCT is harder to argue with.[3]

AMIE isn't public yet; watch Google Health. Want something similar for your vertical? Start from an open alternative like DeepSeek on Hugging Face and fine-tune it (see the sketch below). Clinical AI: tool or takeover?
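
If you do go the open-model route, a minimal starting point might look like this. The model ID and prompt are illustrative assumptions (the article doesn't name a specific checkpoint); swap in whatever open weights and clinical data fit your vertical before fine-tuning.

```python
# Hypothetical starting point for a domain-tuned clinical assistant:
# load an open checkpoint from Hugging Face and try a prompt before
# investing in fine-tuning. The model ID below is an example, not
# something the article specifies.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V2-Lite-Chat"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Summarize ECG findings suggestive of hypertrophic cardiomyopathy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```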

Source: Nathan Benaich Substack

