Key Highlights:
- Google’s Med-Gemini healthcare AI erroneously reported a nonexistent brain part—the “basilar ganglia”—in a 2024 research paper, blending “basal ganglia” and “basilar artery,” risking critical diagnostic confusion.
- The error went uncorrected in the research paper itself and was quietly fixed only in an associated blog post after neurologist Bryan Moore flagged it, raising transparency and trust concerns.
- Experts warn this AI hallucination highlights inherent risks of generative medical AI, emphasizing the urgent need for rigorous human oversight, clearer error reporting, and continuous validation in clinical AI deployment.
AI Hallucination: The “Basilar Ganglia” Error
Google’s Med-Gemini suite, designed to summarize health data and generate radiology reports, described an “old left basilar ganglia infarct” in a published 2024 paper, an anatomical impossibility: no structure called the “basilar ganglia” exists. The term conflates two distinct brain structures, the basal ganglia (involved in motor control and learning) and the basilar artery (a key blood vessel at the base of the brain). Google’s large author team overlooked the mistake; after a neurologist pointed it out, the accompanying blog post was silently corrected while the research paper was left unchanged.
Clinical and Industry Implications
Errors like this “typo,” however trivial they appear, can have serious consequences in medical contexts, risking misdiagnosis or inappropriate treatment. Health professionals caution that AI’s confident but incorrect assertions (“hallucinations”) undermine clinical trust and patient safety. Such errors expose a core limitation of current medical AI systems: they do not reliably flag uncertainty or “say I don’t know,” which amplifies the risk when clinicians rely on their outputs uncritically.
Transparency and Accountability Issues
Google characterized the error as a common transcription mistake learned from training data, yet the absence of any public acknowledgment or correction in the original paper has fueled expert criticism. The incident underscores the need for transparent error reporting and robust review protocols before AI diagnostic outputs are introduced into clinical practice, especially given the high stakes of healthcare decisions.
Towards Safer Medical AI: Oversight and Validation
The Med-Gemini case has amplified industry calls to embed thorough human oversight, continuous model validation, and error auditing in AI workflows. Experts advocate that AI should augment, not replace, clinical judgment, with clear safeguards against AI-generated misinformation. As medical AI systems become more widely adopted, addressing their hallucination risks is critical to delivering safe, reliable benefits for patient care.
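To make the idea of error auditing concrete, the sketch below shows one possible pre-release check: screening generated report text against a small curated vocabulary of approved anatomical terms and flagging near-miss phrases, such as “basilar ganglia,” for human review. The vocabulary, function name, and sample report are hypothetical illustrations, not part of Med-Gemini or any published Google workflow.

```python
# Hypothetical audit step: flag anatomy-like phrases in a generated report
# that resemble, but do not match, terms in an approved vocabulary.
# All names and data here are illustrative; this is not Google tooling.

import re

# A tiny stand-in for a curated anatomical vocabulary.
APPROVED_TERMS = {
    "basal ganglia",
    "basilar artery",
}

def flag_unknown_terms(report_text: str, approved: set) -> list:
    """Return two-word phrases that share a word with an approved term
    but are not themselves approved, so a human can review them."""
    words = re.findall(r"[a-z]+", report_text.lower())
    bigrams = {" ".join(pair) for pair in zip(words, words[1:])}
    suspicious = []
    for phrase in sorted(bigrams - approved):
        first, last = phrase.split()
        # Near-miss: matches the start or end of an approved term.
        if any(first == t.split()[0] or last == t.split()[-1] for t in approved):
            suspicious.append(phrase)
    return suspicious

if __name__ == "__main__":
    report = "Findings consistent with an old left basilar ganglia infarct."
    for phrase in flag_unknown_terms(report, APPROVED_TERMS):
        print(f"Flag for human review: '{phrase}' is not an approved anatomical term.")
```

A real deployment would need far richer terminology resources and clinical review, but even a simple check of this kind suggests how routine, automated auditing could surface a phrase like “basilar ganglia” before it reaches a clinician.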
This incident serves as a pivotal reminder that despite AI’s transformative potential in healthcare, careful, transparent, and responsible development and deployment remain paramount.