Report: Google’s Medical AI Hallucinated a Nonexistent Part of the Brain

Was the Med-Gemini error a simple typo? Or something more concerning?

[Image: AI is already being used in healthcare. Are doctors ready? (Getty)]

In 2024, Google announced a new iteration of its AI tools called Med-Gemini. As the name suggests, this was a version of its Gemini AI software designed to operate in a medical context. Introducing the new models, Google’s Greg Corrado and Joëlle Barral wrote that Med-Gemini “demonstrates powerful multimodal capabilities that have the potential to assist in clinician, researcher, and patient workflows.”

That would be the good news. The bad news? According to reporting by Hayden Field at The Verge, one of the initial research papers in which Google demonstrated Med-Gemini’s capabilities featured a reference to the “basilar ganglia,” a part of the brain that does not exist. As Field points out, the invented term conflates the basal ganglia, a brain region, and the basilar artery, an artery at the base of the brain.

Neurologist Bryan Moore told The Verge that he spotted the error and notified Google, which then edited the text of its original announcement without any initial public statement. Google subsequently added a caption to the blog post clarifying, “Note that ‘basilar’ is a common mis-transcription of ‘basal’ that Med-Gemini has learned from the training data, though the meaning of the report is unchanged.” (As Field notes, however, Google’s research paper “still contains the error with no updates or acknowledgement.”)


Much of the significance of this finding depends on how you classify the “basilar ganglia” error. If it’s nothing more than a typo, as Google suggests, there is less cause for concern. If it’s a hallucination, the problem is more serious, as one of the experts The Verge spoke with confirmed.

The Verge’s report suggests that medical AI is not immune to the kind of hallucinations found elsewhere in the industry. Despite these concerns, AI does seem to be gaining ground as a trusted source of medical information. A recent survey from the Annenberg Public Policy Center found that 63% of respondents considered AI-generated health information to be “somewhat reliable” or “very reliable.”

