08/13/2025 / By Cassie B.
Imagine trusting an AI chatbot to guide your diet, only to end up hospitalized, hallucinating, and strapped to a psychiatric bed. That’s exactly what happened to a 60-year-old man who blindly followed ChatGPT’s reckless advice, swapping table salt for a toxic industrial chemical. His harrowing ordeal, documented in Annals of Internal Medicine, exposes the dangers of relying on artificial intelligence for health decisions, especially when corporations like OpenAI refuse to take full responsibility for their flawed algorithms.
The man, unnamed in medical reports, was no stranger to nutrition. After reading about the supposed dangers of chloride in table salt, he turned to ChatGPT for alternatives. The AI casually suggested sodium bromide, a compound once used in sedatives but now restricted due to its neurotoxicity. Without a second thought, the man bought the chemical online and consumed it daily for three months.
By the time he staggered into the hospital, he was convinced his neighbor was poisoning him. Paranoia consumed him. He refused water, hallucinated voices, and even tried to escape medical care. Doctors diagnosed him with bromism, a rare poisoning syndrome that ravaged his nervous system. “He had no prior psychiatric history,” researchers noted, yet his symptoms mirrored severe psychosis.
Bromism isn’t new. In the early 1900s, bromide-laced sedatives flooded pharmacies, at one point accounting for nearly 10% of psychiatric admissions. The FDA cracked down in the 1970s, but this case proves that corporate negligence, whether it comes from Big Pharma or Big Tech, still puts lives at risk.
When the man’s doctors tested ChatGPT 3.5, they got the same dangerous reply: “You can often substitute [chloride] with other halide ions such as sodium bromide.” No warnings. No context. Just a digital shrug. As the study authors wrote, “It is highly unlikely that a medical expert would have mentioned sodium bromide” as a salt alternative.
OpenAI’s response? A robotic deflection to its terms of service, which vaguely state ChatGPT isn’t for medical use. Yet the company’s CEO, Sam Altman, boasts that GPT-5 is “the best model ever for health.” Tell that to the man who lost weeks of his life to AI-induced delirium.
The case exposes a disturbing truth: Tech giants prioritize profit over safety, releasing half-baked AI tools that hallucinate answers, sometimes with lethal consequences. As the researchers warned, “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”
This isn’t just about one man’s mistake. It’s about the erosion of personal responsibility in an age where algorithms replace critical thinking. ChatGPT isn’t a doctor; it’s a glorified autocomplete tool. Yet millions trust it blindly, lured by Silicon Valley’s hype.
The solution? People need to use more common sense. AI can be a useful research aid, but it should never override professional medical advice. The victim eventually recovered, but his story is a warning: In a world drowning in AI propaganda, your health is your responsibility. Don’t let a chatbot steal it.