August 1, 2025

ChatGPT Nearly Killed This Patient With Bad Advice

ChatGPT sent a man to the hospital.

The 60-year-old asked for a salt substitute recommendation. The AI suggested sodium bromide without any safety warnings. Three months later, he was in a psychiatric ward with severe bromide poisoning.

His bromide level hit 1,700 mg/L. That's more than 200 times the upper end of the normal reference range.

The symptoms were terrifying. Paranoia. Hallucinations. Complete psychosis. He needed three weeks of inpatient psychiatric care to recover from what doctors now call the first documented case of AI-linked bromism.

Here's what makes this case so dangerous.

The AI Reasoning Gap

Dr. Harvey Castro, an emergency medicine physician, explains the core problem: "Large language models generate text by predicting the most statistically likely sequence of words, not by fact-checking."

The AI essentially reasoned like this: "You want a salt alternative? Sodium bromide shows up in chemistry texts as a replacement for sodium chloride, so it scores highest here."

No consideration of toxicity. No medical context. No safety evaluation.

Just pattern matching that nearly killed someone.
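
If you're wondering what "scores highest" actually means, here's a deliberately oversimplified sketch in Python. The candidate phrases and the numbers are invented for illustration; real models rank tens of thousands of possible words using learned statistics, not a three-item table. But the key point holds: the selection step only ranks text by likelihood. There is no line anywhere that asks, "Is this safe to eat?"

```python
# Toy illustration (not ChatGPT's actual code) of "pick whatever scores highest."
# Hypothetical likelihood scores stand in for a model's learned word statistics.
candidates = {
    "potassium chloride": 0.41,
    "sodium bromide": 0.47,   # ranks highest in chemistry-flavored text
    "sea salt": 0.12,
}

def pick_next(scores):
    # Greedy selection: return whichever candidate scores highest.
    # Notice there is no toxicity check, no medical context, no safety filter.
    return max(scores, key=scores.get)

print(pick_next(candidates))  # -> "sodium bromide"
```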

The Vanishing Safety Net

The safety warnings are disappearing fast. Research shows AI health disclaimers dropped from 26% of responses in 2022 to fewer than 1% by 2025.

Meanwhile, 39% of Americans now trust AI tools like ChatGPT for healthcare decisions. Even more concerning: 56% can't distinguish true from false AI-generated health information.

That includes half of actual AI users.

What This Means for Chiropractic Patients

In my practice, I see patients every day who've tried AI-recommended exercises or treatments before seeking professional care. The difference now is that AI's authoritative tone makes dangerous advice sound credible.

Just last week, a patient showed me an AI-generated spine alignment routine that could have worsened their disc herniation. The AI suggested aggressive twisting movements without knowing their specific condition or MRI results.

Sodium bromide isn't a new danger, either. Bromide sedatives were once so widely used that bromide poisoning accounted for an estimated 5-10% of psychiatric admissions in the early 20th century, before the FDA systematically removed bromide from consumer products between 1975 and 1989.

A chiropractor would never recommend spinal manipulations without proper examination and imaging. The contextual knowledge gap between AI and hands-on healthcare is massive.

The Chiropractic Reality Check

AI can suggest stretches, but it can't feel muscle tension. It can recommend exercises, but it can't assess your range of motion or detect compensation patterns.

Every spine is different. What helps one patient's lower back pain might aggravate another's sciatica. That's why proper diagnosis and individualized treatment matter.

Before trying any AI-generated spinal health advice, get a professional assessment. Your spine deserves expertise that understands biomechanics, not just pattern matching.

That's not being old-fashioned. That's protecting your long-term mobility.