AI CEO reveals why AI can be dangerous for health advice



Perhaps AI isn’t worth its salt when it comes to health advice.

A stunning medical case report published last month revealed that a 60-year-old man with no history of psychiatric or health conditions was hospitalized with paranoid psychosis and bromide poisoning after following ChatGPT’s advice.


The unidentified man wanted to cut sodium chloride (table salt) from his diet. After consulting the AI chatbot, he spent three months substituting sodium bromide, a toxic compound. Sodium bromide can stand in for sodium chloride in cleaning and sanitation applications, but it is not safe for human consumption.

Andy Kurtzig, CEO of the AI-powered search engine Pearl.com, reveals all the ways that AI can go wrong in providing medical advice to a user. Courtesy of Pearl.com

“[It was] exactly the kind of error a licensed healthcare provider’s oversight would have prevented,” Andy Kurtzig, CEO of the AI-powered search engine Pearl.com, told The Post. “[That] case shows just how dangerous AI health advice can be.”

In a recent Pearl.com survey, 37% of respondents reported that their trust in doctors has declined over the past year.

Suspicion of doctors and hospitals isn’t new — but it has intensified in recent years thanks to conflicting pandemic guidance, concerns over financial motives, poor quality of care and discrimination.

Skeptics are turning to AI, with 23% believing AI’s medical advice over a doctor.

That worries Kurtzig. The AI CEO believes AI can be useful — but it does not and cannot substitute for the judgment, ethical accountability or lived experience of medical professionals.

Mistrust in the healthcare community has increased significantly since the start of the COVID-19 pandemic. Valeria Venezia – stock.adobe.com

“Keeping humans in the loop isn’t optional — it’s the safeguard that protects lives,” he said.

Indeed, 22% of the Pearl.com survey takers admitted they followed health guidance later proven wrong.

There are several ways that AI can go awry.

A Mount Sinai study from August found that popular AI chatbots are vulnerable to repeating and even expanding on false medical information, a phenomenon known as “hallucination.”

“Our internal studies reveal that 70% of AI companies include a disclaimer to consult a doctor because they know how common medical hallucinations are,” Kurtzig said.

“At the same time, 29% of users rarely double-check the advice given by AI,” he continued. “That gap kills trust, and it could cost lives.”

Kurtzig noted that AI could misinterpret symptoms or miss signs of a serious condition, leading to unnecessary alarm or a false sense of reassurance. Either way, proper care could be delayed.

In a recent Pearl.com survey, 23% of respondents reported believing AI’s medical advice over a doctor. Richman Photo – stock.adobe.com

“AI also carries bias,” Kurtzig said.

“Studies show it describes men’s symptoms in more severe terms while downplaying women’s, exactly the kind of disparity that has kept women waiting years for diagnoses of endometriosis or PCOS,” he added. “Instead of fixing the gap, AI risks hard-wiring it in.”

And finally, Kurtzig said AI can be “downright dangerous” when it comes to mental health.

Experts warn that using AI for mental health support poses significant risks, especially for vulnerable people.

AI has been shown in some situations to provide harmful responses and reinforce unhealthy thoughts. That’s why it’s important to use AI thoughtfully.

Pearl.com (shown here) has human experts verify AI-generated medical responses.

Kurtzig suggests having AI help frame questions about symptoms, research and widespread wellness trends ahead of your next appointment, and leaving diagnosis and treatment decisions to the doctor.

He also highlighted his own service, Pearl.com, which has human experts verify AI-generated medical responses.

“With 30% of Americans reporting they cannot reach emergency medical services within a 15-minute drive from where they live,” Kurtzig said, “this is a great way to make professional medical expertise more accessible without the risk.”

When The Post asked Pearl.com if sodium bromide could replace sodium chloride in someone’s diet, the response was: “I absolutely would not recommend replacing sodium chloride (table salt) with sodium bromide in your diet. This would be dangerous for several important reasons…”

