
Man Hospitalized After Following AI Advice on Salt Substitute


A 60-year-old man ended up in the hospital after seeking advice from ChatGPT on how to replace table salt. The case, detailed in a report published in the *Annals of Internal Medicine*, highlights the potential dangers of relying on artificial intelligence for medical guidance. The man experienced hallucinations and paranoia, leading to an emergency room visit at an unspecified hospital.

Upon admission, the patient expressed concern that his neighbor might be poisoning him. Within the first 24 hours, his condition worsened, with intensifying paranoia and visual and auditory hallucinations that prompted psychiatric care. Once his symptoms had stabilized, the man disclosed that he had been researching the health risks associated with sodium chloride, commonly known as table salt.

Rather than simply reducing his sodium intake, he wanted to eliminate chloride from his diet entirely. He turned to ChatGPT for advice on suitable substitutes, and the AI recommended sodium bromide, a chemical typically used in water treatment and film photography that is not intended for human consumption.

For the next three months, the man replaced sodium chloride with sodium bromide, which he acquired online based on his interactions with the AI. The case report notes that while ChatGPT indicated that chloride could be substituted with bromide, it failed to provide a health warning or inquire about the man’s intentions, which a medical professional would typically do.

Bromide ingestion can lead to a condition known as bromism, characterized by symptoms such as hallucinations, fatigue, and insomnia. The man's blood tests revealed a bromide level of 1,700 mg/L, more than 200 times the upper limit of the normal range of 0.9 to 7.3 mg/L. Bromism was rarely reported after the U.S. Food and Drug Administration (FDA) restricted bromide use in the 1980s, but it has resurfaced with the sale of unregulated dietary supplements containing bromide.

Risks of AI in Medical Advice

The authors of the case report, doctors from the University of Washington, including Audrey Eichenberger, Stephen Thielke, and Adam Van Buskirk, caution against using AI tools for medical queries. They emphasize that AI can perpetuate misinformation and lacks the contextual understanding that healthcare professionals possess.

“AI systems can generate scientific inaccuracies and lack the ability to critically discuss results,” the authors stated. They also noted that while AI can bridge the gap between scientific knowledge and the public, it poses risks when users receive information out of context.

In response to concerns raised by this incident, OpenAI, the company behind ChatGPT, has reiterated that its AI is not meant for health advice. The company has implemented measures to guide users toward consulting healthcare professionals. “Our terms state that ChatGPT is not intended for use in the treatment of any health condition,” OpenAI stated, adding that it is continuing efforts to handle health-related queries more safely.

The case serves as a reminder of the potential pitfalls associated with using AI for health information. As reliance on such technology grows, it becomes increasingly important for healthcare providers to be aware of where their patients are obtaining health-related advice.

The man received treatment in the hospital for three weeks and was stable during a follow-up visit two weeks after discharge. While cases of bromism may remain rare, the availability of bromide through online sources highlights the need for caution and informed decision-making regarding health and dietary changes.
