
A 60-year-old man's attempt to eat healthier by cutting salt from his diet took a dangerous turn after he followed advice from ChatGPT. His decision eventually led to a hospital stay and a diagnosis of a rare and potentially life-threatening condition called bromism.
The incident has raised new concerns about relying on AI tools such as ChatGPT for medical guidance, especially without consulting healthcare professionals. The case was recently described in detail in a report published in a journal of the American College of Physicians.
According to the report, the man asked ChatGPT how to eliminate sodium chloride (commonly known as table salt) from his diet. In response, the chatbot suggested sodium bromide as a replacement, a compound that was commonly used in medicines in the early 20th century but is now known to be toxic in large quantities. He reportedly consumed sodium bromide, purchased online, for three months based on what he read from the AI chatbot.
The man, who had no previous history of psychiatric or physical illness, was admitted to the hospital after experiencing hallucinations, paranoia, and severe thirst. During his first 24 hours, he showed signs of confusion and refused water, suspecting it was dangerous.
Doctors soon diagnosed him with bromide toxicity, a condition that is now very rare but was once common when bromide was used to treat anxiety, insomnia, and other ailments. Symptoms include neurological disturbances and skin problems such as acne and red spots known as cherry angiomas, all of which the man exhibited.
“Inspired by his past studies in nutrition, he decided to carry out a personal experiment to remove chloride from his diet,” the report said. He told doctors he had read on ChatGPT that bromide could be used in place of chloride, although the suggestion likely referred to industrial rather than dietary use.
After three weeks of treatment involving fluids and electrolyte rebalancing, the man was stabilized and discharged from the hospital.
The authors of the case study warned of the growing risk of misinformation from AI: “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”
OpenAI, the developer of ChatGPT, acknowledges this in its terms of use, which state: “You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.”
The terms further clarify: “Our services are not intended for use in the diagnosis or treatment of any health condition.”
The alarming case adds to the ongoing global conversation about the limitations of, and responsibilities around, AI-generated advice, especially in matters concerning physical and mental health.