Why you should never use ChatGPT for health advice
Key takeaways
- AI-generated health advice can be dangerous and should never replace the guidance of a licensed health professional.
- AI chatbots can provide outdated, misleading, or overly generic health information.
- Experts recommend using AI tools only for basic general knowledge and discussing any AI-sourced health advice with a doctor.
A 60-year-old man replaced table salt with sodium bromide after consulting ChatGPT, a switch that led to bromide toxicity and a three-week psychiatric hospitalization.
The case highlights the potential dangers of relying on AI chatbots for health advice. Yet a recent survey found that most Americans consider AI-generated health information "somewhat reliable." Experts warn that AI tools should never replace professional medical care.
AI chatbots do not have your medical records
AI chatbots don't have access to your personal health records, so they can't give reliable advice about new symptoms, an existing condition, or whether you need emergency care.
A chatbot's health advice is also highly generic, said Margaret Lozovatsky, MD, vice president of digital health innovations at the American Medical Association.
For now, she said, the best use of AI is gathering basic information that helps you ask your doctor questions or explains medical terms you don't know.
AI information can be outdated or inaccurate
Generative AI relies on the data it was trained on, which may not reflect the most recent medical guidelines. For example, the Centers for Disease Control and Prevention (CDC) only recently recommended the updated flu vaccine for everyone 6 months and older, and some chatbots may not be up to date.
Even when an AI chatbot is wrong, it can sound confident and convincing. AI systems may stitch information together to fill in gaps, producing false or misleading responses.
A study published in the journal Nutrients found that popular chatbots such as Gemini, Microsoft Copilot, and ChatGPT can generate decent weight-loss meal plans, but they fail to balance macronutrients, including carbohydrates, proteins, fats, and fatty acids.
"I would be extremely reluctant to tell a patient to do something based on ChatGPT," said Ainsley MacLean, MD, an AI health consultant and former chief AI officer for the Mid-Atlantic Kaiser Permanente Medical Group.
Is there a safe way to use AI health tools?
MacLean noted that generative AI bots are not currently covered by health privacy protections such as HIPAA. "Don't enter your personal health information," she said. "It could end up anywhere."
When you browse AI summaries on Google, it's best to check whether the information comes from a well-known scientific journal or medical organization. Also check the date to see when the information was last updated.
Lozovatsky said she hopes people will still visit their doctors when they have new symptoms and be upfront about any information they found through a chatbot and any actions they took as a result.
She added that it's entirely reasonable to share AI-generated information with your doctor and ask: "Is this correct? Does it apply to my case? And if not, why not?" You can also ask your doctor whether there's an AI health tool they trust.