Study Shows Chatbots Mislead on Health

A BMJ Open study found 50% of medical information from chatbots was problematic.
20% of responses were deemed highly problematic.
ChatGPT, Gemini, Meta, Grok, and DeepSeek were tested.
Grok had 58% problematic responses, ChatGPT 52%, Meta 50%.
Chatbots hallucinate due to biased training data and user bias.
They lack clinical judgment and are not licensed to give health advice.
Previous studies found only 32% of chatbot citations were accurate.
Experts call for oversight to prevent misinformation.
200M people use ChatGPT weekly for health questions.
Developers must reevaluate how chatbots communicate health information.
Copyright © 2026 Minimalist News. All Rights Reserved.