A 45-year-old man in New Delhi was recently hospitalised in a critical condition after self-administering HIV post-exposure prophylaxis (PEP) medication on the advice of an AI chatbot. He developed Stevens-Johnson Syndrome, a severe, potentially fatal drug reaction characterised by painful rashes, blistering, and peeling skin. The man had purchased a full 28-day course of the medication over the counter and taken it for seven days following a high-risk sexual encounter. He was later treated at the Dr. Ram Manohar Lohia Hospital.
Doctors at the hospital warned that while an AI chatbot can offer general health information, it cannot assess individual medical history, diagnose conditions, or prescribe medication.
A case such as this serves as a caution against using AI for medical diagnosis and treatment decisions. Taking medication without a prescription and supervision can lead to severe side effects, toxicity, and development of drug resistance, doctors said, adding that HIV prevention medicines, including PrEP or PEP, must be taken strictly under medical supervision, starting within 72 hours of exposure after proper testing.
The problem is not new in the era of smartphones and the Internet, said Dr. Jitender Nagpal, deputy medical director, Sitaram Bhartia Institute of Science and Research. People often gather information on their own to understand an illness or, in some cases, try to treat themselves with medication.
“But now, ChatGPT has increased this problem. People have started seeking its help even in complicated cases, which can be quite dangerous. In the case of over-the-counter medication, the risk may not be very high, but in serious cases, it leads to delays and can even be life-threatening. We sometimes come across patients who start considering ChatGPT as their first doctor,’’ Dr. Nagpal said.
Patients often approach doctors when their condition worsens significantly, doctors say.
“What patients don’t understand is that ChatGPT responds according to the prompt given to it. It cannot necessarily gauge the seriousness of a patient’s illness. It does not know the age, gender, or previous medical history of the patient. It does not take a detailed medical history by asking follow-up questions; it simply provides answers. So, how can accurate information be ensured? It cannot physically examine the patient. For example, if a patient describes certain symptoms, is the body showing signs that point in that direction or not? In case of doubt, it cannot order tests. Whereas a doctor, if they have any doubt, discusses the situation with the patient, shares his/her impression, and advises investigations so that the disease can be properly diagnosed and treated correctly,’’ Dr. Nagpal said.
The misuse of AI is not limited to patients using it for self-diagnosis and prescription. The World Health Organization (WHO) has previously called for caution in the use of AI-generated large language model (LLM) tools, to protect human well-being, safety, and autonomy, and to preserve public health.
The data used to train AI may be biased and may generate misleading or inaccurate information that poses risks to health, equity, and inclusiveness, the WHO warned. It also cautioned that LLMs generate responses that can appear authoritative and plausible to an end user, yet may be completely incorrect or contain serious errors, especially in health-related responses.
LLMs may be trained on data for which consent was never provided for such use, and they may not protect sensitive data, including health data, that a user provides to an application to generate a response, the WHO said.
LLMs can also be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content, the WHO said. While committed to harnessing new technologies, including AI and digital health, to improve human health, the WHO recommends that policymakers ensure patient safety and protection while technology firms work to commercialise LLMs.