Study of ChatGPT for Medical Advice Finds Inaccurate Responses

ChatGPT for Medicine

OpenAI’s ChatGPT, an artificial intelligence chatbot trained on internet data, faced scrutiny in a study by pharmacists who found that nearly three-quarters of its responses to drug-related questions were incomplete or inaccurate.

The research, presented at the American Society of Health-System Pharmacists (ASHP) Midyear Clinical Meeting, involved posing real-world questions to ChatGPT that pharmacists at Long Island University’s College of Pharmacy drug information service had answered between 2022 and 2023.

Out of 45 initial questions, 39 were used for evaluation after excluding six due to insufficient literature for evidence-based responses.

Pharmacists reviewed and established correct answers for these questions, serving as a benchmark against which ChatGPT’s responses were evaluated.

The findings revealed that ChatGPT provided satisfactory answers to only 10 of the 39 questions.

Among the remaining 29 questions, ChatGPT’s responses did not directly address the question (11 cases), were inaccurate (10 cases), or were incomplete (12 cases), with some responses falling into more than one category.


Furthermore, ChatGPT supplied references in only eight of its answers, and in every case the cited references turned out not to exist, according to the study.

Sara Grossman, PharmD, lead author of the study and associate professor of pharmacy practice at Long Island University, warned: “Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information.”

She emphasized the importance of verifying information from reliable sources when using AI tools like ChatGPT.

The study highlighted a specific case where ChatGPT incorrectly indicated no risk of drug interaction between the COVID-19 antiviral Paxlovid and verapamil, a blood pressure medication.

In reality, such a combination could lead to severe blood pressure lowering, underscoring the potential risks of relying solely on AI-generated information.

Gina Luchen, PharmD, ASHP director of digital health and data, noted that while AI tools have potential benefits in healthcare settings, including pharmacy, it is crucial for pharmacists to carefully evaluate their use and educate patients on trustworthy sources of medication information.

In response to the findings, a spokesperson for OpenAI advised caution, stating, “We guide the model to inform users that they should not rely on its responses as a substitute for professional medical advice or traditional care.”

OpenAI’s usage policies clearly state that their models are not tailored for medical advice, including diagnosis or treatment of serious medical conditions.


By Sophia Anderson

Sophia Anderson is an accomplished writer specializing in health and wellness. Sophia's writing covers a broad range of topics, including nutrition, mental health, fitness, and preventative care. She is known for her thorough research, attention to detail, and ability to connect with her audience through relatable and insightful content.
