Be Wary of False, AI-Generated Medical Information
AI chatbots like ChatGPT can be useful journalistic tools, but the information they provide should never be accepted at face value. Two studies presented in December at the American Society of Health-System Pharmacists conference in Anaheim, California, highlighted the chatbots' ongoing limitations.
**Study 1: Accuracy of ChatGPT’s Drug Information**
* **Objective:** Researchers and pharmacists from Iwate Medical University in Japan and Torrance Memorial Medical Center in California compared ChatGPT's answers about 30 medications with information from Lexicomp, an evidence-based clinical reference.
* **Results:**
* ChatGPT correctly answered just two of the 30 drug questions.
* Its answers were incorrect for 26 drugs and partially inaccurate for the remaining two.
* While ChatGPT may improve as its underlying models are updated, consulting a pharmacist or doctor about drug-related questions remains essential.
**Study 2: ChatGPT’s Handling of Pharmacist Inquiries**
* **Objective:** Researchers from Long Island University College of Pharmacy in New York evaluated ChatGPT’s responses to 39 queries posed by pharmacists to a drug information service.
* **Results:**
* ChatGPT failed to respond, or gave an inaccurate or incomplete response, for 74% of the questions (29 of the 39).
* ChatGPT frequently fabricated references, providing URLs that led to non-existent studies; a basic link check can catch this failure mode (see the sketch after this list).
* In one instance, ChatGPT incorrectly stated that there was no drug interaction between Paxlovid, an antiviral used to treat COVID-19, and Verelan (verapamil), a blood pressure medication. In fact, the two drugs can interact, potentially causing an excessive drop in blood pressure.
* ChatGPT incorrectly advised on the conversion of a muscle spasm medication from injectable to oral form, citing guidance from non-existent organizations. The recommended dosage was off by a factor of 1,000, potentially leading to an incorrect prescription.
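The fabricated-reference finding suggests a first-pass check any newsroom can automate: confirm that the URLs a chatbot cites actually resolve. The Python sketch below is illustrative only, not part of either study; the URL in the example is a placeholder, and a link that loads still has to be read to confirm it supports the chatbot's claim.

```python
"""Quick liveness check for chatbot-supplied citation URLs (illustrative sketch)."""
import requests

def check_citation_urls(urls):
    """Print an HTTP status for each cited URL; 404s and failed lookups are red flags."""
    for url in urls:
        try:
            # HEAD is lightweight; some servers reject it, so fall back to GET.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                resp = requests.get(url, allow_redirects=True, timeout=10)
            status = resp.status_code
        except requests.RequestException as exc:
            status = f"unreachable ({exc.__class__.__name__})"
        print(f"{url} -> {status}")

# Placeholder URL, not taken from either study:
check_citation_urls(["https://example.com/hypothetical-study"])
```

A 200 response proves only that a page exists, not that it says what the chatbot claims, so this screens out the crudest fabrications rather than replacing source verification.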
**Implications for Journalists**
* Journalists covering medical studies should continue their due diligence: examine studies for conflicts of interest and seek comment from independent researchers.
* Follow news coverage of AI-generated medical information, along with guidance from medical societies and regulators.
* Ask researchers about their study data sources and validation methods.
* Verify any information obtained from ChatGPT or other AI tools against independent sources or peer-reviewed references before publishing (see the sketch below for one way to confirm that a cited paper exists).
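One concrete verification step is to look a chatbot-supplied citation up in an authoritative index. The sketch below, offered as an illustration rather than a prescribed workflow, queries PubMed's public E-utilities API by article title; zero hits is a strong signal the reference was fabricated. The title in the example is hypothetical, and a match still needs to be read against the chatbot's claim.

```python
"""Check whether a chatbot-cited paper exists in PubMed (illustrative sketch)."""
import requests

def pubmed_title_hits(title):
    """Return the number of PubMed records whose title matches the query."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": f"{title}[Title]", "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    # E-utilities returns the count as a string inside "esearchresult".
    return int(resp.json()["esearchresult"]["count"])

# Hypothetical citation title, not taken from either study:
hits = pubmed_title_hits("Effects of Paxlovid on Verapamil Pharmacokinetics")
print("No matching record: likely fabricated" if hits == 0
      else f"{hits} candidate record(s) found")
```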
**Additional Resources:**
* [ChatGPT Flubbed Drug Information Questions](https://www.medpagetoday.com/meetingcoverage/ashp/107716) – MedPage Today article
* [ChatGPT Not Ready for Prime Time for Medication Queries](https://www.pharmacypracticenews.com/Online-First/Article/12-23/ChatGPT-Not-Ready-for-Prime-Time-for-Medication-Queries/72307) – Pharmacy Practice News article
* [ChatGPT generates fake data set to support scientific hypothesis](https://www.nature.com/articles/d41586-023-03635-w) – Nature article
* [Generative AI tools like ChatGPT and their potential in health care: A primer for journalists](https://healthjournalism.org/blog/2023/03/generative-ai-tools-like-chatgpt-and-their-potential-in-health-care-a-primer-for-journalists/) – AHCJ blog post from March 2023