New Research Suggests Caution with ChatGPT for Medical Advice
A recent study by researchers at Long Island University tested ChatGPT's accuracy in answering drug-related queries. The researchers posed 39 questions to the AI-powered chatbot, all of them genuine inquiries received by the university's School of Pharmacy Drug Information Service.
The study revealed that ChatGPT answered only about 25% of the questions accurately, falling short on the remaining 75%: those responses were incomplete, inaccurate, or failed to address the original question.
These findings have raised concerns among medical professionals about using ChatGPT to obtain health information. Given the chatbot's widespread popularity and rapid growth, they worry that students, pharmacists, and general consumers alike may turn to it for health-related advice.
Sara Grossman, an associate professor of Pharmacy Practice at Long Island University, voiced that concern directly: "Given the popularity of ChatGPT, we fear that our students, other pharmacists, and even general consumers will rely on such resources to obtain information about their health and well-being."
The research focused on pharmaceutical questions and highlighted several instances in which ChatGPT's answers could lead to incorrect or even harmful recommendations. For example, when the researchers asked whether taking the antiviral COVID-19 medication Paxlovid and the blood pressure medication verapamil together would cause side effects, ChatGPT replied that the combination had no adverse effects. In reality, taking the two drugs together can cause a significant drop in blood pressure, resulting in dizziness or fainting.
"The use of ChatGPT to solve this problem may put patients at risk of unnecessary, avoidable drug interactions," Grossman wrote in an email to CNN.
In addition, the study found that ChatGPT failed to produce valid scientific references to support its answers for 21 of the 39 questions. Closer inspection revealed that the seemingly legitimate citations it did provide were fabricated.
The researchers also found that ChatGPT suggested incorrect dose-conversion factors for the muscle relaxant baclofen, putting patients at risk of overdose if medical professionals were to follow its guidance.
"This reaction is riddled with errors and problems that could have far-reaching implications for patient care," Grossman adds.
Previously published studies have raised similar concerns about ChatGPT's tendency to fabricate scientific references when answering medical questions, and even to misattribute the authorship of articles published in medical journals.
Although she had rarely used ChatGPT before conducting the research, Grossman was struck by its ability to synthesize information almost instantly, a task that would take trained professionals hours to accomplish.
"The responses were very professional and sophisticatedly formulated, which may contribute to strengthening the illusion of its accuracy," Grossman said. "Users, consumers, or others who may not be able to distinguish fact from fiction can be influenced by its perceived authority."
An OpenAI spokesperson encouraged users not to rely solely on ChatGPT for professional medical advice or treatment, and advised them to consult their healthcare providers for accurate, reliable information.
ChatGPT can be a useful tool for medical education and other structured tasks, but its use in direct patient care should be approached with caution. Recognizing its limitations and biases is crucial to avoiding potential harm.
