
New study raises concerns about ChatGPT's answers to medical questions

ChatGPT's Performance in Medical Queries Sparks Concerns

Researchers from Long Island University recently put ChatGPT to the test, posing 39 drug-related questions to the AI chatbot. The questions were genuine inquiries from the School of Pharmacy Drug Information Service at the university. The responses were then compared to those written and reviewed by trained pharmacists.

The results were concerning: ChatGPT provided accurate responses to only about 25% of the questions. The remaining 75% of answers were either incomplete, incorrect, or failed to address the issue at hand. These findings have raised eyebrows among researchers and medical professionals, as many people turn to ChatGPT for quick health information.

ChatGPT's Popularity and Rapid Growth Have Outpaced Precautions

Despite its popularity and rapid growth, ChatGPT's accuracy in providing medical information has been a topic of ongoing study, with mixed results.

In one instance, researchers asked ChatGPT whether taking the antiviral Covid-19 medication Paxlovid together with the blood pressure medication verapamil would cause any side effects. ChatGPT responded that there would be none. This is incorrect: the combination can cause a significant drop in blood pressure, which may lead to dizziness and fainting.

This highlights the potential dangers of relying on ChatGPT for health-related information, especially as its popularity continues to soar. The researchers noted that they fear students, pharmacists, and general consumers will rely on resources like ChatGPT to obtain health information.

The Challenge of Ensuring Reliability in AI-Driven Health Information

When researchers asked ChatGPT to provide scientific references to back up each of its answers, they discovered that the AI was unable to do so for a majority of their questions. When ChatGPT did provide references, the researchers found that they were often fabricated.

The researchers also found that ChatGPT was inaccurate when converting medication dosages. In one instance, they asked how to convert a spinal (intrathecal) dose of the muscle relaxant baclofen to an appropriate oral dose. Although the researchers could find no scientifically established conversion rate for this switch, ChatGPT supplied a single conversion factor and cited guidelines from two medical organizations to support it.

However, neither organization had officially published guidelines on dose conversions. The conversion factor ChatGPT provided had never been scientifically validated, and the software also made a critical unit error in the conversion to an oral dose, producing dosage recommendations up to 1,000 times lower than necessary.
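A dose exactly 1,000 times too low is the size of error a microgram/milligram mix-up produces (1 mg = 1,000 µg). The study's public summary does not spell out which units were swapped, so the sketch below is illustrative only: the dose values and the conversion factor are hypothetical, and the µg/mg swap is an assumption that happens to match the thousandfold discrepancy.

```python
# Hedged sketch: how confusing micrograms with milligrams yields a dose
# 1,000 times too low. All numbers are illustrative, not from the study.

UG_PER_MG = 1000  # 1 milligram = 1,000 micrograms

intrathecal_dose_ug = 300.0  # hypothetical spinal dose, in micrograms
conversion_factor = 100.0    # hypothetical intrathecal-to-oral multiplier

# Correct handling: apply the oral conversion factor, then convert µg to mg.
oral_mg_correct = intrathecal_dose_ug * conversion_factor / UG_PER_MG  # 30.0 mg

# Unit-swap error: treating the microgram figure as if it were milligrams,
# i.e. dividing by 1,000 once too often.
oral_mg_buggy = oral_mg_correct / UG_PER_MG  # 0.03 mg

print(oral_mg_correct / oral_mg_buggy)  # 1000.0 — a thousandfold underdose
```

In clinical terms, a 30 mg oral dose collapsing to 0.03 mg is precisely the kind of underdose the researchers warned about.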

The Risks of Reliance on AI for Medical Information

If medical professionals followed the advice provided by ChatGPT, they could risk administering oral Baclofen doses that are 1,000 times lower than necessary, potentially leading to withdrawal symptoms such as hallucinations and seizures.

The study from Long Island University is not the first to raise concerns about the potential dangers of relying on AI like ChatGPT for medical information. Previous studies have shown that AI can fabricate scientific references when answering medical questions and even falsify author names on previously published articles in scientific journals.

Professor Sara Grossman, one of the study's authors, was struck by ChatGPT's ability to generate near-instant answers that might have taken trained experts hours to compile. According to Grossman, the responses were phrased in a "very professional and sophisticated" manner, which may strengthen users' trust in the tool's accuracy, even if they are not medical experts or trained to evaluate such information critically.

OpenAI, the organization that developed ChatGPT, recommends that users not rely on the tool as a replacement for professional medical advice or treatment. ChatGPT is not designed to provide medical information, and its models should not be used for diagnosing or treating serious medical conditions.

Conclusion: Caution Is Required When Relying on AI for Medical Information

While AI tools like ChatGPT show promise in certain medical applications, their accuracy is not universally high. Human oversight is required to ensure reliability, especially in complex or nuanced medical scenarios. Consumers should be cautious when relying on AI for medical information and seek guidance from qualified medical professionals. As one Reddit user put it:

"AI is a tool to help people, not replace them."
