Researchers at Long Island University conducted a study to assess ChatGPT’s ability to answer medication-related questions.
They presented 39 real queries from the university’s College of Pharmacy drug information service to the artificial intelligence chatbot and compared its responses with those crafted by trained pharmacists.
The study, presented at a meeting of the American Society of Health-System Pharmacists, found that ChatGPT accurately addressed only about 10 of the questions, roughly a quarter of the total.
For the remaining 29 queries, the responses were incomplete, inaccurate, or failed to address the question at all.
Although ChatGPT became the fastest-growing consumer application ever, amassing almost 100 million registrations within two months of its November 2022 release, the study raised concerns about its accuracy on health and medication-related queries.
Sara Grossman, an associate professor of pharmacy practice at Long Island University, expressed worry that the chatbot’s popularity might lead people to seek health information from it, potentially resulting in harmful consequences.
One example highlighted the dangers of relying on ChatGPT’s advice.
When asked about a potential interaction between the Covid-19 antiviral medication Paxlovid and the blood-pressure lowering medication verapamil, ChatGPT provided inaccurate information, potentially putting individuals at risk.
The study also found that when researchers requested scientific references for ChatGPT’s responses, the chatbot could only provide them for eight out of 39 questions.
Additionally, the researchers discovered that the software fabricated references, producing citations that appeared legitimate but pointed to nonexistent sources.
Grossman emphasized the potential risks of relying on ChatGPT’s guidance, citing instances where the chatbot provided inaccurate dose conversion ratios for medications.
Such errors, if followed by healthcare professionals, could have serious consequences for patient care.
Although a spokesperson for OpenAI, the organization behind ChatGPT, has advised users not to rely on the chatbot for medical information, Grossman expressed concern that people might treat it as a quick source of answers, much as they search for medical advice on search engines.
Grossman recommended that individuals seeking medical information online turn to reputable sources such as governmental websites like the National Institutes of Health’s MedlinePlus page.
However, she stressed that online answers should not replace the advice of healthcare professionals, as each patient’s case is unique, and personalized guidance is crucial.
ChatGPT Falls Short In Medical Queries: Study Raises Concerns