Harvard Experts Weigh in on Seeking Medical Advice from ChatGPT

In the late 2000s, physicians began noticing that patients were arriving at appointments with questionable medical information sourced from the internet, says internist and AI researcher Adam Rodman. According to Rodman, about 68 percent of adults have used search engines for medical advice, and about 32 percent, roughly half of those seeking health information online, now consult AI chatbots as well.

Rodman views these online resources as beneficial when used correctly. As an assistant professor at Harvard Medical School and a physician at Beth Israel Deaconess Medical Center, he provides guidance on utilizing AI effectively through op-eds and online courses. In an edited interview, Rodman introduces a stoplight system to determine when it is appropriate to consult a chatbot and when to seek a doctor’s opinion.

In the early 2000s, the term "internet-informed patient" described individuals who brought online articles to medical appointments, a behavior initially limited to tech-savvy patients. By the late 2000s, search engines were returning more relevant health results, giving rise to the term "Dr. Google," and patients began arriving with sometimes unfounded confidence in their own health knowledge.

The concept of "cyberchondria," a digital analogue of hypochondria, emerged as search engines proved capable of pushing users toward extreme health conclusions. The phenomenon highlights the risks of recommendation algorithms that prioritize engagement, sometimes at the expense of accuracy.

Incorporating AI into health inquiries adds complexity. Large language models (LLMs) resemble search engines in that they surface information users may subconsciously be seeking, but they differ in one important way: their authoritative tone creates a perceived relationship with the user, which could exacerbate cyberchondria.

AI and search companies are increasingly aware that their tools are used in health contexts and are implementing safety measures; bots often advise users to contact healthcare professionals. In principle, language models, especially advanced reasoning models, outperform search engines at diagnosing medical conditions.

A study led by Andrew Bean earlier this year showed that LLMs excel at identifying medical issues on their own but struggle when interacting with users, suggesting that the clarity of a user's query significantly affects an LLM's effectiveness.

Rodman sorts health queries into a stoplight system. Green-light queries, such as dietary advice for diabetes or understanding medication side effects, are generally safe to put to a chatbot. Yellow-light queries, such as making sense of a doctor's visit or interpreting test results, are reasonable to ask a chatbot but should be verified with a doctor. Red-light queries, such as how to manage a condition or whether to follow a prescription, require professional medical advice rather than a chatbot's answer.

Sharing health information with AI is not inherently riskier than using a search engine. However, major tech companies are building tools for entering medical data directly into AI systems, and research indicates that people disclose more to LLMs than to search engines, making privacy a larger concern in practice.

Original Source: news.harvard.edu
