
Oxford study warns against using AI chatbots for medical advice

News Desk

Feb 11

Using artificial intelligence (AI) chatbots to seek medical advice can expose patients to risk, a new study has found.

The research concluded that AI tools used for medical decision-making frequently produce inaccurate and inconsistent information, which can lead to incorrect diagnoses and inappropriate advice. The study was conducted by researchers from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford and was published in the journal Nature Medicine.

The findings raise concerns as large language model-based chatbots are increasingly used by people seeking guidance on symptoms and possible health conditions.

Dr Rebecca Payne, a general practitioner and co-author of the study, said that the research showed that AI systems are not ready to function as a substitute for medical professionals. “Despite all the hype, AI just isn’t ready to take on the role of the physician,” she said.

She cautioned that people who rely on AI-generated medical responses may be put at risk. “Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed,” Dr Payne added.

As part of the study, researchers asked nearly 1,300 participants to assess a series of health-related scenarios. Participants were required to identify possible conditions and decide what action should be taken.

Some participants used AI tools powered by large language models to receive suggested diagnoses and guidance on next steps, while others relied on traditional methods, including consulting a GP.

Researchers evaluated the responses and found that AI-generated advice often combined correct information with incorrect or misleading guidance. The study noted that many users struggled to determine which parts of the information provided by the AI could be trusted.

Although the study found that AI chatbots perform well in standardised medical knowledge tests, researchers said this did not translate into safe or reliable use in real-world health situations. The research warned that using AI systems to assess personal medical symptoms could place users at risk.

“These findings highlight the difficulty of building AI systems that can genuinely support people in sensitive, high-stakes areas like health,” Dr Payne said.

The study’s lead author, Andrew Bean of the Oxford Internet Institute, said the results demonstrated ongoing limitations in how AI systems interact with people.
