Millions of people across the globe are now turning to the artificial intelligence chatbot ChatGPT for medical advice, a trend that leading UK experts have labelled as potentially 'dangerous for health'.
According to data from OpenAI, a staggering 40 million users now seek health guidance from the platform every day. This represents a significant shift in how people access medical information, with more than 5 per cent of all global interactions on ChatGPT now being health-related.
A Digital First Port of Call
With the platform boasting over 800 million weekly users, it has become a primary resource for millions. People are using the AI to check symptoms, understand complex medical terminology, and navigate healthcare systems, particularly in nations with private healthcare where administrative hurdles are common.
The data reveals a telling pattern about when people seek this digital help. Seven out of ten health queries are made outside of standard clinic hours, with usage peaking when GP surgeries are closed and hospital A&E departments have their longest wait times.
Experts Sound the Alarm on 'Pattern-Matching' Risks
This rapid adoption of AI for medical diagnosis has raised serious concerns among professionals. Colette Mason, an author and AI consultant at London-based Clever Clogs AI, issued a stark warning in an interview with Newspage.
"We've already watched this horror film with mental health and now we're queuing up for the sequel," Mason stated. She pointed to the mixed outcomes seen with AI in mental health support, which included some benefits but also led to problems like emotional dependency, reinforced delusions, and crisis situations.
"Physical health is heading down exactly the same path and it could be dangerous," she continued. "OpenAI celebrates 40 million daily users seeking health guidance but forgets what happens when pattern-matching meets real medical emergencies."
Mason expressed frustration that lessons from previous mental health 'wake-up calls' appear to have been ignored. "We had a chance to get this right... Instead, we're doing it again, faster, and with your mum's stroke symptoms instead of your mate's anxiety spiral."
A Tool for the Informed, A 'Curse' for the Vulnerable
The AI consultant did acknowledge a potential benefit for a specific group of users. "It is a fantastic tool for power users who are doing their own informed research," she conceded.
However, she emphasised the disproportionate risk for others, concluding starkly: "For the vulnerable, it could be a curse." This highlights a critical divide: while those with medical literacy may use AI to supplement their own research, those without it could be misled by its responses, which are based on pattern recognition rather than clinical judgement.
The trend underscores a growing reliance on digital tools to fill gaps in accessible healthcare, but experts insist that AI chatbots are no substitute for professional medical diagnosis, especially in emergencies.