British households are being cautioned against relying on artificial intelligence chatbots for crucial financial decisions after a major investigation uncovered a worrying pattern of inaccurate and misleading advice.
Key Findings from the Investigation
Consumer champion Which? conducted rigorous testing of popular AI tools including Microsoft's Copilot, ChatGPT, and Meta's AI, presenting them with 40 different financial and legal questions. The results revealed significant inaccuracies that could prove costly for consumers.
Researchers discovered that the AI systems advised exceeding HMRC's investment limits on Individual Savings Accounts (ISAs), potentially exposing unsuspecting users to tax penalties. In another concerning example, ChatGPT incorrectly claimed that travel insurance is mandatory for visiting most EU countries, a false statement that could pressure people into unnecessary purchases.
Meta's AI performed particularly poorly, providing incorrect information about how to claim compensation for delayed flights. The testing assessed answers based on accuracy, relevance, clarity, usefulness and ethical responsibility.
Real-World Consequences for Consumers
The potential harm extends beyond theoretical scenarios. A 65-year-old user shared their experience, stating: "It just gave me all the wrong information. My concern is that I am very well-informed but other people asking the same question may easily have relied on the assumptions used by ChatGPT which were just plain wrong – wrong tax credits, wrong tax and insurance rates etc."
Which? said the investigation uncovered "far too many inaccuracies and misleading statements for comfort", findings it considers especially concerning given that people are increasingly turning to AI for important financial or legal queries.
Regulatory Warnings and Industry Response
The Financial Conduct Authority issued a stark warning, emphasising that unlike regulated advice from authorised firms, AI-generated guidance is not covered by the Financial Ombudsman Service or the Financial Services Compensation Scheme. This leaves consumers with little protection if they suffer a financial loss after following incorrect AI advice.
In response to the findings, Microsoft encouraged users to verify the accuracy of AI-generated content and said it is committed to improving its technology based on feedback.
OpenAI acknowledged that improving accuracy is a challenge for the entire industry, but highlighted that its latest default model, GPT-5, is its "smartest and most accurate" system to date.
The research, which included a survey of 4,000 adults about their AI usage, serves as a crucial reminder that while AI tools offer convenience, they should not be trusted blindly for matters involving personal finances or legal obligations.