Leading AI Chatbots Like Copilot, ChatGPT, And Gemini Provide Misleading And Inaccurate News Summaries, Study Reveals

The British Broadcasting Corporation (BBC) carried out a study on the accuracy of four prominent AI chatbots, namely, Perplexity, Google’s Gemini, Microsoft’s Copilot, and OpenAI’s ChatGPT.
The research commenced in December 2024. The AI assistants were asked 100 questions about the news and prompted to use BBC News sources where possible. For the purpose of the test, the broadcaster lifted its restrictions on AI crawlers to give the chatbots access to its content. BBC journalists then reviewed the chatbots' final answers, and boy, the results are not flattering for these companies.
According to the report, Microsoft's Copilot and Google's Gemini showed more significant issues than ChatGPT and Perplexity, though all four produced disappointing results. To cite an example of inaccuracy in the news summaries, Gemini incorrectly claimed that the NHS advises people to avoid vaping and to use other methods to quit smoking; the cited BBC article actually states that the NHS recommends vaping as a way to quit smoking. The discrepancies crept into all kinds of data, including dates.
See Also: Google’s Gemini AI Tells User To Die With Humiliating Passage; Internet Is Freaked Out
On occasion, the AI chatbots altered original quotes or simply invented them. Sometimes the chatbots pulled information from microsites or graphics designed to serve breaking news rather than from the actual source articles. Some of these sources were outdated, so the generated summaries did not reflect current data or context. The BBC report states:
The AI assistants we tested often finish their responses with short, one- or two-sentence conclusions. While other parts of the response are usually accompanied with citations, these summary statements are rarely attributed to anyone. Unfortunately, these generated conclusions can be misleading or partisan on sensitive and serious topics.
And
Missing context was one of the most common issues identified by reviewers.
Read the full BBC report here.
Deborah Turness – AI Distortion is new threat to trusted information https://t.co/Hst8sHFd88
— BBC News Press Team (@BBCNewsPR) February 11, 2025
BBC finds significant inaccuracies in over 30% of AI-produced news summaries https://t.co/f1XATmlMvx
— Ars Technica (@arstechnica) February 13, 2025
AI chatbots are distorting news stories, BBC finds https://t.co/xGwkvAI1Gf
— The Verge (@verge) February 11, 2025
Report: AI Chatbots Unable to Accurately Summarise News, BBC Finds https://t.co/x1q9KgxA6m #AI #GPT #LLMs pic.twitter.com/k5JVqwtGmL
— Library Journal (@LibraryJournal) February 11, 2025
See Also: All The Buzz On Artificial Intelligence
Cover: Grok