A major EBU study warns that popular AI chatbots often deliver unreliable news, with 45% of answers inaccurate or poorly sourced and one in five outright false.

AI is increasingly used to access news and current affairs, but a major study coordinated by the European Broadcasting Union (EBU), involving 22 public broadcasters from 18 countries (including the BBC and Deutsche Welle), warns that popular chatbots such as ChatGPT, Copilot, and Gemini often deliver inaccurate or misleading information without clear sources. The researchers tested four AI assistants by feeding them 30 real user questions on topics such as international affairs and politics.

The results were stark: 45% of the answers were found to be inaccurate, misleading, or poorly sourced. In 20% of the cases, the responses contained entirely false information. Even when sources were provided, 31% of answers cited incorrect or irrelevant references.

Speaking to RTL, Dorien Verckist of the EBU explained that the systems struggle most with fast-moving topics such as political developments, conflicts, or natural disasters. She noted that the study included questions like "Is Trump starting a trade war?" or "How many people have died in the latest earthquake in Iran?", where factual accuracy depends on the latest updates. In such cases, Verckist said, the assistants often failed to provide reliable answers.

Confident tone despite false information

One particularly striking example involved the question "Who is the current Pope?". Verckist explained that one chatbot acknowledged that the previous Pope had died, yet still identified him as the current one. According to her, this highlighted a core issue: many systems mix outdated data with newer information in a way that produces confusing or plainly wrong results.

Another major concern is the tone of the AI responses. Despite being wrong, the chatbots often respond in a tone of absolute confidence. Verckist warned that they express no uncertainty, noting that this makes it easier for users, especially those with little experience of AI or journalism, to accept the answers as fact.

The study also found these issues to be widespread across languages and platforms, indicating systemic flaws rather than isolated errors. In many cases, outdated information was presented as current, or facts from different sources were improperly blended.

The EBU sees this as a threat to public trust in the media: if false information is associated with the name of a broadcaster, it could cause lasting damage to the reputation of reliable journalistic sources.

Online news increasingly consumed through chatbots

A separate report by the Reuters Institute for the Study of Journalism reinforces the urgency: around 7% of online news is now consumed through chatbots like ChatGPT, Copilot, or Gemini. Among people under 25, that figure rises to 15%.

Media law expert Marc Cole from the University of Luxembourg raised further concerns from a legal standpoint. He argued that AI-generated news summaries do not constitute journalism in the traditional sense: the systems do not produce original content, but rather statistical reproductions based on existing journalistic texts.

This opens up complex copyright questions: can AI systems legally use media content without explicit consent? According to Cole, the legal framework in Europe remains unclear. A "text and data mining" exception may allow certain sources to be used for AI training, but it is not evident whether it also covers content summarised and presented by chatbots.

For that reason, Cole is calling for more transparency and regulation, noting that the most important step would be requiring AI systems to clearly indicate the sources of their information. He added that users need to see which sources the system relies on. Otherwise, he said, there would be no way to verify the accuracy of the content.

He also warned against the opposite extreme: media outlets fully blocking their content from being accessed by AI systems. For him, such a step would be just as problematic, as quality content would not be available to train or inform these models, leading them to produce even more nonsense, along the lines of "garbage in, garbage out".

The EBU echoes this concern in its ongoing campaign 'Facts In – Facts Out', which calls for greater responsibility from AI developers, more transparency in how information is sourced, and open dialogue with media organisations to ensure public trust is not eroded.
