Earlier this week, deceptive YouTube ads featured fake endorsements by Luxembourg's Prime Minister and Foreign Minister for dubious investment schemes.

In response to enquiries, our colleagues from RTL.lu learned from the Ministry of State on Tuesday that preliminary actions have been taken.

What happened?

On Monday, an RTL user sent our colleagues at RTL.lu a screenshot of a YouTube ad featuring Luc Frieden and a video featuring Xavier Bettel seemingly speaking in German.

In the video featuring Frieden, users can hear a person speaking in German, but it is fairly apparent that something is off.

The video showing Xavier Bettel is different. It sounds as if Bettel is saying something in German, but the sentences attributed to him are entirely fabricated. In addition, there is a clear mismatch between the spoken words and the movement of his lips.

Why is this problematic?

While most people will swiftly identify these videos as fake, these instances underscore the current capabilities of technology and, more importantly, the potential future implications of artificial intelligence.

Although the Luxembourgish language currently acts as a deterrent for such scams, this will not hold true in the future, making it increasingly challenging for users to differentiate between reality and fiction.

How does voice imitation work?

The replication of a voice involves the use of AI. Essentially, a programme is fed audio files of the individual who is to be imitated. Through text-to-speech, the programme can then generate new audio that makes it seem as if the person is saying sentences or statements they have never said. Notably, Microsoft's VALL-E is a programme that only requires a few seconds of sample audio to effectively imitate someone. The quality of such imitation improves as the AI is exposed to more audio data for training.
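The pipeline described above can be sketched in miniature. In the following Python toy, every function name is hypothetical and the "audio" is just a list of numbers standing in for neural acoustic features, but the data flow mirrors the description: reference recordings are condensed into a speaker profile, which then conditions the text-to-speech step.

```python
# Conceptual sketch of a voice-cloning pipeline (all names hypothetical).
# Real systems such as VALL-E use neural networks; plain Python stand-ins
# are used here purely to show the flow of data:
#   reference audio -> speaker profile -> conditioned text-to-speech.

def extract_speaker_profile(audio_clips):
    """Stand-in for a neural speaker encoder: condense reference audio
    into a compact 'voiceprint' (here, just the average of toy features)."""
    n = len(audio_clips)
    length = len(audio_clips[0])
    return [sum(clip[i] for clip in audio_clips) / n for i in range(length)]

def synthesize(text, speaker_profile):
    """Stand-in for a text-to-speech model: produce 'audio' for the text
    in the style described by the speaker profile."""
    # A real model would predict acoustic frames; we simply pair each
    # word with the profile so the conditioning step is visible.
    return [(word, speaker_profile) for word in text.split()]

# A few short reference clips (toy feature vectors, not real audio).
clips = [[0.1, 0.4, 0.2], [0.3, 0.2, 0.2], [0.2, 0.3, 0.2]]
profile = extract_speaker_profile(clips)
fabricated = synthesize("sentences the speaker never said", profile)
print(profile)           # the averaged 'voiceprint'
print(len(fabricated))   # one synthesized chunk per word of input text
```

The key point the sketch illustrates is that the statement being synthesised never has to appear in the training audio: the reference clips only supply the voice, while the text supplies the content.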

What is YouTube's response to these ads?

YouTube has been part of Google since 2006. A Google spokesperson informed our colleagues from RTL.lu that such content breaches its Terms of Service and that comprehensive measures are in place to combat disinformation. Automated tools and human reviewers work in tandem to detect and remove content reported as deceptive.

During the second quarter of 2023, YouTube reportedly deleted 78,000 videos for spreading false information. An additional 963,000 videos were removed for spam or because they were specifically made to scam viewers.