
Gérard Scheiwen, director of the Luxembourg Organisation for Reproduction Rights (Luxorr), highlighted a growing disconnect between social media consumption and copyright awareness during a Wednesday interview with our colleagues from RTL Radio.
“Today’s social media culture creates the illusion that content is free to copy and share, but that’s frequently not the case,” Scheiwen stated. He noted that many users unknowingly violate copyright law daily, comparing the issue to “taking a neighbour’s car without permission just because the keys are visible.”
Copyright law distinguishes between primary rights, which cover the use of purchased content such as books or newspapers, and secondary rights, which require additional permission for reproduction. This system is managed by collective rights societies, including Luxorr for print media and photography, Sacem for music, and Algoa for audiovisual works.
The situation becomes more complex with private copying, which sits in a legislative grey area. Luxembourg’s 2001 copyright law permits reproductions for personal use provided authors receive fair compensation. According to Scheiwen, the fundamental problem is Luxembourg’s lack of implementing regulations. “Without Grand-Ducal regulations establishing enforcement mechanisms, no one can actually be prosecuted for violations,” the Luxorr director explained.
The rapid development of artificial intelligence has introduced new copyright complications. While the EU’s 2024 AI Act establishes guidelines for “responsible” practices by AI providers, Scheiwen argues its provisions remain insufficient. The legislation mandates respect for copyright at national and international levels, but he criticises its requirement for “reasonable effort” to demonstrate compliance as overly vague.
Major tech companies like Google, Microsoft, and OpenAI face particular scrutiny. “These providers should acknowledge they’ve built their business models using decades of internet-scraped content,” Scheiwen stated, describing the issue as particularly “sensitive”. The AI Act does call for transparency about training data sources, but enforcement mechanisms remain weak.
Scheiwen emphasised the need to protect creators’ livelihoods: “This is about ensuring artists and authors can earn a living from their work.” He questioned government appeals for “balance” in transparency requirements, asking whether the scales in practice tip toward rights holders or toward AI corporations.
The solution, according to Luxorr, requires dual action: public education about content ownership, and stronger legislation to guarantee fair compensation through collecting societies.