In conversation with RTL, Michèle Finck, Professor of Law and Artificial Intelligence at the University of Tübingen, argues that AI must be regulated through clear, harmonised rules, highlighting the strengths and weaknesses of the EU's AI Act.

Finck considers AI to be potentially the most significant technological development of this era. She explained that during her legal studies, she gradually became fascinated by the question of how the law reacts to change, whether social or technological. This line of interest eventually led her to the field of artificial intelligence, which she describes as an area where the law must not only respond but is itself being transformed.

She highlighted three core questions that guide her work: what impact the technology has, what its future shape might be, and whether the legal framework needs to be adapted accordingly. According to Finck, the law acts as a tool through which society sets its rules and gives itself structure, and those rules are just as necessary when dealing with new technologies.

Finck emphasised the dual-use nature of AI. She likened AI to a tool such as a knife, which can be put to positive use but can also cause harm. She mentioned that colleagues at her university are employing AI to detect cancer earlier or to advance research on the human brain. At the same time, she warned that similar systems can be used to facilitate the design of chemical weapons.

Finck also stressed that regulatory systems must find a balance that preserves innovation while managing risks. She noted that this balancing act has always been part of technological regulation, but AI brings the challenge into much sharper focus.

AI Act: harmonisation rather than restriction

The EU's AI Act, in force since August 2024, is intended to create common standards across the European market rather than limit AI development. Finck explained that the core idea behind the legislation is to establish as much harmonisation and consistency as possible within the EU, strengthening the internal market by giving companies a clearer and more predictable legal framework.

She characterised the AI Act as a large and complex package of rules that functions, in her words, like a legislative umbrella: beneath it sit multiple chapters with very different purposes, addressing a diverse range of AI systems and actors. In her view, the Act is best understood not as a single uniform regime but as a collection of norms that apply to different actors and technologies.

For example, the third chapter of the legal text covers so-called high-risk AI systems. Finck mentioned the example of medical devices such as pacemakers. She noted that this category comes with strict requirements, including rules on data quality and cybersecurity, to ensure that these systems function safely and reliably.

AI Act and data protection: separate but simultaneous

Finck stressed that the AI Act should not be confused with the EU's General Data Protection Regulation (GDPR). She said that although data plays a role in training AI systems, most provisions of the Act target the functioning of the systems themselves. She added that in many cases, both legal frameworks will apply at the same time and must be interpreted together.

She believes it is too early to judge how the law will affect practice, simply because not enough evidence exists yet. In Finck's assessment, most of the Act's rules are not so strict as to force companies to fundamentally change their business models. Despite this moderate approach, she pointed out, the AI Act includes substantial penalties modelled on the GDPR, calculated either as a percentage of annual turnover or as high fixed fines.

Finck also expects significant differences between EU member states in the early stages of implementation. Each member state, she explained, must designate national bodies responsible for enforcing the AI Act, and those bodies need adequate funding and appropriately qualified staff. These conditions, she said, are already producing stark differences across the EU.

A complex and imperfect piece of legislation

According to Finck, one of the biggest challenges is the drafting quality of the AI Act. She considers it long, overly complex, and at times poorly written, which creates uncertainty about how to interpret certain obligations. She noted that several articles contain incorrect cross-references to other provisions and that some terms are defined inconsistently. For her, this lack of clarity is the central weakness of the law, although she expects the EU to address some of these issues through the forthcoming Digital Omnibus Package, which aims to revise and streamline recent regulations on the digital economy.

Immense research potential

Finck said her main motivation for working in this field is the abundance of unanswered questions and the emergence of entirely new legal frameworks around AI. She hopes society will manage to use AI primarily for positive purposes, such as developing new therapies, improving efficiency, or automating mundane tasks so that people can focus on more creative and fulfilling work.

She also highlighted that the AI Act plays a crucial role in the European single market: without it, any company operating across Europe would have to navigate 27 different national regulatory systems instead of one harmonised set of rules.