
The inside story of the EU's AI Act from the person who wrote it - with Gabriele Mazzini (Part 1)

We sat down with Gabriele Mazzini, architect and lead author of the EU AI Act, and currently a Research Affiliate and Fellow at MIT, to get his reflections on the Act.


We covered the origins of the Act, challenges with regulating AI in the EU and specific areas considered 'high-risk', all with a slant on what it means for startups in the ecosystem. Gabriele also shared his critiques of the Act, including where he believes it goes further than necessary.


Below is a write-up in Gabriele's words. It's part 1 of our discussion, with Part 2 to be published in the next couple of weeks.


We hope you enjoy.


---


The origins and development of the EU AI Act


The EU AI Act wasn’t built overnight. It emerged from years of debate, negotiation, and compromise. My involvement began in 2017 when I joined the European Commission, eager to explore the intersection of law, policy, and technology.


Before that, I had worked on tech-driven development projects in sub-Saharan Africa. That experience deepened my appreciation for AI's potential, especially in resource-constrained and challenging environments, where I saw firsthand how science- and technology-based solutions could facilitate access to essential services such as healthcare, energy, or education. When the Commission decided to move forward with AI regulation in 2019, I had already spent years examining its legal implications. That positioned me well to contribute.


The first step was a White Paper, published in February 2020, aimed at stimulating debate and gathering feedback. By April 2021, the Commission had put forward a formal proposal for the AI Act. Then came negotiations: a long, complex process, not without its frustrations. By the time I left in July 2024, the final version had evolved significantly from the initial vision.

 

The challenges of regulating AI in the EU


The AI Act is broad - and perhaps too broad. It attempts to regulate AI across industries, from healthcare to education to law enforcement. This can be referred to as a horizontal approach. While it does not regulate the use of AI in entire sectors as such, through its risk-based approach it identifies certain applications as high-risk due to the potential for unintended impacts on individuals, such as the risk of discrimination. A horizontal approach has both advantages and drawbacks.


One challenge was the varying levels of AI literacy among policymakers. For instance, some struggled to differentiate between automation and intelligence, while others focused disproportionately on extreme, hypothetical risks. Political pressures compounded these issues because, as is typical, there are many varied, nuanced perspectives within European bodies.


The European Parliament and Council each had their own priorities, often already the result of lengthy compromises and negotiations between different policy authorities within each institution. As a result, instead of refining the regulation, they kept adding to it, introducing more complexity, more obligations and, I believe, ultimately more uncertainty. The result is a law that, while well-intended, may be difficult to implement effectively.

 

High-Risk AI Applications: HR & employment


AI is transforming hiring, firing, and employee evaluation. Recognising the risks, the AI Act classifies these systems as "high-risk."


There is sound reasoning behind this. Past cases, such as Amazon’s 2017 hiring scandal, have shown how AI-driven systems can introduce bias. Transparency and accountability are essential.


While most of the essential compliance requirements for high-risk AI systems (data governance, documentation, human oversight, transparency, etc.) were identified in line with the latest thinking at the time, the question remains whether they are specific enough for the plurality of applications, and whether, taken together, they represent a significant burden. Especially in the absence of harmonised standards and technical specifications, startups and smaller HR tech firms may struggle to meet the Act's stringent requirements, potentially disincentivising smaller companies from engaging in (or developing) innovative AI solutions. In that respect, a more adaptive, sector-specific approach could have mitigated these concerns, though it would have required strong regulatory coordination.

 

AI in Education: the Controversial Ban on Emotion Recognition


One of the more surprising elements of the AI Act is the outright ban on emotion recognition in education (as well as in the workplace), which I consider problematic due to its rigidity.


It was not part of the Commission's original proposal but was introduced following the position of the European Parliament. While concerns about privacy and accuracy are valid, an outright ban overlooks the potential benefits.


Emotion recognition AI could support student engagement, personalise learning, and even assist in mental health interventions (the ban would not apply when use is linked to health and safety reasons). GDPR already regulates profiling and data protection, making this additional restriction unnecessary. Instead of blanket prohibitions, well-defined and adaptable safeguards would have been the better path.

 

The EU's AI Act and Global Competitiveness


How does this position Europe in the global AI landscape?


The US has taken a more flexible approach, choosing to let innovation develop with minimal interference. China, while maintaining strict oversight in certain areas, actively supports domestic AI growth, and this has become increasingly evident in recent months. Europe, by contrast, has put an emphasis on developing strong safeguards through a regulatory framework that has turned out to be rather complex. While other factors are at play when it comes to Europe's ability to develop and adopt AI, in my view the question remains whether this regulation may hinder progress and create barriers to entry and adoption for companies attempting to place themselves at the forefront of this burgeoning revolution.


Larger corporations with extensive legal resources may be able to navigate these new rules. But for startups and smaller AI firms, compliance costs could be prohibitive, putting European AI development at a disadvantage.


Risk mitigation is essential, but so are balance, legal certainty, and predictability. The AI Act is an ambitious piece of legislation. Its success will depend on how it is enforced, and on how quickly it is adapted if things do not work as expected, including as regards making sure that Europe's AI sector has a fair chance to compete.


---

We will be back with Part 2 soon!


We are grateful to Gabriele for his time and for his reflections... 🙌


©2025 Brighteye Ventures Fund

The fund is managed by Gestron Asset Management SA, a regulated Luxembourg AIFM. 

BRIGHTEYE RESEARCH LONDON LTD - 7 Colville Mews, W11 2DA, London, UK

BRIGHTEYE RESEARCH PARIS SAS - 34 rue de Montpensier, 75001 Paris, France

GESTRON ASSET MANAGEMENT SA - 5 rue Jean Monnet, L-2180 Luxembourg
