EU AI Act: Landmark Legislation for Artificial Intelligence in 2024

The European Union (EU) is set to adopt the EU AI Act, groundbreaking legislation that will regulate artificial intelligence (AI) according to its level of risk. The act aims to ensure that AI systems are safe and transparent, and that they respect human rights and EU values.

The act, which was approved by EU governments on 2 February, is expected to be signed off by the European Parliament in April and to enter into force in 2026. It is the first law of its kind in the world and will set a precedent for other countries and regions.

The act comes at a time when AI is advancing rapidly and becoming more powerful and pervasive. New versions of generative AI models, such as GPT (the model that powers ChatGPT, developed by OpenAI in San Francisco, California), are expected to launch this year.

These models can create realistic images, code, and video, but also pose potential risks, such as being used for scams and misinformation. Other countries, such as China and the US, have already implemented or are working on their own AI regulations. Last October, President Joe Biden signed the US’s first AI executive order, requiring federal agencies to manage the risks of AI.

How does the EU AI Act classify AI models?

The EU AI Act categorizes AI models based on their potential risks to society and applies different rules and obligations accordingly.

  • Unacceptable risk: AI systems that are deemed to violate fundamental rights or values, such as those that use biometric data to infer sensitive characteristics, for example people’s sexual orientation, are banned under the act.
  • High risk: AI systems that are used in critical domains, such as hiring and law enforcement, are subject to strict requirements. Developers must demonstrate that their models are safe, transparent, and explainable to users, and that they comply with privacy and non-discrimination laws. They must also register their models in a public database and monitor their performance and impact.
  • Low risk: AI systems that pose minimal or no risk, such as chatbots and video games, are subject to minimal obligations. Developers must inform users when they are interacting with AI-generated content and ensure that users can opt out if they wish.
  • General purpose: AI systems that have broad and unpredictable uses, such as generative models, are regulated in a separate two-tier category. The first tier covers all general-purpose models, except those used only for research or published under an open-source license; these models must be transparent about their training methods, energy consumption, and copyright compliance. The second tier covers general-purpose models that have “high-impact capabilities” and pose a higher “systemic risk”; these must undergo rigorous safety and cybersecurity testing and disclose their architecture and data sources. Any model that uses more than 10²⁵ FLOPs (floating-point operations) in training qualifies as high impact (see the rough estimate sketched after this list). Open-source models in this tier are also regulated unless they are used only for research.
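
For a sense of scale, training compute is often estimated with the rule of thumb of roughly 6 FLOPs per model parameter per training token for dense transformer models. That heuristic, and the model sizes below, are illustrative assumptions rather than anything specified in the act, which defines only the 10²⁵-FLOP cut-off. A minimal sketch:

```python
# Rough training-compute estimate using the common heuristic
# FLOPs ≈ 6 × parameters × training tokens (dense transformers).
# Both the heuristic and the example model sizes are assumptions
# for illustration; the act specifies only the 1e25 FLOP cut-off.

THRESHOLD_FLOPS = 1e25  # the act's "high-impact" threshold

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

for label, params, tokens in [
    ("70B params, 2T tokens", 70e9, 2e12),    # ~8.4e23 FLOPs
    ("1T params, 10T tokens", 1e12, 10e12),   # ~6.0e25 FLOPs
]:
    flops = training_flops(params, tokens)
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{label}: ~{flops:.1e} FLOPs ({status} the 1e25 threshold)")
```

On this estimate, only the very largest frontier-scale training runs would land in the second tier.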

The act applies to models operating in the EU, and any firm that violates the rules risks a fine of up to 7% of its annual global turnover.

What are the implications for research and innovation?

The EU AI Act has received mixed reactions from the research community. Some researchers have praised the act for its potential to foster open science and good practice, whereas others have expressed concerns that it could hamper innovation and competitiveness.

The act exempts AI models developed purely for research, development, or prototyping, which means that most academic research will not be affected by the regulations. However, researchers will still have to consider the ethical and social implications of their work and how it could be used or misused by others.

“The act will make researchers think about transparency, how they report on their models and potential biases,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy.

Some researchers worry that the act could create barriers for the small companies and start-ups that drive innovation and research, especially in the field of general-purpose AI. Robert Kaczmarczyk, a physician at the Technical University of Munich in Germany and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization aimed at democratizing machine learning, says that the act could make it harder for small companies to adapt to and comply with the regulations.

“They might need to establish internal structures to adhere to the laws, which could be costly and time-consuming,” he says.

Some researchers also question the rationale behind regulating AI models based on their size or capability, rather than their use or impact. Jenia Jitsev, an AI researcher at the Jülich Supercomputing Centre in Germany and another co-founder of LAION, argues that there is no scientific basis for defining as dangerous any model that uses a certain amount of computing power.

“Smarter and more capable does not mean more harm,” they say. “It’s as unproductive as defining as dangerous all chemistry that uses a certain number of person-hours.”

Will the act promote open-source AI?

The EU AI Act has been seen as an opportunity to encourage open-source AI, meaning AI that is publicly available, replicable, and transparent. The act offers exemptions and incentives for open-source models, which could make them more attractive and accessible to researchers and developers.

The act also reflects the EU’s vision of competing with the US and China in the global AI landscape by leveraging its strengths in open science and collaboration. “The EU’s line of reasoning is that open source is going to be vital to getting the EU to compete with the US and China,” says Rishi Bommasani, who researches the societal impact of AI at Stanford University in California.

However, the act does not specify how open-source models will be defined or evaluated, which could create ambiguity and confusion. Bommasani says that legislators intend general-purpose models, such as LLaMA-2 and those from start-up Mistral AI in Paris, to be exempt, but the language of the act is unclear. “There is a lot of room for interpretation and debate,” he says.

How will the act be enforced?

The European Commission will create an AI Office to oversee the implementation and enforcement of the act, with the help of independent experts. The office will develop methods to assess the capabilities and risks of AI models and monitor their compliance and impact. The office will also cooperate with national authorities and international partners to ensure a consistent and coordinated approach.

The act also relies on the cooperation and self-regulation of AI developers and users, who will have to report any incidents or breaches of the rules and take corrective action. It also provides mechanisms for users to seek redress or compensation if they are harmed by AI systems.

However, some researchers doubt whether the act will be effective and feasible, given the complexity and diversity of AI models and applications. Jitsev questions whether the AI Office will have the resources and expertise to scrutinize the submissions and claims of AI developers, especially for large and powerful models.

“The demand to be transparent is very important, but there was little thought spent on how these procedures have to be executed,” they say.
