The EU’s AI Act: Key Features Explained

What is the EU AI Act?

The AI Act aims to protect fundamental rights, democracy, the rule of law, and the environment from the risks posed by AI, while also positioning Europe as a leader in AI and encouraging innovation.

Recently, European Union (EU) leaders agreed on new rules governing the use of artificial intelligence (AI). These rules, known as the AI Act, make the EU a pioneer in AI regulation, ahead of the US, China, and the UK.

The AI Act focuses on data quality, transparency about how AI systems work, human oversight, and accountability.

The European Parliament will vote on the AI Act next year; if it passes, the rules will take effect in 2025.


Key Features:

A risk-based approach

The proposed legislation defines four categories of AI risk: unacceptable risk, high risk, limited risk, and minimal or no risk.

  • Unacceptable risk: The AI Act bans AI systems that threaten people’s safety, livelihoods, and rights. Prohibited practices include manipulating behavior, scraping facial images without consent, recognizing emotions in workplaces and schools, social scoring, categorizing people by sensitive attributes such as religion or sexual orientation, and some forms of predictive policing.
  • High risk: The AI Act targets systems that could seriously harm health, safety, fundamental rights, democracy, elections, or the rule of law. Such systems will be permitted, but they must meet a specific set of requirements and fulfill certain obligations before entering the EU market.
  • Limited risk: AI systems classified as “limited risk” are subject to transparency obligations. A chatbot, for example, must make clear that users are talking to a machine, so they can make an informed choice about whether to continue the interaction.
  • Minimal or no risk: AI applications such as those in video games and spam filters may be used freely. Because they pose little or no risk to citizens’ rights or safety, the Act leaves them unregulated.

Exceptions for law enforcement

The AI Act treats remote biometric identification as high-risk and subjects it to strict safeguards. Law enforcement may not use it in public spaces, except to search for missing children, prevent terrorist threats, or apprehend suspects of serious crimes.

AI systems with broad applications and foundational models

The AI Act introduces dedicated rules for general-purpose AI (GPAI), especially when it is used in high-risk systems. It also covers foundation models, large systems capable of many tasks such as generating text, images, and code; these models must meet transparency requirements before being placed on the market. Together, these rules aim to cover diverse AI uses and ensure safety.

Consequences for failure to comply

The AI Act will impose penalties on non-compliant companies, calculated as a percentage of their worldwide annual revenue or a fixed sum, whichever is greater.

These penalties range from 7.5 million euros for supplying incorrect information to 35 million euros for violating the Act’s bans on certain AI uses.
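The “whichever is greater” rule described above can be sketched as a simple calculation. This is an illustrative sketch only: the function name and the example revenue and percentage figures are assumptions for demonstration, not values taken from the Act’s final text.

```python
def penalty(annual_revenue_eur: float, pct_of_revenue: float, fixed_sum_eur: float) -> float:
    """Fine under a 'percentage of worldwide annual revenue or a fixed
    sum, whichever is greater' rule, as described in the AI Act."""
    return max(annual_revenue_eur * pct_of_revenue, fixed_sum_eur)

# Hypothetical example: 2 billion EUR revenue, a 7% rate, 35 million EUR floor.
# The percentage-based amount (140 million EUR) exceeds the floor, so it applies.
print(penalty(2_000_000_000, 0.07, 35_000_000))  # 140000000.0

# For a smaller company (100 million EUR revenue), the fixed floor dominates.
print(penalty(100_000_000, 0.07, 35_000_000))  # 35000000
```

The `max` comparison captures the key point: large firms cannot treat the fixed sum as a cap, since the revenue-based figure grows with their turnover.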

Additionally, the EU will establish the EU AI Office, responsible for overseeing compliance and penalizing those who violate the legislation.

