The Council of the European Union has passed the AI Act, first proposed in April 2021, to harmonize regulations on artificial intelligence across the European Union (EU). Under the new law, which is set to take effect next month, AI systems are categorized by their level of risk, with stricter regulations imposed on those deemed high-risk. The Act also prohibits AI practices such as cognitive behavioral manipulation and social scoring, underscoring its commitment to preventing potential harm to society. The AI Act applies to both private and public actors operating within the EU, though it reportedly exempts systems used exclusively for research, military, and defense purposes. The legislation supports transparency, protection of fundamental rights, and accountability in AI while encouraging innovation and economic growth in Europe.
The AI Act classifies AI systems into different categories depending on the risk they could pose to society and fundamental rights. ‘Low-risk systems’ are subject to minimal transparency obligations, while ‘high-risk systems’ must meet a set of requirements to access the EU market. Certain AI practices, such as cognitive behavioral manipulation, social scoring, predictive policing based on profiling, and the use of biometric data to categorize individuals by race, religion, or sexual orientation, are prohibited outright because they pose an unacceptable risk. The legislation also addresses General-Purpose AI (GPAI) models. GPAI models are versatile tools with broad applicability, making them valuable assets in fields such as healthcare, finance, transportation, and entertainment. While GPAI models without systemic risk face limited requirements, those posing systemic risks (significant harm or disruption on a large scale) are subject to more stringent rules, emphasizing the need for transparency and accountability.
To ensure effective enforcement, the AI Act establishes several governing bodies: an AI Office within the European Commission responsible for enforcing the common rules; a scientific panel of independent experts supporting enforcement activities; an AI Board, comprising member states’ representatives, to advise and assist on the AI Act’s application; and an advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission. The AI Act introduces significant penalties for non-compliance, including fines calculated as a percentage of the offending company’s global annual turnover in the preceding financial year or a predetermined amount, whichever is higher. SMEs and startups face proportional administrative fines. This governance structure aims to ensure consistent and effective application of the regulations.
The regulation mandates increased transparency in the development and use of high-risk AI systems, with certain users required to register in the EU database. Users of emotion recognition systems must inform individuals when they are exposed to such systems. Before deploying high-risk AI systems, public service entities must assess their impact on fundamental rights. The AI Act provides an innovation-friendly legal framework that encourages evidence-based regulatory learning. It also establishes AI regulatory sandboxes: controlled environments for developing, testing, and validating innovative AI systems, including through real-world testing. Overall, these measures are intended to stimulate investment and innovation in AI within Europe. Following approval by the Council, the AI Act will undergo further procedural steps, including signature by the Presidents of the European Parliament and of the Council, before being published in the EU’s Official Journal. The regulation is expected to apply in full two years after its entry into force, with certain provisions taking effect sooner.