The European Union has adopted a landmark AI law known as the AI Act, which aims to balance innovation with the protection of fundamental rights. The Act establishes a risk-based framework in which obligations scale with the potential impact of an AI application. High-risk AI systems must undergo conformity assessments before they can be placed on the market.
The Act bans certain AI practices deemed unacceptably risky or unethical, such as biometric categorisation systems that infer sensitive characteristics and social scoring systems. Law enforcement use of biometric identification is permitted only under strict safeguards. Generative AI models are subject to transparency requirements and must label artificially generated content.
Other countries are also moving towards AI regulation, following the EU’s lead: the US, China, India, and other nations are developing or implementing their own AI rules. Most of the AI Act’s provisions are expected to apply from mid-2026, with financial penalties for non-compliance.
Critics have raised concerns about the Act’s scope and potential loopholes, suggesting that lobbying by large technology companies may have weakened its provisions. Despite these criticisms, the Act represents a significant step towards regulating AI in the EU and setting global standards.