EU's AI Act Explained

Team S

Posted on 14 Jul 2024. London, UK.

The European Union's Artificial Intelligence Act (AI Act) is a comprehensive regulation designed to ensure the ethical and safe development and use of AI technologies across the EU. The Act has now been officially published by the EU. Here is a summary of its key points and practical implications:


Objectives and Scope

  1. Ensuring Trustworthy AI: The primary goal of the AI Act is to foster the development and use of AI systems that are safe, transparent, and respectful of fundamental rights. It aims to create a regulatory framework that balances innovation with the protection of public interests and fundamental rights.
  2. Risk-Based Approach: The AI Act categorizes AI systems into different risk levels:
  • Unacceptable Risk: Practices that pose a clear threat to safety, livelihoods, and rights are banned. These include social scoring by governments and certain uses of real-time biometric identification.
  • High Risk: AI systems that significantly impact people's lives (e.g., those used in critical infrastructure, employment, and law enforcement) are subject to strict requirements. These include robust risk management systems, high-quality data sets, and detailed documentation.
  • Limited Risk: These AI systems are subject to transparency obligations, such as informing users they are interacting with AI.
  • Minimal Risk: The vast majority of AI systems (e.g., spam filters) fall into this category and face no new obligations under the Act.
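The tiered structure above can be sketched as a simple lookup. This is an illustrative, non-exhaustive mapping only; the use-case strings and the `classify` helper are assumptions for the example, not terminology from the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of example use cases to tiers, following the
# examples given in the Act's risk-based approach (not a legal tool).
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric identification": RiskTier.UNACCEPTABLE,
    "critical infrastructure management": RiskTier.HIGH,
    "cv screening for employment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a known example use case."""
    return USE_CASE_TIERS[use_case]

print(classify("cv screening for employment").name)  # HIGH
```

In practice, classification under the Act depends on detailed legal criteria in its annexes, not on a keyword lookup; the sketch only conveys the four-tier structure.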


Governance and Enforcement

  1. Conformity Assessments: High-risk AI systems must undergo rigorous conformity assessments before being placed on the market, ensuring they meet the necessary standards for safety and performance.
  2. National Authorities and European Artificial Intelligence Board: Each EU member state will appoint national authorities to oversee the implementation and enforcement of the AI Act. The European Artificial Intelligence Board will coordinate these efforts to ensure consistency across the EU.

Supporting Innovation

  1. Regulatory Sandboxes: To encourage innovation, the AI Act includes provisions for regulatory sandboxes, allowing developers to test AI systems in a controlled environment under regulatory supervision.
  2. Support for SMEs: Recognizing the challenges faced by small and medium-sized enterprises (SMEs), the AI Act provides measures to support their compliance, such as reduced fees for conformity assessments.


Fundamental Rights and Ethical Considerations

  1. Protection of Fundamental Rights: The AI Act is designed to safeguard fundamental rights, including privacy, non-discrimination, and the right to a fair trial. It ensures that AI systems are developed and used in ways that do not infringe on these rights.
  2. Ethical AI Development: The Act promotes the development of AI technologies that contribute positively to society and the environment. It encourages the use of AI to enhance human well-being and societal good.


Practical Implications

  1. Compliance Costs: For businesses and public authorities developing or using high-risk AI systems, compliance costs are estimated between €6,000 and €7,000 per system, with additional annual costs for human oversight ranging from €5,000 to €8,000. Verification costs could add another €3,000 to €7,500.
  2. Transparency and Accountability: AI systems interacting with humans, such as chatbots, must clearly inform users they are interacting with AI. Systems generating deep fakes must disclose their artificial nature.
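The per-system figures quoted above can be combined into a rough first-year budget estimate. The `first_year_cost` helper and the assumption that the three cost components simply add up are illustrative simplifications, not a methodology from the Act:

```python
def first_year_cost(n_systems: int,
                    compliance=(6_000, 7_000),    # per-system compliance cost range (EUR)
                    oversight=(5_000, 8_000),     # annual human-oversight cost range (EUR)
                    verification=(3_000, 7_500)): # verification cost range (EUR)
    """Return a (low, high) estimated first-year cost in euros,
    summing the three per-system cost ranges quoted in the article."""
    low = n_systems * (compliance[0] + oversight[0] + verification[0])
    high = n_systems * (compliance[1] + oversight[1] + verification[1])
    return low, high

# Example: an organisation deploying three high-risk systems.
print(first_year_cost(3))  # (42000, 67500)
```

So a single high-risk system would land roughly between €14,000 and €22,500 in its first year under these figures; actual costs will vary with the system and the assessment route chosen.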


Legal Basis and Harmonization

  1. Legal Framework: The AI Act is based on Article 114 of the Treaty on the Functioning of the European Union (TFEU), ensuring the establishment and functioning of the internal market by harmonizing rules for AI systems.
  2. Avoiding Fragmentation: By setting harmonized rules, the AI Act aims to prevent the fragmentation of the internal market, ensuring legal certainty and a level playing field for AI developers and users across the EU.


The AI Act aims to set a global standard for AI governance, promoting innovation while safeguarding ethical standards and public trust. It ensures that AI technologies are developed and used in ways that are safe, transparent, and respectful of fundamental rights, fostering a trustworthy AI ecosystem within the EU.


Visit the official document on EUR-Lex

Deloitte summary


#ArtificialIntelligence #EU #AI #Regulation
