AI Act: everything you need to know


Giovanni Piccirillo


In recent years, public awareness of the widespread use of artificial intelligence (AI) and machine learning technologies has grown enormously, and with it the demand for ethical guarantees and clarity about how AI-based systems are used. In response, the European Union, having announced in December 2023 that it had reached a preliminary agreement[1] on the core content of a future law on artificial intelligence, formally adopted the AI Act, which entered into force on 2 August 2024. The new rules will become fully applicable only from 2 August 2026, but some provisions take effect earlier: those on prohibited artificial intelligence systems apply from 2 February 2025, and the rules on governance from 2 August 2025.

The AI Act aims to create "a comprehensive set of legal rules on AI" and to "promote trustworthy AI in Europe and the rest of the world", ensuring that AI systems respect fundamental rights, security and ethical principles, and addressing the risks posed by very powerful and influential AI models. The EU AI Office[2] will monitor the implementation and enforcement of the AI Act. The consequences of non-compliance can be severe: fines range from €7.5 million or 1.5 percent of worldwide turnover up to €35 million or 7 percent of worldwide turnover, depending on the violation and the size of the company. It is therefore essential that providers, developers and users of AI models or AI-based systems understand the AI Act and its impact on their businesses.
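To make the penalty arithmetic concrete, here is a minimal sketch; the turnover figure and company are invented for illustration. For each tier, the applicable maximum is the fixed amount or the percentage of worldwide annual turnover, whichever is higher:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Maximum fine under the AI Act: the fixed cap or the percentage
    of worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct)

# Hypothetical company with €2 billion worldwide annual turnover.
turnover = 2_000_000_000

# Top tier: prohibited-practice violations (€35M or 7%).
print(f"Prohibited practices: €{max_fine(turnover, 35_000_000, 0.07):,.0f}")
# Bottom tier: lesser violations (€7.5M or 1.5%).
print(f"Lower tier:           €{max_fine(turnover, 7_500_000, 0.015):,.0f}")
```

For this hypothetical company, the top-tier cap works out to €140 million (7 percent of turnover exceeds the €35 million floor), while the lower tier caps at €30 million.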

AI Act basics

First, the AI Act applies to providers and developers of AI systems offered for sale or used within the EU (including AI technologies provided free of charge), regardless of whether those providers or developers are based in the EU or in another country. As with the EU General Data Protection Regulation (GDPR)[3], this means that US-based companies selling or offering AI-based technology within the EU may be subject to the law's sanctions for non-compliance. The Act does not separately address AI systems that handle the personal information of EU citizens; however, applicable EU data protection, privacy and confidentiality laws continue to apply to the collection and use of such information by AI-based technologies.

The AI Act takes a risk-based approach, classifying AI systems into four tiers. These tiers generally correspond to (1) the sensitivity of the data involved and (2) the specific use case or application of the AI.
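As a rough illustration of this four-tier taxonomy, the sketch below models the tiers as an enumeration; the tier names follow the Act, but the example mapping of applications to tiers is hypothetical (actual classification depends on the concrete use case and the Annexes):

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    PROHIBITED = "unacceptable risk"   # banned outright
    HIGH = "high risk"                 # strict compliance duties
    LIMITED = "limited risk"           # transparency obligations
    MINIMAL = "minimal risk"           # largely unregulated

# Hypothetical examples of how common applications might map onto the tiers.
EXAMPLES = {
    "social scoring system": RiskTier.PROHIBITED,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```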

The Regulation establishes a strict framework for the use of AI, expressly prohibiting practices that present an "unacceptable risk". These prohibitions aim to protect the fundamental rights, safety and integrity of individuals. The following categories of AI systems are prohibited:

  • AI systems that use manipulative, deceptive or subliminal techniques to distort the behavior of individuals and compromise their informed decision-making, causing significant harm. This includes the use of covert techniques to influence decisions in ways that otherwise would not have been made.
  • Systems that exploit vulnerabilities related to individuals' age, disability or socioeconomic circumstances to distort their behavior, causing significant harm. This ban aims to protect the most vulnerable people from abuse and discrimination.
  • Biometric categorization systems that infer sensitive attributes such as race, political opinions, union membership, religious or philosophical beliefs, sex life or sexual orientation. This excludes labeling or filtering of legally acquired biometric data sets and cases where law enforcement categorizes biometric data for specific investigations.
  • “Social scoring” systems that rate or rank individuals or groups based on social behavior or personal traits, resulting in harmful or unfavorable treatment.
  • Systems that assess the risk of an individual committing criminal offenses based solely on profiling or personality traits, except when used to supplement human assessments based on objective, verifiable facts directly related to criminal activity.
  • Compiling facial recognition databases by untargeted extraction of facial images from the Internet or surveillance camera footage.
  • Systems that infer emotions in workplaces or educational institutions, except for medical or safety reasons.
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except in specific and limited cases, such as searching for missing persons, preventing imminent threats to life or terrorist attacks, and identifying suspects in serious crimes.
[Figure: AI Risk Pyramid according to the Artificial Intelligence Act]

AI practices presenting an unacceptable risk attract the most severe penalties and can expose companies to fines of up to €35 million or 7 percent of annual worldwide turnover, whichever is greater.

The next category regulated by the AI Act is that of "high-risk" AI systems, which is much broader than the previous category and likely includes many AI applications already in use today.

An AI system is considered high risk if:

  • It constitutes a safety component of a product or is itself a product subject to the harmonization regulations listed in Annex I of the Regulation.
  • It requires a third-party conformity assessment before being placed on the market, as required by the harmonization legislation listed in Annex I.

High-risk applications fall into the categories listed in Annex III of the AI Act and may include:

  • Biometrics
  • Critical infrastructures
  • Vocational education and training
  • Employment, worker management and access to self-employment
  • Access to essential private services and essential public services
  • Law enforcement activities
  • Migration, asylum and border management
  • Administration of justice and democratic processes

The AI Act clarifies, however, that some AI systems, while falling within the Annex III categories, are not considered high risk if they do not pose a significant risk to the health, safety or fundamental rights of individuals. In particular, an AI system is not considered high risk if it is used exclusively to:

  • Perform a narrow, purely procedural task.
  • Improve the result of a previously completed human activity without replacing it.
  • Detect decision-making patterns or deviations from past patterns, without directly influencing automated assessments that have a significant impact on individuals.
  • Perform a task that is merely preparatory to an assessment relevant to the use cases listed in Annex III.

However, the AI Act gives the European Commission and the AI Office 18 months to develop a practical guide clarifying the boundaries of high-risk technologies, which users can follow to ensure compliance with these requirements.

At a minimum, developers and users whose technology falls into the high-risk category must be prepared to comply with the following requirements of the AI Act[4]:

  • Clearly indicate the provider's contact information on the system, packaging, or documentation.
  • Have a compliant quality management system.
  • Maintain necessary documentation and records.
  • Undergo the relevant conformity assessment procedures.
  • Draw up an EU declaration of conformity.
  • Affix the CE marking.
  • Fulfill registration obligations in the centralized EU database.
  • Take corrective action and provide requested information.
  • Demonstrate compliance upon request of the relevant authorities.
  • Ensure that the high-risk AI system complies with accessibility requirements.
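For teams tracking these duties, the obligations above can be represented as a simple machine-readable checklist. The sketch below is a hypothetical illustration; the field names are shorthand for the items in the list, not legal terms:

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskProviderChecklist:
    """Hypothetical tracker for the provider obligations summarized above."""
    contact_info_on_product: bool = False
    quality_management_system: bool = False
    documentation_and_records: bool = False
    conformity_assessment_done: bool = False
    eu_declaration_of_conformity: bool = False
    ce_marking_affixed: bool = False
    registered_in_eu_database: bool = False
    corrective_action_process: bool = False
    can_demonstrate_compliance: bool = False
    accessibility_requirements: bool = False

    def outstanding(self) -> list[str]:
        """Return the obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = HighRiskProviderChecklist(ce_marking_affixed=True)
print("Outstanding items:", checklist.outstanding())
```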

Beyond the high-risk category, the AI Act imposes transparency obligations on the use of AI and places some limitations on the use of general-purpose AI models. For example, the Regulation requires that AI systems intended to interact directly with humans be clearly identified as such, unless this is obvious from the circumstances. In addition, general-purpose AI models with "high impact capabilities" (defined as general-purpose AI models whose cumulative training compute, measured in floating-point operations (FLOPs), exceeds 10^25) may be subject to additional obligations. Among other requirements, providers of such models must maintain technical documentation of the model and its training results, adopt policies to comply with EU copyright law, and provide the AI Office with a detailed summary of the content used for training.
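The 10^25 FLOPs threshold can be sanity-checked with the commonly used 6·N·D training-compute heuristic (roughly six floating-point operations per parameter per training token). The heuristic and the model scales below are illustrative assumptions, not part of the Act:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough cumulative training compute via the common 6*N*D heuristic:
    ~6 floating-point operations per parameter per training token."""
    return 6 * params * tokens

THRESHOLD = 1e25  # the Act's presumption threshold for "high impact capabilities"

# Hypothetical model scales: (name, parameters, training tokens).
for name, n, d in [
    ("mid-size model", 70e9, 2e12),
    ("frontier-scale model", 1e12, 10e12),
]:
    flops = training_flops(n, d)
    flag = "above" if flops > THRESHOLD else "below"
    print(f"{name}: {flops:.2e} FLOPs ({flag} the 1e25 threshold)")
```

Under this heuristic, a 70-billion-parameter model trained on 2 trillion tokens lands around 8.4e23 FLOPs, well below the threshold, while a trillion-parameter model trained on 10 trillion tokens would exceed it.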

Finally, the AI Act provides for a structured system of governance, distributed between authorities at the European Union level and national authorities. This institutional architecture is designed to ensure uniform and effective application of the regulatory framework.

Title VII of the AI Act (Articles 64 et seq.) establishes a European governance system centred on the following bodies:

  • The AI Office, which plays a crucial role in developing the Union's expertise and capabilities in artificial intelligence[5]. Its functions include monitoring the application of the Regulation and providing technical support to Member States.
  • The AI Council, composed of one representative from each Member State[6]. It provides advice and assistance to the Commission and Member States, promoting consistent interpretation and application of the Regulation.
  • The advisory forum, which provides technical expertise to the AI Council, with a composition reflecting the diversity of stakeholders, including representatives of industry, startups, SMEs and academia[7].
  • The scientific panel of independent experts, which supports the AI Office, in particular in monitoring general-purpose AI models[8]. Member States may also draw on its advice when implementing the Regulation.

Member States are then responsible for designating the national authorities charged with implementing and applying the Regulation.

Each Member State must designate at least:

  • a notifying authority.
  • a market surveillance authority.

These authorities play a key role in overseeing compliance with the Regulation, providing advice and supporting businesses, with a particular focus on SMEs and startups, in line with guidance from the AI Council and the European Commission.

Conclusion

Whether you are an AI technology provider, developer or deployer, the AI Act has the potential to significantly change the way businesses operate within the EU and globally. Leaders should take the time to understand how these rules may affect them and what strategies they can adopt to remain compliant with their obligations as the new regulation takes effect.

_______________________________________________________________________________

[1] For further information on the agreement, see the link: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

[2] For further information see the link https://digital-strategy.ec.europa.eu/en/policies/ai-office

[3] See: Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data

[4] Cf. Article 16, AI Act (obligations of providers of high-risk AI systems)

[5] Cf. Article 64, AI Act

[6] Cf. Article 66, AI Act

[7] Cf. Article 67, AI Act

[8] Cf. Article 68, AI Act