The EU AI Act – One Step Closer to Unified AI Regulation

The Purpose 

The European Parliament and Council negotiators have finally reached a provisional agreement on the Artificial Intelligence Act. The regulation, the first comprehensive one in the fast-evolving AI industry, aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability, while enabling innovation and making Europe a leader in the AI field. The law itself is not a world first: China’s new rules for generative AI, for example, went into effect in August 2023. The EU AI Act is, however, arguably the most sweeping and influential rulebook of its kind for the technology. The rules it establishes are tiered according to an AI system’s potential risks and level of impact.

The agreement can be separated into the following main categories: 

Banned applications of AI: 

The aim of banning the following use cases is to reduce harm in areas where using AI poses the biggest risk to fundamental rights, such as healthcare, education, border surveillance, and public services: 

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, race);  
  • untargeted scraping of facial images or video footage to create facial recognition databases. A prominent example of this is Clearview AI, an all-in-one facial recognition platform designed to find individuals by matching faces to a database of more than 20 billion images collected from the Internet; 
  • emotion recognition in the workplace and educational institutions; 
  • social scoring based on social behaviour or personal characteristics; 
  • AI systems that manipulate human behaviour to circumvent human free will; 
  • AI used to exploit the vulnerabilities of people (due to age, disability, social or economic situation). 

There are, of course, exemptions for law enforcement regarding the aforementioned use cases. Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and limited to strictly defined criminal offences. “Post” (after-the-fact) remote biometric identification may be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime. Real-time remote biometric identification must comply with strict conditions, and its use is limited in time and location to: targeted searches for victims (abduction, trafficking); the prevention of a specific and present terrorist threat; or the localisation or identification of a person suspected of having committed one of the specific crimes listed in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery). Nor does the AI Act generally apply to AI systems developed exclusively for military and defence uses.

High-risk systems

For AI systems classified as high-risk (due to their significant potential for harm to health, safety, fundamental rights, the environment, democracy, and the rule of law), clear obligations were agreed upon. High-risk AI systems will have to adhere to strict rules requiring risk-mitigation systems, high-quality data sets, better documentation and transparency, and human oversight.

High-risk systems are divided into two subcategories:

Firstly, AI systems used in products falling under the EU’s product safety legislation, including toys, aviation, cars, medical devices, and lifts (mostly automated decision-making systems that can affect people’s lives directly). 

Secondly, AI systems falling into eight areas that need to be registered in an EU database: 

  • Biometric identification and categorisation of natural persons; 
  • Management and operation of critical infrastructure; 
  • Education and vocational training; 
  • Employment, worker management, and access to self-employment; 
  • Access to and enjoyment of essential private services and public services and benefits; 
  • Law enforcement; 
  • Migration, asylum, and border control management; 
  • Assistance in legal interpretation and application of the law. 

High-risk systems must also be assessed before they are placed on the market and throughout their lifecycle. This applies, for example, to the insurance and banking sectors, AI-controlled vehicles, and AI systems used to influence the outcome of elections and voter behaviour.

Limited-risk systems

The third category is limited-risk AI systems, which will carry specific transparency obligations concerning the information users must be made aware of. The keyword for the limited-risk category is therefore “transparency”.

Limited-risk AI systems include chatbots and image-, audio-, and video-generating AI. To account for the wide range of tasks AI systems can accomplish and the rapid expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements and inform users that they are interacting with an AI system, so that users can decide whether they wish to continue using it. Generative AI models, such as ChatGPT, must also be designed and trained to prevent the generation of illegal content, and their makers must publish summaries of the copyrighted data used for training.

Measures designed to make it easier to protect copyright holders from generative AI were also included. More specifically, there is an obligation to “design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law in line with the generally-acknowledged state of the art, and without prejudice to fundamental rights, including the freedom of expression”. In addition, providers of foundation models “shall assist the downstream providers of such AI systems in putting in place the adequate safeguards referred to in this paragraph.” Although these requirements are not specific to copyright, they would also seem to apply to the moderation of outputs of generative AI systems that infringe copyright.

For high-impact GPAI models with systemic risk, European Parliament negotiators managed to secure more stringent obligations. If these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation. 

Audio- and video-altering programs as such will not be regulated, as in the EU’s view they do not pose a high enough risk.

Other obligations 

In summary, the AI Act will require foundation models (e.g. GPT-4) and AI systems built on top of them to draw up better documentation, comply with EU copyright law, and share more information about the data the model was trained on. For the most powerful models, there are extra requirements: for example, tech companies will have to disclose how secure and energy-efficient their AI models are.

The regulation imposes legally binding rules requiring tech companies to notify people when they are interacting with a chatbot or with biometric categorisation or emotion recognition systems. It will also require them to label deepfakes and AI-generated content, and to design systems in such a way that AI-generated media can be detected. This goes a step beyond the voluntary commitments that leading AI companies have made, such as developing AI provenance tools (e.g. watermarking).
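The Act does not prescribe a specific technical mechanism for making AI-generated media detectable. As a minimal sketch of the general idea only, the Python example below embeds a machine-readable provenance flag into an image’s metadata; the “ai_generated” key and the helper functions are hypothetical conventions of our own, not anything mandated by the Act, and Pillow is assumed to be installed.

```python
# Minimal sketch of machine-readable provenance labelling, NOT a
# mechanism prescribed by the AI Act. The "ai_generated" key is a
# hypothetical convention of our own.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Write a provenance flag into a PNG's text metadata."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # the flag detectors would look for
    meta.add_text("generator", generator)   # e.g. the model that produced the image
    image.save(dst_path, pnginfo=meta)

def is_tagged_ai_generated(path: str) -> bool:
    """Check whether the provenance flag is present and set."""
    return Image.open(path).info.get("ai_generated") == "true"
```

Plain metadata of this kind is trivially stripped, which is why production-grade provenance schemes (e.g. cryptographically signed C2PA content credentials or invisible watermarks) are considerably more robust.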

The act will also require all organisations that offer essential services, such as insurance and banking, to conduct an impact assessment on how using AI systems will affect people’s fundamental rights.  

Enforcement 

The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement. This will be the first body in the world to enforce binding rules on AI, and the EU hopes it will help the bloc become the world’s go-to tech regulator. The AI Act’s governance mechanism also includes a panel of independent experts to offer guidance on the systemic risks AI poses and on how to classify and test models.

The AI Board, comprising member states’ representatives, will remain a coordination platform and an advisory body to the Commission, and will give Member States an important role in the implementation of the regulation, including the design of codes of practice for foundation models. Finally, an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.

Europe will also become one of the first places in the world where citizens will be able to lodge complaints about AI systems and receive meaningful explanations of how those systems reached the conclusions that affect them.

Non-compliance with the rules can lead to fines ranging from €35 million or 7% of global annual turnover down to €7.5 million or 1.5% of turnover, depending on the severity of the offence and the size of the non-complying company.
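As a rough illustration of how such caps work, the sketch below computes the maximum fine for a company under the commonly reported reading that the applicable ceiling is the higher of the fixed amount and the turnover share (with more lenient treatment foreseen for SMEs and start-ups). The function and figures are our own illustration, not text from the Act.

```python
# Illustrative sketch of the AI Act's fine ceilings, assuming the
# reported "fixed amount or share of global annual turnover,
# whichever is higher" reading; SMEs are expected to face lower caps.
def fine_cap(global_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper bound of the fine for one violation tier."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# A company with 2 billion euros in global annual turnover:
print(fine_cap(2_000_000_000, 35_000_000, 0.07))   # 140000000.0 - most serious tier
print(fine_cap(2_000_000_000, 7_500_000, 0.015))   # 30000000.0  - least serious tier
```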

The global scale 

By becoming the first to formalise rules around AI, the EU retains its first-mover advantage. Much like the GDPR, the AI Act could become something of a global standard. An important aspect in this regard is US-EU regulatory alignment on AI. AI companies commonly share a client base across both the EU and the US and must therefore comply with both jurisdictions’ laws, as it would not be economically practical to split their product offerings and the respective features. A similar alignment in technological standards and mutual regulation between the US and the EU has already happened in the car industry, for example.

Notably, among the EU’s recent forays into technology legislation, the AI Act has received generally positive feedback from the leaders of the industry itself. This may be partly because the absence of regulation can hand a competitive advantage in AI development to startups that are not bound by the same moral compass as the industry leaders.

One of the questions arising from this is whether the Act will put Europe at a significant disadvantage in terms of AI innovation. To avoid that, the EU is focusing on regulating the use of AI rather than the development of the technology itself. This helps mitigate abuses of AI regardless of where the company developing it is based and how exactly it is developed, and it should avoid putting AI innovation within the EU at a competitive disadvantage.

The next step 

European regulators’ slow response to the emergence of social media can be taken as a learning opportunity – almost 20 years elapsed between Facebook’s launch and the entry into force of the Digital Services Act. In that time, the EU was also forced to deal with issues created by US platforms, while struggling to foster smaller European challengers. AI technology is moving fast, and this time, regulation is too. 

The AI Act is a binding legislative act that must be applied in its entirety across the EU (as opposed to a directive, which sets out a goal that EU countries must achieve while leaving it up to the individual countries to devise their own laws on how to reach it).

In April 2021, the European Commission proposed the first regulatory framework for AI. The European Parliament approved its version of the Act in June 2023. The next step was for the European Council, which represents the member states, to agree on the final form of the law, which came to fruition on 9 December 2023. The agreed text will now have to be formally adopted by both Parliament and Council to become EU law; before formal adoption, it must be confirmed by both institutions and undergo legal-linguistic revision. The provisional agreement states that the Act should apply two years after its entry into force, with some provisions taking effect later. As work remains to finalise the details of the new regulation, the Act is likely to come into effect in 2026.
