EU to Enact Comprehensive AI Regulations

The Council of the European Union and the European Parliament have reached a provisional agreement on the EU’s Artificial Intelligence Act (AIA). The purpose of the AIA is to encourage investment in artificial intelligence (AI) and foster innovation in the space, while ensuring that “high-risk” AI products are subject to strict rules and regulations. The AIA is the world’s first major piece of AI legislation, and, as with the GDPR nearly a decade ago, the EU is once again at the forefront of technology regulation.

By: Adam Gertz
December 12, 2023

Geographical Scope of the AIA

Like the GDPR, the AIA will impose regulations on entities that provide (create or produce) and/or deploy (use) AI systems in the EU, regardless of where those entities are located. This means that a U.S.-based company can be subject to the AIA if it markets an AI product in the EU that interacts with EU citizens, or if it deploys an AI product in the EU.

AI and Risk

In addition to governing foundational AI systems (see below), the AIA uses a tiered system of regulation that is primarily focused on AI systems posing a high or “unacceptable” risk to individuals. AI products that pose no risk, or only a limited risk, to individuals will be subject to relatively minor transparency obligations. AI products that pose a high risk to individuals will be subject to strict regulations, while AI products that pose an unacceptable risk will be prohibited outright. Most everyday AI products, such as AI-enabled video games or spam filters, which do not interact with individuals or use their behavior for training purposes, will likely not be affected. The remaining tiers can be summarized as follows:

  • Limited Risk: These include AI products that interact with humans but do not necessarily pose a risk to an individual’s health, safety, or fundamental rights. The most obvious example is an AI chatbot or operator. Under the AIA, companies would be required to disclose to users that they are interacting with AI.
  • High Risk: High-risk AI systems include AI products related to 1) critical infrastructure (such as transportation); 2) education and vocational training (e.g., the scoring of exams); 3) product safety components; 4) employment management (e.g., resume review); 5) credit scoring and other financial services; 6) law enforcement; 7) immigration and border control management; and 8) the administration of justice (e.g., AI-based adjudication of disputes). The key takeaway is that “high-risk” AI products are not limited to particular industries, law enforcement, or macro-level infrastructure. For example, if your company uses AI to sort resumes and you have European employees, you may be subject to the AIA. A company that provides or deploys “high-risk” AI products would be required to maintain strict human oversight of its systems, conduct security impact assessments (not dissimilar to data privacy impact assessments), and maintain strong data security practices.
  • Unacceptable Risk: The following AI products are deemed to pose an unacceptable risk to individual rights, health, and/or safety and will therefore be prohibited, except in limited circumstances generally related to vital law enforcement and public safety purposes: 1) AI products that engage in cognitive behavioral manipulation; 2) AI products that perform public facial recognition or mass scraping of facial images from the Internet or CCTV footage; 3) AI products that perform emotion recognition in the workplace or educational institutions, or for law enforcement or border protection purposes; 4) AI products that provide social scoring; 5) AI products that use biometric categorization to infer sensitive data such as sexual orientation or religious beliefs; and 6) predictive policing (collectively, “Prohibited Applications”).

Requirements for Foundational AI

The AIA would also affect foundational AI systems, specifically large systems such as ChatGPT that can produce images, text, computer code, or video, or that are capable of lateral conversation. Under the AIA, these systems would be subject to transparency requirements, and those requirements become stricter for “high impact” foundational AI systems. Companies may be required to, among other things, 1) demonstrate that they have analyzed and mitigated the potential risks posed by these products; 2) ensure that a product meets certain levels of operability, safety, and predictability; and 3) prepare and deploy comprehensive instructions for use to downstream providers so that they can maintain compliance. Purveyors of these foundational AI systems may also be required to register their products with the European Commission.

Penalties for Non-Compliance

Companies that use a Prohibited Application can incur a penalty of up to 7% of the offending company’s global annual revenue or €35 million, whichever is greater. For example, a company with €1 billion in annual revenue that deploys a Prohibited Application could face a fine of up to €70 million, since 7% of its revenue exceeds the €35 million floor. Companies that otherwise violate the AIA’s provisions can incur a penalty of up to 3% of revenue or €15 million, whichever is greater, and companies that provide incorrect information can incur a penalty of up to 1.5% of revenue or €7.5 million. However, these caps may be proportionally reduced for smaller businesses.

Next Steps

Once approved by the member states, these regulations could go into effect as early as 2025. While that provides some lead time, and these regulations are not yet set in stone, they offer great insight into what the final rules will almost assuredly look like. It is therefore important to consult with legal counsel to ensure that your company’s use of AI products and services is ready to meet the moment.