The Council of the European Union and the European Parliament have reached a provisional agreement on the EU’s Artificial Intelligence Act (AIA). The purpose of the AIA is to encourage investment in artificial intelligence (AI) and foster innovation in the space, while ensuring that “high-risk” AI products are subject to strict rules and regulations. The AIA is set to become the first major AI legislation in the world, and, as with the GDPR nearly a decade ago, the EU is once again at the forefront of technology regulation.
Like the GDPR, the AIA will impose obligations on entities that provide (create/produce) and/or deploy (use) AI systems in the EU, regardless of an entity’s location. This means that if you are a U.S.-based company that plans to market an AI product in the EU and that product interacts with EU citizens, or you are a U.S.-based company that deploys an AI product in the EU, you can be subject to the AIA.
In addition to foundational AI systems (see below), the AIA uses a tiered system of regulation and is primarily focused on AI systems that pose a high or “unacceptable” risk to individuals. Most everyday AI products, including AI-enabled video games and spam filters, which do not interact with individuals or use their behavior for training purposes, will likely not be affected. The tiers can best be summarized as follows:
- No or limited risk: subject to relatively minor transparency obligations.
- High risk: subject to strict regulations.
- Unacceptable risk: outright prohibited.
The AIA would also affect foundational AI systems, specifically large systems such as ChatGPT that can produce images, text, computer code, or video, or that are capable of conversational interaction. Under the AIA, these systems would be subject to transparency requirements, and those requirements become stricter for “high impact foundational AI systems.” Companies may be required to, among other things: 1) demonstrate that they have analyzed and mitigated the potential risks posed by these products; 2) ensure that a product meets certain levels of operability, safety, and predictability; and 3) prepare and deploy comprehensive instructions for use to downstream providers to maintain compliance. Purveyors of these foundational AI systems may also be required to register their products with the European Commission.
Companies that use a prohibited application can incur a penalty of up to 7% of the offending company’s annual revenue or €35 million, whichever is greater. Companies that otherwise violate the AIA’s provisions can incur a penalty of up to 3% of revenue or €15 million, whichever is greater, and companies that provide incorrect information can incur a penalty of up to 1.5% of revenue or €7.5 million. However, these caps may be proportionally reduced for smaller businesses.
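For illustration only, the headline caps above follow a simple “percentage of annual revenue or a fixed euro amount, whichever is greater” formula. The minimal sketch below assumes the figures as reported here; the tier labels and function name are our own hypothetical shorthand, not terms defined in the Act, and the proportional reduction available to smaller businesses is not modeled.

```python
# Illustrative sketch of the AIA's headline penalty caps as described above:
# each tier is capped at a percentage of annual revenue or a fixed euro
# amount, whichever is greater. Tier names are hypothetical labels, not
# terms defined in the Act; the SME reduction is intentionally omitted.

def aia_penalty_cap(tier: str, annual_revenue_eur: float) -> float:
    """Return the maximum fine for a given violation tier."""
    tiers = {
        "prohibited_application": (0.07, 35_000_000),    # up to 7% or €35M
        "other_violation":        (0.03, 15_000_000),    # up to 3% or €15M
        "incorrect_information":  (0.015, 7_500_000),    # up to 1.5% or €7.5M
    }
    pct, fixed = tiers[tier]
    return max(pct * annual_revenue_eur, fixed)

# Example: a company with €1 billion in annual revenue that uses a
# prohibited application faces a cap of €70M, since 7% of revenue
# exceeds the €35M floor.
print(aia_penalty_cap("prohibited_application", 1_000_000_000))  # 70000000.0
```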
Once approved by the member states, these regulations could go into effect as early as 2025. While that provides some time, and these regulations are not yet set in stone, they offer significant insight into the final rules by which our industry will almost assuredly have to abide. Therefore, it is important to consult with legal counsel to ensure that your company’s use of AI products and services is ready to meet the moment.