On May 21, 2024, the Council of the European Union formally adopted the Artificial Intelligence Act (AI Act). Businesses now need to adapt to ensure they are compliant. Here is a look at what you can do today to start your compliance journey.
Introduction
On May 21, 2024, the Council of the European Union formally adopted the Artificial Intelligence Act (AI Act), which was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024.
The purpose of this regulation is to provide a framework for AI systems and promote the trustworthy, human-centered use of AI while ensuring high levels of protection for health, safety, and fundamental rights. Like the GDPR, the AI Act will drastically impact businesses across all sectors.
To achieve this goal, the regulation takes a risk-based approach: it defines four levels of risk, with obligations for the different actors involved at each level.
- The first level concerns “unacceptable risk”: AI systems in this category are prohibited outright. This includes, for example, social scoring systems and manipulative AI that uses subliminal techniques.
- The second level, high-risk AI systems, is where the bulk of the regulation focuses. Examples include emotion recognition, AI systems used in human resources for recruitment or employee monitoring, and AI systems used to grant credit or assess individuals’ creditworthiness.
The regulation sets out both the rules for classifying high-risk AI systems (Article 6) and the obligations of the parties involved. Whether you are a provider (the developer of a high-risk AI system) or a deployer (the user of a solution developed by a third party), you must adopt a very rigorous approach throughout the lifecycle of the AI system. This includes setting up a risk management system, ensuring data governance, and maintaining technical documentation.
Another part of the regulation covers limited-risk AI systems, which are subject to transparency obligations. For example, whenever an AI system such as a chatbot interacts with an end user, you must provide the relevant information required by the regulation.
Finally, all other AI systems that do not fall into the previous categories are not subject to specific obligations. Implementing “best practices” for these systems remains possible and encouraged, but it is not mandatory.
In addition, general-purpose and generative AI systems such as ChatGPT, which can serve a wide range of needs whether used directly or integrated into other AI systems, are subject to specific requirements of their own.
Most requirements must be met by all businesses within two years of entry into force, that is, by August 2, 2026. Here are a few tips to help you start preparing for compliance:
- Educate your teams, particularly on the risks associated with AI and the new regulatory framework.
- Map out all your AI systems, including their purposes, risk levels, and your company’s role in relation to each system (a minimal inventory sketch is shown after this list).
- Establish governance within your company to manage the mapping and risks.
- Prepare for periodic checks to ensure compliance with the AI Act requirements.
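To make the mapping step more concrete, here is a minimal sketch of what an internal AI system inventory could look like in Python. It is only an illustration: the RiskLevel and Role categories loosely mirror the risk tiers and the provider/deployer roles described above, and every class name, field, and example entry is a hypothetical assumption rather than a structure or terminology mandated by the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Simplified labels for the risk tiers described above (illustrative only)."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # classification rules of Article 6 apply
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations


class Role(Enum):
    """Your company's role in relation to the system (illustrative only)."""
    PROVIDER = "provider"   # you develop the system
    DEPLOYER = "deployer"   # you use a solution developed by a third party


@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI system inventory."""
    name: str
    purpose: str
    risk_level: RiskLevel
    role: Role
    obligations: list[str] = field(default_factory=list)


# Hypothetical example entries
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        purpose="Rank job applications for recruiters",
        risk_level=RiskLevel.HIGH,
        role=Role.DEPLOYER,
        obligations=["risk management", "data governance", "technical documentation"],
    ),
    AISystemRecord(
        name="Customer support chatbot",
        purpose="Answer routine customer questions",
        risk_level=RiskLevel.LIMITED,
        role=Role.DEPLOYER,
        obligations=["inform users they are interacting with an AI system"],
    ),
]

# A simple periodic check: flag high-risk systems for closer review
for record in inventory:
    if record.risk_level is RiskLevel.HIGH:
        print(f"Review required: {record.name} ({record.role.value})")
```

Even a simple register like this makes it easier to assign governance responsibilities and to schedule the periodic checks mentioned above.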