What Are the Risk Levels in the AI Act?

The AI Act categorises AI systems into four risk levels, each with distinct regulatory requirements:

Unacceptable risk AI systems are outright banned. This includes technologies like social scoring and manipulative AI, which pose significant threats to rights and freedoms.

High-risk AI systems, which are the primary focus of the Act, face stringent regulations. Providers must implement comprehensive risk management throughout the AI system's lifecycle, ensure data governance with relevant and accurate datasets, and maintain thorough technical documentation for compliance checks. They also need to design their systems to support human oversight, record-keeping, accuracy, and cybersecurity. Additionally, they must provide clear usage instructions to downstream users and establish a quality management system.

Limited risk AI systems face lighter requirements, focused mainly on transparency. Providers must inform users when they are interacting with an AI system, such as a chatbot, and disclose when content has been artificially generated or manipulated, as with deepfakes.

Minimal risk AI systems, a category covering many everyday applications such as AI-powered video games and spam filters, are largely unregulated. However, this landscape is evolving, especially with the rise of generative AI.
