EU AI Act risk categories explained

1 Introduction

The EU Artificial Intelligence Act (“EU AI Act”) is the first comprehensive legal framework governing AI systems. It takes a risk-based approach, classifying AI systems into four categories:

  • Prohibited AI: Practices that pose unacceptable risks, such as social scoring or manipulative systems, are not permitted in the EU.

  • High-Risk AI: Systems used in areas such as employment, education, critical infrastructure, and law enforcement must meet strict requirements on risk management, data quality, documentation, transparency, and human oversight.

  • Limited-Risk AI: Systems that interact with users or generate content must meet transparency obligations.

  • Minimal-Risk AI: Simple, low-risk systems that face no specific obligations under the Act.

This tiered approach ensures that the regulatory burden matches the potential harm an AI system could cause. This article summarises the AI Act risk levels and key obligations, with real-life examples of each.

2 AI Act Risk Levels

2.1 Unacceptable Risk - Prohibited

Certain AI practices pose such unacceptable risks that they are banned under the EU AI Act. These prohibited AI systems cannot be placed on the market, put into service, or used anywhere within the European Union.

Prohibited practices (i.e. prohibited AI systems) include:

  • Subliminal manipulation

  • Exploitation of vulnerabilities (e.g. age, disability, socio-economic situation)

  • Social scoring (by public and private actors)

  • Real-time remote biometric identification in public spaces for law enforcement (narrow exceptions exist)

  • Biometric categorisation using sensitive attributes

  • Emotion recognition in the workplace and educational institutions (limited exceptions, e.g. for medical or safety reasons)

  • Indiscriminate scraping of facial images to build facial recognition databases

Key compliance requirements:

  • Organisations* must ensure these systems are not placed on the market, put into service, or used in the EU. The prohibitions apply from 2 February 2025.

Real-world examples of prohibited AI:

Workplace Emotion Monitoring: A company cannot deploy AI systems to assess employee motivation, engagement, or dissatisfaction through emotion recognition technology.

Facial Image Scraping: Tech companies cannot place on the EU market AI-powered software that indiscriminately scrapes facial images from public sources.

Important notes:

  • The prohibition covers both the objective (intended purpose) and the effect of the AI system. This means that if your AI system has the effect of manipulating or exploiting people, it is prohibited even if that effect was unintended.

2.2 High-Risk AI Systems - Extensive Compliance Obligations

High-Risk AI systems are subject to extensive obligations under the EU AI Act.

What makes an AI system High-Risk?

There are two pathways to classifying an AI system as high-risk. First, an AI system is high-risk if it is intended to be used as a safety component of a product covered by EU harmonisation legislation, or is itself such a product. Second, AI systems used in the specific application areas listed in Annex III are also classified as high-risk:

  • Biometric identification

    • remote biometric identification

  • Critical infrastructure

    • management of data centres, water supply

  • Education and vocational training

    • assessment of students

  • Employment and recruitment

    • employee or applicant evaluation

  • Essential services

    • eligibility for public benefits, evaluation of creditworthiness

  • Law enforcement

    • risk of re-offending, evidence evaluation

  • Migration and border control

    • visa or asylum application evaluation

  • Administration of justice and democratic processes

    • legal research for judicial authorities, influencing elections or voting behaviour

Key compliance requirements for High-Risk AI Systems:

The EU AI Act imposes extensive compliance obligations covering risk management, documentation, technical measures, cybersecurity and monitoring. Many of the obligations are placed upon the provider of the high-risk AI system. The key obligations and their article references are listed below; a short sketch of how an organisation might track them internally follows the list.

  • Risk management system (Article 9)

  • Data governance and quality requirements (Article 10)

  • Technical documentation (Article 11 and Annex IV)

  • Automatic logging (Article 12)

  • Transparency and information to deployers (Article 13)

  • Human oversight (Article 14)

  • Accuracy, robustness, and cybersecurity (Article 15)

  • Quality management system (Article 17)

  • Documentation keeping (Article 18)

  • Automatically generated logs (Article 19)

  • Corrective actions and cooperation with authorities (Articles 20, 21)

  • Fundamental Rights Impact Assessment (Article 27, for certain organisations)

  • Conformity assessment (Article 43)

  • CE marking (Article 48)

  • Registration to EU database (Article 49)

  • Transparency (Article 50, providers and deployers of certain AI systems)

  • Post-market monitoring (Article 72)

  • Serious incident reporting (Article 73)
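
To make the checklist above easier to operationalise, the sketch below shows one way an organisation might track these obligations internally. It is a minimal illustration under stated assumptions: the dataclass, field names and status values are invented for this example, and only a subset of the articles listed above is included.

    from dataclasses import dataclass

    @dataclass
    class Obligation:
        article: str           # EU AI Act article reference
        requirement: str       # short description of the obligation
        status: str = "open"   # hypothetical internal status: open / in_progress / done

    # A subset of the provider obligations listed above
    HIGH_RISK_OBLIGATIONS = [
        Obligation("Article 9", "Risk management system"),
        Obligation("Article 10", "Data governance and quality"),
        Obligation("Article 11", "Technical documentation (Annex IV)"),
        Obligation("Article 14", "Human oversight"),
        Obligation("Article 43", "Conformity assessment"),
        Obligation("Article 49", "Registration in the EU database"),
        Obligation("Article 72", "Post-market monitoring"),
        Obligation("Article 73", "Serious incident reporting"),
    ]

    def open_items(obligations: list) -> list:
        """Return the obligations that are not yet marked as done."""
        return [o for o in obligations if o.status != "done"]

A structure like this is only a starting point; the substance of each obligation still has to be assessed against the Act itself.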

Real-world examples of High-Risk AI systems:

Public Benefits Screening: A government agency uses AI to review documents and determine applicants' eligibility for social benefit programmes.

Medical Device AI: A company develops AI software to evaluate the severity of a patient's depression or anxiety during a real-time discussion. This software simultaneously qualifies as a high-risk AI system and a medical device.

Recruitment Technology: An organisation uses AI to screen job applications, rank candidates, or predict performance.

Important notes:

  • An Annex III AI system is not considered high-risk if it only performs a narrow procedural task, improves the result of a previously completed human activity, or detects decision-making patterns without materially influencing outcomes, provided it does not replace human assessment (the de minimis exception).

  • Many of the extensive obligations are placed on the provider of the high-risk AI system.

2.3 Limited-Risk AI - Transparency Obligations

What are limited-risk AI systems?

While the EU AI Act does not explicitly define this category, it typically covers AI systems that interact with humans or generate synthetic content.

Key compliance obligations:

AI systems posing limited risks are mainly required to comply with transparency obligations; a short illustration of what these disclosures can look like in practice follows the lists below.

When AI interacts with users, organisations must ensure that:

  • users are informed that they are interacting with AI, unless it is obvious;

  • users are informed if they are subject to emotion recognition or biometric categorisation.

When AI is used for content creation, organisations must ensure:

  • AI-generated content is labelled clearly

  • deepfakes and AI-generated or manipulated text published to inform the public are disclosed as such (limited exceptions apply)
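
The sketch below gives a rough idea of what such disclosures can look like in practice: a chatbot disclosure on the first turn of a conversation and a machine-readable label on generated content. The wording, function names and label fields are invented for illustration; the EU AI Act does not prescribe a specific format.

    AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

    def chatbot_reply(model_answer: str, first_turn: bool) -> str:
        """Prepend an AI disclosure to the first reply of a conversation."""
        if first_turn:
            return f"{AI_DISCLOSURE}\n\n{model_answer}"
        return model_answer

    def label_generated_content(content: str) -> dict:
        """Attach an illustrative machine-readable label marking content as AI-generated."""
        return {
            "content": content,
            "labels": {"ai_generated": True, "generator": "example-model"},
        }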

Real-world examples of Limited-Risk AI:

Customer Service Chatbots: A company launches an AI chatbot to handle generic customer inquiries.

Government Website Assistants: A public organisation implements an AI chatbot that helps citizens navigate services and information.

Important notes:

  • Transparency obligations apply to providers (e.g. AI systems interacting with natural persons or generating content) and deployers (e.g. emotion recognition, biometric categorisation, deepfakes)

  • Transparency obligations may also arise from other EU and national legislation (e.g. the GDPR)

2.4 Minimal-Risk AI - No Specific Obligations

What are Minimal-Risk AI systems?

Minimal-risk AI systems pose little or no risk to health, safety or fundamental rights. The EU AI Act does not impose any specific requirements on them.

Key compliance obligations:

There are no specific obligations for minimal risk AI systems. Organisations may, however, voluntarily adopt codes of conduct and follow best practices for responsible AI development.

Real-world examples of minimal-risk AI:

  • Spam Filters: Email systems using AI to detect and filter spam

  • Enhanced Video Editing: Creative tools using AI for video enhancement or editing

Important notes:

  • Even without AI Act-specific obligations, minimal-risk AI systems remain subject to other applicable regulations, such as GDPR and sector-specific regulations.

3 How to Classify an AI System?

Organisations can follow a simple decision tree for a preliminary classification of their AI systems; a minimal code sketch of the same logic follows the steps below.

  1. Does the AI system fall under prohibited practices?

    ✅ Suspend its use immediately and ensure that no other prohibited AI systems are deployed or developed. Report possible non-compliance internally (e.g. to the compliance officer).

    ❌ Proceed to the next step.

  2. Is it a safety component of a product in a harmonized area (e.g. medical devices, toys, aviation, lifts)?

    ✅ The system is likely a high-risk AI system. Ensure compliance with the obligations associated with your role (provider, deployer, etc.).

    ❌ Proceed to next step.

  3. Is it used in a high-risk area mentioned in Annex III (education, HR, law enforcement)?

    ✅ The system is likely a high-risk AI system. Ensure compliance with the obligations associated with your role (provider, deployer, etc.). Double-check whether the de minimis exception applies.

    ❌ Proceed to next step.

  4. Does it interact with people or generate content?

    ✅ It is likely a limited-risk AI system. Ensure compliance with transparency obligations.

    ❌ Proceed to next step.

  5. Does it pose risks higher than “minimal” to people’s health, safety or rights?

    ✅ The AI system could be a limited-risk system.

    ❌ Likely a minimal-risk AI system. There are no EU AI Act-specific obligations, but other regulatory and ethical requirements still apply.
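
For readers who prefer to see the same logic in code, the sketch below mirrors the five questions above. It is a simplified, illustrative rendering only: the boolean inputs are hypothetical flags that an organisation would still have to determine through proper legal analysis, and the function names are invented for this example.

    def classify_ai_system(
        is_prohibited_practice: bool,
        is_safety_component_of_harmonised_product: bool,
        is_annex_iii_use_case: bool,
        annex_iii_exception_applies: bool,
        interacts_with_people_or_generates_content: bool,
        poses_more_than_minimal_risk: bool,
    ) -> str:
        """Preliminary EU AI Act risk classification (simplified sketch, not legal advice)."""
        if is_prohibited_practice:
            return "prohibited: suspend use and report internally"
        if is_safety_component_of_harmonised_product:
            return "high-risk: comply with the obligations for your role"
        if is_annex_iii_use_case and not annex_iii_exception_applies:
            return "high-risk: comply with the obligations for your role"
        if interacts_with_people_or_generates_content:
            return "limited-risk: comply with transparency obligations"
        if poses_more_than_minimal_risk:
            return "re-assess: the system could still be limited-risk"
        return "minimal-risk: no AI Act-specific obligations"

    # Example: a customer service chatbot answering generic inquiries
    print(classify_ai_system(False, False, False, False, True, False))
    # -> limited-risk: comply with transparency obligations

As with the decision tree itself, the output is only a starting point for a fuller legal assessment.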

4 Summary

The EU AI Act represents a fundamental shift in how artificial intelligence is regulated globally. Its risk-based approach provides proportionate regulation: stricter rules for systems that could cause greater harm, lighter requirements for lower-risk applications. Any organisation operating in the EU market must understand these key requirements and comply with them to avoid substantial penalties.

Organisations that proactively assess their AI systems, understand their classification, and implement the required controls will be best positioned for compliance and competitive advantage in the European market. The journey to EU AI Act compliance begins with understanding which rules apply to your specific AI systems. With proper classification and systematic implementation of the requirements, organisations can confidently develop and deploy AI while meeting their regulatory obligations.

Last updated on 02 December 2025.

This article is part of Regulyn’s Knowledge Centre and is reviewed for legal accuracy, clarity and current regulatory alignment before publication.

Author

Katri Harjuveteläinen, LL.M., CIPP/E, CIPM, FIP

Legal Counsel specialising in AI compliance, research regulation and complex agreements. Katri advises organisations on implementing the EU AI Act, high-risk AI documentation, and governance frameworks in healthcare, research, and technology sectors.

For personalized advice on EU AI Act compliance, system classification or required documentation, contact Regulyn for expert support. For further information on AI Compliance, you can also explore Regulyn’s Knowledge Centre.

* “Organisation” means the legal and natural persons, including public bodies, covered by the EU AI Act.
