AI Act Readiness Checklist
Thank you for your interest. Here is the download link for your free, practical AI Act Readiness Checklist.
Frequently asked questions (FAQ)
-
The EU Artificial Intelligence Act, commonly referred to as the “AI Act”, is a new regulation that applies to all AI systems placed on the EU market or used within the EU.
The AI Act introduces a wide range of obligations. The scope of these requirements depends on the organisation’s role (such as provider, deployer or distributor) and the system’s risk level. Non-compliance can result in administrative fines of up to €35 million or 7% of global annual turnover, whichever is higher.
-
The EU AI Act requires organisations to maintain documentation proving the safety, transparency and compliance of their AI systems.
In practice, this means at a minimum:
an AI system directory listing system name, risk level and other key details;
documentation of safety and governance processes;
transparency documentation;
AI training material or a log of trainings held (AI literacy);
an AI appendix for agreements, where relevant.
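As an illustration, the minimum items above could be tracked in a simple internal directory. The sketch below uses assumed field names chosen for this example; it is not a template mandated by the AI Act.

```python
from dataclasses import dataclass

# Illustrative only: these fields are assumptions, not a regulatory schema.
@dataclass
class AISystemEntry:
    name: str        # system name
    purpose: str     # intended use within the organisation
    role: str        # e.g. "provider", "deployer", "distributor"
    risk_level: str  # e.g. "unacceptable", "high", "limited", "minimal"
    owner: str       # responsible person or team

# Example directory with two hypothetical systems
directory = [
    AISystemEntry("Support chatbot", "customer service", "deployer", "limited", "CX team"),
    AISystemEntry("CV screening tool", "recruitment", "deployer", "high", "HR team"),
]

# A directory like this makes it easy to pull out the systems needing attention first
high_risk = [s.name for s in directory if s.risk_level == "high"]
print(high_risk)
```

Even a spreadsheet with the same columns serves the purpose; the point is a single, maintained overview.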
For high-risk AI, the documentation requirements are extensive, especially for providers. The key documentation includes technical documentation, risk management files and a fundamental rights impact assessment (FRIA). Additionally, a conformity assessment and post-market monitoring plan may be required.
For general-purpose AI models (GPAI), there are additional requirements. These include documentation covering the model, training data, testing, risk mitigation, user guidance and monitoring.
More comprehensive guidance on documentation is available by requesting advisory support or visiting the Regulyn Knowledge Center.
-
The EU AI Act classifies AI systems into four risk categories.
Unacceptable Risk
AI systems that threaten fundamental rights are prohibited and cannot be developed or used in the EU.
Examples: AI used for social scoring or exploiting the vulnerabilities of specific groups.
High Risk
High-risk AI systems must meet strict requirements, including security, governance, transparency and documentation measures. Many AI use cases in healthcare, education, critical infrastructure and human resources fall into this category.
Examples: AI used for cancer detection, or systems used to assess job applicants.
Limited Risk
Limited-risk AI systems are subject to transparency obligations. Users must be informed when they are interacting with an AI system.
Example: AI-assisted customer service chatbots
(Note: the public sector faces additional restrictions.)
Minimal Risk
Minimal-risk AI systems are not subject to specific obligations under the AI Act.
Example: basic spam filtering.
-
Now. The AI Act entered into force on 1 August 2024, and its obligations apply in stages.
Some obligations, such as the obligation to ensure adequate AI literacy and the prohibitions on certain AI use cases, are already applicable.
The provisions governing high-risk AI systems will apply next, with full applicability expected in 2026.
-
Everything starts with a clear overview of how AI is currently used in your organisation. Begin by mapping your existing AI use cases. This allows you to prioritise systems based on their risk level.
A practical way to get started is to build on processes you already have in place, such as data protection risk assessments. It is also important to appoint a responsible person or team for AI governance.
In short: map your AI use, prioritise by risk and establish a governance process.
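The prioritisation step above can be sketched in a few lines: once use cases are mapped, order them so the highest-risk systems are addressed first. The risk ordering and example systems below are assumptions for illustration.

```python
# Assumed ordering: lower number = higher compliance priority
RISK_ORDER = {"unacceptable": 0, "high": 1, "limited": 2, "minimal": 3}

# Hypothetical mapped use cases
use_cases = [
    {"name": "Spam filter", "risk": "minimal"},
    {"name": "Applicant scoring", "risk": "high"},
    {"name": "Website chatbot", "risk": "limited"},
]

# Sort so high-risk systems come first in the compliance workplan
prioritised = sorted(use_cases, key=lambda uc: RISK_ORDER[uc["risk"]])
for uc in prioritised:
    print(uc["name"], "-", uc["risk"])
```

The same ordering can be applied in whatever tool already holds your data protection risk register, so AI governance builds on existing processes rather than starting from scratch.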
Tip: If you need clarity or more detailed guidance on AI governance, contact a Regulyn expert.