AI Compliance - EU AI Act requirements

Legal support for AI Act compliance

The EU Artificial Intelligence Act (AI Act) is transforming how organisations in Europe develop, deploy and govern AI.

Many organisations are now asking:

  • What does the EU AI Act require in practice?

  • How do we prepare for the new AI rules in Europe?

  • What are the risks of using AI systems — and how do we mitigate them?

Responsible AI use reduces exposure to enforcement risk (administrative fines of up to €35 million or 7% of global annual turnover, whichever is higher) and strengthens trust. For many industries, AI compliance is rapidly becoming standard in tenders, research collaborations and commercial agreements.

Regulyn provides clear, senior-level legal and governance support backed by deep experience in EU-regulated research, healthcare and technology environments where AI compliance requirements are already operational.

Request Advisory Support

What the EU AI Act Requires

The AI Act introduces obligations for all organisations placing AI systems on the EU market or using them in Europe, with strict requirements for transparency, accountability, risk management and documentation.

Organisations must already ensure that:

  • prohibited AI systems are not used

  • staff receive adequate AI literacy and risk awareness training

  • obligations for general-purpose AI models (GPAI) are followed

From 2026 onwards, additional requirements will apply to high-risk AI systems, including risk management, fundamental rights impact assessments (FRIA) and human oversight.

Early preparation is essential to avoid rushed, costly compliance later. Read more about EU AI Act requirements.

How to Comply with the AI Act in Practice?

A strong starting point is to map your current AI use, establish an AI policy, and provide AI literacy training for staff. Early steps should also include defining organisational AI roles, reviewing AI-related contracts and establishing documentation practices aligned with upcoming audits.
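
For illustration, mapping AI use can start as a simple structured inventory. The sketch below is a minimal Python example; every field name is our own illustrative assumption, not a format prescribed by the AI Act:

    # Minimal sketch of an internal AI-use inventory entry (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        name: str        # e.g. "CV screening assistant"
        purpose: str     # what the system is used for
        role: str        # your role under the Act: "provider", "deployer", ...
        risk_level: str  # "unacceptable", "high", "limited" or "minimal"
        owner: str       # person or team responsible for governance

    inventory = [
        AISystemRecord(
            name="Customer service chatbot",
            purpose="First-line customer support",
            role="deployer",
            risk_level="limited",
            owner="Customer Operations",
        ),
    ]

Even a record this simple captures what is needed to prioritise systems by risk and assign governance responsibility.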

We offer clear and expert support across all AI Act compliance requirements.

Explore our AI Act compliance services below and see also our Research Compliance and Agreement Advisory services.

Our AI Compliance Services

  • AI Governance and AI Policy

    We advise organisations on AI regulatory compliance and governance.

    Examples of our support include:

    - an AI policy tailored to your organisation’s context

    - a comprehensive AI governance framework

    - structured readiness support for meeting EU AI Act obligations

  • AI Training - AI Literacy

    We deliver tailored AI training programmes for leadership and employees. Ensuring adequate AI literacy is a statutory requirement under the EU AI Act.

    Examples of training services:

    - Leadership responsibilities under the EU AI Act (1-hour session)

    - AI literacy training for staff

    - training materials and slide decks for internal use

  • Regulatory Documentation

    We support organisations in preparing the mandatory documentation required under the EU AI Act.

    Examples of our documentation support:

    - Fundamental Rights Impact Assessment (FRIA) for AI systems

    - documentation templates for AI system deployment

  • Specialised AI Advisory Support

    We provide case-specific advisory support on AI governance, risk classification and the contractual and ethical considerations related to AI use.

    Examples of our support include:

    - AI system risk-level assessments and role analysis

    - guidance on AI use in scientific research or healthcare

    - alignment with international AI standards and regulatory requirements

Frequently asked questions (FAQ)

  • What is the EU AI Act?

    The EU Artificial Intelligence Act, often referred to as the “AI Act”, is a new regulation that applies to all AI systems placed on the EU market or used within the EU.

    The AI Act introduces a wide range of obligations. The scope of these requirements depends on the organisation’s role (such as provider, deployer or distributor) and the system’s risk level. Non-compliance can result in administrative fines of up to €35 million.

  • What documentation does the EU AI Act require?

    The EU AI Act requires organisations to maintain documentation demonstrating the safety, transparency and compliance of their AI systems.

    In practice, at minimum, this means:

    • an AI system directory listing system name, risk level and other key details

    • documentation of safety and governance processes

    • transparency documentation

    • AI training materials or a log of training sessions held (AI literacy)

    • an AI appendix for agreements, where relevant

    For high-risk AI, the documentation requirements are extensive, especially for providers. The key documentation includes technical documentation, risk management files and FRIA. Additionally, a conformity assessment and post-market monitoring plan may be required.

    For general-purpose AI models (GPAI) there are additional requirements. These include documentation covering models, training data, testing, risk mitigation, user guidance and monitoring.

    More comprehensive guidance on documentation is available by requesting advisory support or visiting Regulyn Knowledge Center.

  • What are the risk categories under the EU AI Act?

    The EU AI Act classifies AI systems into four risk categories.

    Unacceptable Risk

    AI systems that threaten fundamental rights are prohibited and may not be placed on the market or used in the EU.
    Examples: AI used for social scoring or exploiting vulnerabilities of specific groups.

    High Risk

    High-risk AI systems must meet strict requirements, including security, governance, transparency and documentation measures. Many AI use cases in healthcare, education, critical infrastructure and human resources fall into this category.
    Examples: AI used for cancer detection, or systems used to assess job applicants.

    Limited Risk

    Limited-risk AI systems are subject to transparency obligations. Users must be informed when they are interacting with an AI system.
    Example: AI-assisted customer service chatbots
    (Note: the public sector faces additional restrictions.)

    Minimal Risk

    Minimal-risk AI systems are not subject to specific obligations under the AI Act.
    Example: basic spam filtering.
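
    For illustration only (the Act itself defines these categories in legal terms), the four levels can be encoded in an internal register so that systems can be sorted and screened. The names in this Python sketch are our own assumptions, not anything prescribed by the Act:

        # Illustrative encoding of the four AI Act risk categories (sketch only).
        from enum import IntEnum

        class RiskLevel(IntEnum):
            MINIMAL = 0       # no specific AI Act obligations
            LIMITED = 1       # transparency obligations
            HIGH = 2          # strict requirements from 2026 onwards
            UNACCEPTABLE = 3  # prohibited: may not be placed on the market or used

        def is_permitted(level: RiskLevel) -> bool:
            # A system at unacceptable risk may not be used at all.
            return level is not RiskLevel.UNACCEPTABLE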

  • When does the EU AI Act apply?

    Now.

    Some obligations, such as the obligation to ensure adequate AI literacy and the prohibitions on certain AI use cases, are already applicable.

    The provisions governing high-risk AI systems will become applicable next, with full applicability expected in 2026.

  • How do we get started with AI Act compliance?

    Everything starts with a clear overview of how AI is currently used in your organisation. Begin by mapping your existing AI use cases. This allows you to prioritise systems based on their risk level.

    A practical way to get started is to build on processes you already have in place, such as data protection risk assessments. It is also important to appoint a responsible person or team for AI governance.

    In short: map your AI use, prioritise by risk and establish a governance process.
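
    Purely as an illustration (the system names and risk labels below are invented), the "prioritise by risk" step can start as a simple sort of your mapped inventory:

        # Sketch: review mapped AI systems in order of risk (illustrative only).
        RISK_ORDER = {"unacceptable": 0, "high": 1, "limited": 2, "minimal": 3}

        systems = [
            ("Spam filter", "minimal"),
            ("CV screening assistant", "high"),
            ("Customer service chatbot", "limited"),
        ]

        # Highest-risk systems come first and get reviewed first.
        for name, level in sorted(systems, key=lambda s: RISK_ORDER[s[1]]):
            print(f"{level:>12}: {name}")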

    Tip: If you need clarity or more detailed guidance on AI governance, contact a Regulyn expert.

“We received clear, practical guidance for developing an AI policy that fully meets legal requirements.”

— Chief Technology Officer, Research Sector

*Confidential Reference