EU AI Act Enforcement Begins: Compliance Checklist

The AI Act Takes Effect

The EU AI Act, the world’s first comprehensive legal framework for artificial intelligence, has moved from legislation to enforcement. The first provisions — banning prohibited AI practices — took effect in February 2025. By August 2025, obligations for general-purpose AI models applied. Now in 2026, the full risk-based classification system is operational, and businesses deploying AI in the European market must demonstrate compliance or face significant penalties.

This is not abstract policy. If your company uses AI for hiring decisions, credit scoring, medical diagnostics, or any system that affects people’s rights and safety, the AI Act directly applies to your operations.

Understanding the Risk Tiers

The AI Act classifies AI systems into four risk categories, each with different regulatory requirements:

Unacceptable risk (banned): AI systems for social scoring by governments, real-time biometric identification in public spaces (with narrow law enforcement exceptions), manipulation of vulnerable groups, and emotion recognition in workplaces and schools. These are prohibited outright.

High risk: AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. These systems must undergo conformity assessments, maintain risk management systems, ensure data quality, provide transparency to users, and allow human oversight.

Limited risk: Systems like chatbots and deepfake generators that must meet transparency obligations — users must be informed they are interacting with AI or viewing AI-generated content.

Minimal risk: The majority of AI applications — spam filters, AI in video games, inventory management — face no additional regulatory requirements.
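For teams that track these categories internally, the taxonomy maps naturally onto a small data structure. Here is a minimal Python sketch; the tier names and example obligations simply restate the summary above, and nothing in it amounts to a legal classification of any particular system:

```python
from enum import Enum

class RiskTier(Enum):
    """The four AI Act risk tiers summarized above."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # full set of obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no additional requirements

# Illustrative mapping from tier to the headline obligations described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "conformity assessment",
        "risk management system",
        "data governance",
        "technical documentation",
        "transparency to deployers",
        "human oversight",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI or viewing AI-generated content"],
    RiskTier.MINIMAL: [],
}
```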

Your Compliance Checklist

For businesses deploying high-risk AI systems, here is what you need in place:

1. AI system inventory: Document every AI system your organization uses or deploys, classified by risk tier. You cannot comply with rules you do not know apply to you. (A sketch of what an inventory entry might record follows this checklist.)

2. Risk management system: Establish a continuous risk management process that identifies, analyzes, and mitigates risks throughout the AI system’s lifecycle.

3. Data governance: Ensure training, validation, and testing datasets are relevant, representative, and as free from bias as practicable. Document data provenance and preparation methods.

4. Technical documentation: Maintain detailed records of system design, development methodology, training procedures, and performance metrics — before the system is placed on the market.

5. Transparency and user information: Provide clear instructions for deployers, including the system’s intended purpose, level of accuracy, known limitations, and human oversight measures.

6. Human oversight mechanisms: Design systems so that humans can effectively oversee operation, interpret outputs, and intervene or halt the system when necessary. (A simple escalation pattern is sketched after this checklist.)

7. Accuracy, robustness, and cybersecurity: Demonstrate that systems perform consistently and are resilient to errors, faults, and adversarial attacks.

8. Conformity assessment: For the highest-risk categories (biometrics, critical infrastructure), obtain third-party conformity assessment from a notified body.
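To make item 1 concrete, here is a minimal sketch of what an inventory entry might capture. The field names and the register helper are illustrative assumptions, not terminology prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields only)."""
    name: str
    intended_purpose: str
    risk_tier: str                  # "unacceptable" | "high" | "limited" | "minimal"
    deployer_facing: bool           # does the system affect external users?
    human_oversight_measures: list[str] = field(default_factory=list)
    conformity_assessed: bool = False

inventory: list[AISystemRecord] = []

def register(system: AISystemRecord) -> None:
    """Add a system to the inventory; flag high-risk entries with no documented oversight."""
    if system.risk_tier == "high" and not system.human_oversight_measures:
        raise ValueError(f"{system.name}: high-risk system needs documented oversight")
    inventory.append(system)
```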
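For item 6, one common oversight pattern is to route low-confidence or high-impact outputs to a human reviewer before they take effect. The threshold and callback names below are assumptions chosen for illustration, not requirements set out in the Act:

```python
REVIEW_THRESHOLD = 0.85  # illustrative confidence cut-off, not a figure from the Act

def decide(confidence: float, automated_action, escalate_to_human):
    """Apply the automated action only when confidence is high; otherwise escalate.

    Both arguments are caller-supplied callbacks, so a human reviewer can
    always override the output or halt the system entirely.
    """
    if confidence >= REVIEW_THRESHOLD:
        return automated_action()
    return escalate_to_human()
```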

Penalties for Non-Compliance

The AI Act’s enforcement teeth are substantial. Violations involving prohibited AI practices carry fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk requirements can result in fines up to EUR 15 million or 3% of turnover. Supplying incorrect information to authorities carries penalties up to EUR 7.5 million or 1% of turnover.
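Because the applicable fine is whichever figure is higher, the percentage cap is what bites for large companies. A quick illustration using the caps cited above and a hypothetical turnover:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Upper bound of a fine: the fixed cap or the turnover percentage, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Prohibited-practice violation, hypothetical EUR 1 billion global turnover:
# max(35_000_000, 0.07 * 1_000_000_000) = EUR 70 million
print(max_fine(35_000_000, 0.07, 1_000_000_000))
```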

For SMEs and startups, each fine is capped at the lower of the two amounts rather than the higher, and regulatory sandboxes are available to test innovative AI systems under supervised conditions before full market deployment.

The Transatlantic Divide

The contrast with the United States could not be more pronounced. As of 2026, the US has no federal AI legislation. Executive orders have been issued and rescinded across administrations, leaving a patchwork of voluntary commitments and state-level proposals. American companies operating in Europe must comply with the AI Act regardless, creating a de facto global standard — much as GDPR did for data protection.

For European businesses, the AI Act is not just a compliance burden. It is a competitive framework that builds public trust in AI systems, creates a level playing field against under-regulated competitors, and establishes the EU as the global standard-setter for responsible AI governance. Companies that achieve compliance early will find themselves ahead of the curve as other jurisdictions inevitably follow Europe’s lead.
