
AI AUTOMATION

The EU AI Act and UK Businesses in 2026: A Practical Compliance Roadmap Post-Brexit

8 May 2026 · 6 min read · By Deen Dayal Yadav (DD)

The EU AI Act enters full enforcement in August 2026 and applies to any business, anywhere in the world, that places AI systems on the EU market or puts AI systems into service in the EU. UK businesses that sell software, AI products, or AI-powered services to EU customers are subject to the Act's requirements for those products, regardless of where the business is incorporated or where the AI system is developed. Post-Brexit, the UK has no automatic equivalence with the EU AI Act: UK businesses operating in the EU must comply with EU requirements in full. This is the practical roadmap for doing so.

Does the EU AI Act Apply to Your UK Business?

The Act applies to your UK business if any of the following are true: you place an AI system on the EU market (sell software with AI features to EU customers), you put an AI system into service in the EU (deploy an AI system used by EU-based employees or operations), or you are a UK-based importer or distributor of AI systems that are then sold in the EU.

The Act does not apply to AI systems developed and used exclusively within the UK for UK customers, to AI systems used for purely personal non-professional activity, or to AI systems used exclusively for military and national security purposes.

For most UK technology companies, digital agencies, and software development firms with any EU client base, the Act creates compliance obligations for their EU-facing AI products and services.

The Risk Classification System

The EU AI Act classifies AI systems into four risk categories, each with different obligations.

Unacceptable Risk (Prohibited)

AI systems in this category are banned outright. They include: real-time biometric identification in public spaces by law enforcement (with narrow exceptions), social scoring systems that rank people based on behaviour, AI systems that exploit psychological vulnerabilities to manipulate behaviour, and AI used to predict criminal activity based on personal characteristics. UK businesses should ensure their AI systems do not fall into these categories for any EU deployment.

High Risk

High-risk AI systems face the most extensive compliance requirements. They include: AI used in critical infrastructure, educational qualification assessment, employment decisions (CV screening, performance monitoring), essential service access (credit scoring, insurance pricing), law enforcement, migration and asylum decisions, and justice administration. UK businesses with AI systems in any of these categories must register their systems in the EU AI Act database, conduct conformity assessments, maintain technical documentation, implement human oversight mechanisms, and log AI system operations.

Limited Risk

Limited-risk AI systems carry specific transparency obligations. Chatbots and AI that interact with humans must disclose that the user is interacting with an AI system. Deepfake content must be labelled as AI-generated. For most UK businesses with customer-facing AI, this transparency obligation is the primary compliance requirement under the Act.

Minimal Risk

AI spam filters, AI-powered recommendation systems, and similar low-risk applications face no mandatory requirements under the Act, though voluntary codes of conduct are encouraged.
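For teams that maintain several AI features, it can help to encode this triage directly alongside the product inventory. The sketch below is illustrative only and is not legal advice: the use-case tags and their mapping are simplified assumptions, and the Act's actual category definitions should drive any real classification.

```typescript
// Hypothetical triage helper mapping an internal use-case tag to an
// EU AI Act risk tier. The tags and the mapping are simplified
// assumptions for illustration, not a legal classification.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

const RISK_MAP: Record<string, RiskTier> = {
  "social-scoring": "prohibited",
  "cv-screening": "high",        // employment decisions
  "credit-scoring": "high",      // essential service access
  "customer-chatbot": "limited", // transparency obligations apply
  "spam-filter": "minimal",
};

function classify(useCase: string): RiskTier {
  const tier: RiskTier | undefined = RISK_MAP[useCase];
  if (tier === undefined) {
    // Unknown use cases should block for manual legal review,
    // not silently default to "minimal".
    throw new Error(`Unmapped use case "${useCase}" - refer for legal review`);
  }
  return tier;
}
```

The useful design point is the failure mode: an unmapped system should stop a release for review rather than quietly landing in the lowest tier.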

The Practical Compliance Steps for UK B2B Technology Firms

For a UK software development agency or technology firm whose products include AI features and whose client base includes EU customers, the practical compliance steps are as follows.

Step 1: AI system inventory. List every AI feature or AI system in your products and services. For each: identify whether EU customers use it, classify it against the risk categories, and note the applicable obligations.
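A structured record per AI feature keeps this inventory auditable. The shape below is a hypothetical sketch: the field names are our assumptions, since the Act mandates the information, not a schema.

```typescript
// Hypothetical inventory record for one AI feature. Field names are
// illustrative assumptions; adapt them to your own compliance tracking.
interface AiSystemRecord {
  name: string;                // e.g. "support chatbot"
  usedByEuCustomers: boolean;  // true means the Act's obligations apply
  riskTier: "prohibited" | "high" | "limited" | "minimal";
  obligations: string[];       // e.g. ["disclose AI interaction to users"]
  owner: string;               // person accountable for compliance
  lastReviewed: string;        // ISO date of the last classification review
}

const inventory: AiSystemRecord[] = [
  {
    name: "support chatbot",
    usedByEuCustomers: true,
    riskTier: "limited",
    obligations: ["disclose AI interaction to users"],
    owner: "product-lead",
    lastReviewed: "2026-05-01",
  },
];
```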

Step 2: Transparency compliance for limited-risk systems. For any AI system that interacts with EU users, implement clear disclosure that the system is AI-powered. This is the most common obligation for UK software companies and the lowest-cost to implement.
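In practice the disclosure is often a one-line notice rendered before the first AI response. The snippet below is a minimal sketch assuming a simple web chat widget; the wording and DOM structure are illustrative assumptions, not prescribed by the Act.

```typescript
// Minimal sketch: prepend a plain-language AI disclosure to a chat
// widget before the first model response. Wording is an assumption.
function showAiDisclosure(container: HTMLElement): void {
  const notice = document.createElement("p");
  notice.className = "ai-disclosure";
  notice.textContent =
    "You are chatting with an AI assistant, not a human agent.";
  container.prepend(notice);
}

// Usage: call once when the chat window opens, e.g.
// showAiDisclosure(document.getElementById("chat-window")!);
```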

Step 3: High-risk system assessment. If any of your AI systems fall into the high-risk category, you need a conformity assessment. For most categories, this is a self-assessment producing a technical documentation package that demonstrates compliance with the Act's requirements: risk management system, data governance documentation, technical accuracy and robustness documentation, human oversight mechanisms, and an EU Declaration of Conformity.
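One low-overhead way to track the package is a checklist kept in the product repository, mirroring the items above. The structure and file paths below are assumptions for illustration, not an official template.

```typescript
// Hypothetical checklist mirroring the documentation items listed above.
// File paths and completion flags are illustrative assumptions.
const conformityPackage = {
  riskManagementSystem:      { done: true,  path: "docs/risk-mgmt.md" },
  dataGovernance:            { done: true,  path: "docs/data-governance.md" },
  accuracyAndRobustness:     { done: false, path: "docs/testing-report.md" },
  humanOversight:            { done: true,  path: "docs/oversight.md" },
  euDeclarationOfConformity: { done: false, path: "docs/eu-doc.md" },
};

// A release gate can refuse to ship an EU build until every item is done.
const readyForEuMarket = Object.values(conformityPackage)
  .every((item) => item.done);
```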

Step 4: EU representative appointment. UK businesses without an EU establishment that place high-risk AI systems on the EU market must appoint an EU representative: a natural or legal person in the EU authorised to act on behalf of the UK business in EU regulatory matters.

Step 5: Ongoing monitoring. The Act requires post-market monitoring of high-risk AI systems: tracking performance, collecting and analysing data on system use, reporting serious incidents to national authorities, and updating documentation as the system changes.
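A minimal version of the logging duty is an append-only event log per high-risk system, with serious incidents flagged for escalation. The sketch below assumes a simple in-process logger; the field names and outcome values are illustrative assumptions.

```typescript
// Minimal sketch of post-market monitoring: record each operation of a
// high-risk system and flag serious incidents for the compliance team.
interface OperationEvent {
  systemName: string;
  timestamp: string; // ISO 8601
  outcome: "ok" | "error" | "serious-incident";
  detail: string;
}

const eventLog: OperationEvent[] = [];

function logOperation(event: OperationEvent): void {
  eventLog.push(event);
  if (event.outcome === "serious-incident") {
    // The Act requires reporting serious incidents to national
    // authorities; here we only surface them for escalation.
    console.warn(`Serious incident on ${event.systemName}: ${event.detail}`);
  }
}
```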

The UK AI Regulatory Position

The UK government has taken a sector-led, principles-based approach to AI regulation rather than creating a single comprehensive AI Act equivalent. The UK AI Safety Institute (now the AI Security Institute) focuses on frontier AI risk. Sectoral regulators (FCA, ICO, CQC, Ofcom) apply existing regulatory frameworks to AI within their domains. UK businesses operating only in the UK market face no equivalent to the EU AI Act's mandatory obligations for most AI risk categories.

UK businesses operating in both markets must comply with EU requirements for EU-facing products while navigating UK sectoral guidance for UK-facing operations. The two frameworks are broadly compatible in intent but differ in specific requirements and enforcement mechanisms.

Frequently Asked Questions

If I am a UK business providing AI development services to an EU client, does the AI Act apply to me?

If you are developing an AI system that your EU client will deploy in the EU, you are acting as a provider of an AI system placed on the EU market. The AI Act applies to the provider (you) and to the deployer (your EU client). Both parties have obligations. Agree in your contract with the EU client how compliance responsibilities are allocated between provider and deployer, and ensure your technical documentation meets the Act's requirements for the risk classification of the system you are building.

What are the penalties for non-compliance with the EU AI Act?

The Act sets fines of up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI system violations, up to €15 million or 3% of global turnover for most other violations, and up to €7.5 million or 1.5% of turnover for supplying incorrect information. For UK SMEs, the Act's proportionate enforcement approach means early enforcement will likely focus on disclosure and documentation requirements before financial penalties are applied. However, being shut out of EU markets for non-compliant products is a more immediate commercial risk than fines.
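To make the "whichever is higher" mechanics concrete, the small sketch below computes the cap for the prohibited-AI tier; both turnover figures are hypothetical.

```typescript
// Fine cap for prohibited-AI violations: the greater of a fixed
// EUR 35m ceiling and 7% of global annual turnover.
function maxFineProhibited(globalTurnoverEur: number): number {
  return Math.max(35_000_000, 0.07 * globalTurnoverEur);
}

maxFineProhibited(600_000_000); // 42_000_000 (7% exceeds the fixed floor)
maxFineProhibited(100_000_000); // 35_000_000 (the fixed floor applies)
```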

To discuss building AI systems for EU markets that meet the Act's requirements, see our AI and Machine Learning Solutions service.

Let us help

Need help applying this in your business?

Talk to our London-based team about the AI software, automation, or bespoke development we can build for your needs.
