The EU AI Act: From Logic to Operational Reality
Vikram Korde
Vikram has extensive experience as a global marketing leader in well-known blue-chip FMCG companies, leading billion-dollar brands and innovation-driven retail organisations. His philosophy for delivering results: be agile, be customer-obsessed, follow the evidence, and be creative.
EU AI Act compliance for businesses: the law, the logic, and the operational reality
Executive Summary
The EU AI Act is no longer a future regulatory risk. It is active law with phased enforcement already underway. The prohibitions and AI literacy obligations are in force. General-purpose AI requirements are live. High-risk system obligations become fully enforceable from August 2026.
For businesses interacting with Europe, the key question is no longer “Are we compliant?” but “Do we know where our AI is operating, what risk category it falls into, and who is accountable?”
The Act regulates impact, not technology. If AI influences hiring, credit, pricing, healthcare, infrastructure, or customer decision-making in the EU, governance obligations follow. Fines can reach 7% of global turnover, but the more immediate risks are procurement friction, investor scrutiny, and reputational exposure.
Compliance under the EU AI Act is not a legal side project. It is an operating model issue. Visibility, classification, documentation, and oversight are becoming structural capabilities, not optional extras.
We are building something practical to make this simpler: a structured, business-ready GRC solution designed specifically for AI governance in regulated environments.
Europe has made a very European move: it took a messy, fast-evolving technology and tried to turn it into a system with duties and consequences. The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is now a global reference point for AI governance, not because Europe owns the technology, but because Europe has decided to own the rules of the road.
The Act entered into force on 1 August 2024. Provisions on prohibited practices, together with AI literacy obligations, have applied since 2 February 2025. Obligations for general-purpose AI models began applying from 2 August 2025. The majority of high-risk system requirements will apply from 2 August 2026, with some transitional compliance periods extending to 2 August 2027.
In plain English: we are no longer talking about a future regulatory wave. We are in it.
If your business interacts with Europe, the question is not whether you “sell in the EU” in the old-fashioned sense. The practical question is whether your AI system affects people in the EU, supports decisions about them, or is placed on the EU market in any form. The EU’s regulatory reach has a habit of travelling. GDPR did not stop at the border. The AI Act will not either.
The AI Act is built on a simple moral claim: when software begins to shape human outcomes at scale, the people deploying it must be able to explain what it is doing, control its failure modes, and remain accountable for harm. That is not bureaucracy. That is adulthood.
What the EU AI Act is actually trying to do
The cleanest way to understand the EU AI Act is to see it as a three-part bargain.
First, it is a protection project. It targets practices that Europe considers incompatible with fundamental rights and democratic norms, including certain manipulative uses of AI, social scoring, and some biometric uses, with narrowly defined exceptions. The European Commission frames the “unacceptable risk” category in the language of fundamental rights, not market efficiency. That tells you everything about the philosophy behind it.
Second, it is a market-building project. Europe wants one set of harmonised rules so that AI systems are not regulated 27 different ways. Fragmentation is expensive. A predictable compliance baseline, if done properly, reduces friction and gives companies something stable to design against.
Third, it is an industrial policy move. If the EU can set the global template for “responsible AI”, then companies that comply early gain portability. You can call that self-interest. You can also call it strategic positioning.
So is it pro-consumer or pro-business? The honest answer is: it is pro-society, in the European sense of society. It is comfortable imposing constraints if the alternative is letting opaque systems quietly decide who gets a job, a loan, or a benefit with no recourse. It is also trying to avoid the kind of backlash that tends to arrive when technology runs ahead of governance and trust collapses.
The risk model: why the Act regulates impact, not technology
The EU AI Act does not treat “AI” as a single thing. It treats AI as a spectrum of risk based on how and where it is used. The regulatory load increases as the potential for harm increases. The implication many teams miss is simple: you cannot comply in the abstract. You comply system by system, use case by use case.
At one end are uses that are largely left alone (minimal risk). At the other end are prohibited practices (unacceptable risk). In the middle sits the category that will keep compliance leaders busy for the next decade: high-risk systems.
High-risk is not an insult. It is a label for systems used in domains where a wrong, biased, or unexplainable decision can materially damage someone’s life chances: employment, education access, creditworthiness, certain healthcare decisions, critical infrastructure, and public services. In these zones, the Act demands documented risk management, data governance, transparency, human oversight, and post-market monitoring.
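To make the system-by-system point concrete, here is a minimal sketch of a per-use-case classification record, assuming a simple four-tier reading of the Act; the tier names and fields are shorthand for illustration, not legal terminology.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters: no obligations
    TRANSPARENCY = "transparency"  # e.g. chatbots: disclosure duties
    HIGH = "high"                  # e.g. hiring, credit: full obligations
    PROHIBITED = "prohibited"      # e.g. social scoring: banned outright

@dataclass
class UseCaseAssessment:
    system_name: str
    use_case: str      # the decision or outcome the system influences
    tier: RiskTier
    rationale: str     # why this tier was assigned, kept for auditability

# Classification is per use case, not per product: the same model can be
# minimal risk in one deployment and high-risk in another.
assessment = UseCaseAssessment(
    system_name="cv-ranker",
    use_case="shortlisting candidates for EU employers",
    tier=RiskTier.HIGH,
    rationale="influences employment decisions about people in the EU",
)
```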
The Commission’s public summary is unusually clear about what “risk-based” means in practice. Spam filters and AI-enabled video games are considered minimal risk and face no obligations under the Act, though companies may adopt voluntary codes. On the other hand, systems like chatbots must clearly inform users that they are interacting with a machine, and certain AI-generated content must be labelled.
That sounds simple until it lands inside your organisation. If your customer support runs through an AI agent, disclosure cannot be an afterthought buried in the footer. If your marketing team is generating synthetic content, labelling becomes a workflow, not a stylistic choice. If you sell tools into regulated industries, your clients will ask how you handle this, because they, too, will carry accountability.
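One way to turn labelling into a workflow rather than a stylistic choice is to enforce it at the point of publication. A minimal sketch, with hypothetical field names (the Act prescribes the disclosure, not this schema):

```python
from dataclasses import dataclass

@dataclass
class PublishedContent:
    body: str
    ai_generated: bool
    generator: str = ""  # e.g. the model or tool that produced the content

def publish(body: str, ai_generated: bool, generator: str = "") -> PublishedContent:
    # The pipeline, not the author, enforces the label: AI-generated content
    # without a generator attribution is rejected before it ships.
    if ai_generated and not generator:
        raise ValueError("AI-generated content must carry a generator label")
    return PublishedContent(body=body, ai_generated=ai_generated, generator=generator)

# Passes because the label is present; raises if the label is missing.
post = publish("Spring campaign copy", ai_generated=True, generator="in-house LLM")
```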
What changes for a business that “touches Europe”
Most companies will feel the Act in procurement before they ever feel it in court. European customers will begin asking for evidence, not reassurance. They will ask: is this system high-risk? Where is the documentation? How is oversight designed? What happens when the model fails? Can you demonstrate data quality controls?
If you are a vendor, the sales conversation shifts. Demos become due diligence.
Three scenarios make this real.
A recruitment platform uses machine learning to rank candidates or recommend shortlists. The platform may not technically “decide”, but it influences decisions. Influence is enough when it shapes outcomes. If that platform is used by EU employers, it will likely be treated as high-risk. That means documented risk management, training data scrutiny, oversight design, and ongoing monitoring.
Now add the human tension. Recruiters want speed. Hiring managers want “quality”. HR wants defensibility. Human oversight cannot just be a theoretical override button. It has to work in practice, under time pressure, inside a real organisation.
Credit and insurance scoring models face the same shift. The Act introduces a second axis alongside predictive accuracy: legitimacy. If a model influences access to credit in the EU, you need demonstrable controls around data quality, bias mitigation, documentation, and oversight. The point is not to eliminate risk. The point is to show that foreseeable harm has been considered and mitigated.
Then there is the foundation layer problem. Much of enterprise AI adoption now sits on general-purpose models, especially large language models. Since 2 August 2025, providers of general-purpose AI models must comply with documentation and transparency requirements, including publishing summaries of training data content and cooperating with downstream deployers. Models deemed to pose “systemic risk”, currently presumed above a compute threshold of 10^25 FLOPs, subject to review, face enhanced obligations.
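For teams that want to check where a supplier's model sits against that presumption, the arithmetic is simple. A minimal sketch (the threshold is the figure cited above and may be revised by the Commission; the names here are illustrative):

```python
# Compute-based presumption of "systemic risk" for general-purpose models.
SYSTEMIC_RISK_FLOPS_THRESHOLD = 1e25  # 10^25 FLOPs, subject to review

def presumed_systemic_risk(training_compute_flops: float) -> bool:
    # The presumption is rebuttable and the threshold can be updated;
    # this check only flags models that sit above the current line.
    return training_compute_flops >= SYSTEMIC_RISK_FLOPS_THRESHOLD

print(presumed_systemic_risk(5e25))  # True: above the presumption line
print(presumed_systemic_risk(3e24))  # False: below it
```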
Even if you are not training frontier models, your suppliers might be. That flows downstream into contracts, usage restrictions, documentation packages, and audit questions. Model cards and terms of use are no longer branding accessories. They are compliance artefacts.
You can outsource components. You cannot outsource accountability.
The numbers that should focus a board
The AI Act sets maximum administrative fines of up to EUR 35 million or 7% of total worldwide annual turnover for certain serious infringements. Other categories of non-compliance can reach up to EUR 15 million or 3% of worldwide turnover, with adjusted ceilings for smaller enterprises in specific circumstances.
Those are maximums. Enforcement will vary. But the design is deliberate: the figures are large enough to demand board attention.
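To put the ceilings in business terms: for undertakings, the Act takes the higher of the fixed amount and the turnover percentage. A minimal sketch of that arithmetic (actual fines are set case by case and can be far lower):

```python
def max_fine_eur(worldwide_turnover_eur: float, serious: bool) -> float:
    # Serious infringements: up to EUR 35m or 7% of worldwide turnover;
    # other categories: up to EUR 15m or 3%. Whichever is higher applies.
    fixed, pct = (35_000_000, 0.07) if serious else (15_000_000, 0.03)
    return max(fixed, pct * worldwide_turnover_eur)

# A company with EUR 2bn turnover faces a ceiling of EUR 140m
# for a serious infringement, not EUR 35m.
print(max_fine_eur(2_000_000_000, serious=True))  # 140000000.0
```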
Timelines matter because delay is the most common compliance strategy disguised as prudence. The Act entered into force on 1 August 2024. Prohibited practices have applied since February 2025. Obligations for general-purpose AI models have applied since 2 August 2025. High-risk obligations apply from 2 August 2026, with certain transitional elements extending into 2027.
There is political noise around interpretation and guidance. Industry groups have asked for clarification. The Commission has indicated further guidance will be issued ahead of full enforcement. That does not mean pause. It means the window to prepare is closing.
Why organisations will struggle: this is not a legal project
The AI Act exposes a structural weakness inside many companies. It looks like a legal problem. It is not. It is an operating model problem.
You cannot comply with a risk-based regime if you do not know where AI is used across your organisation. AI is embedded in fraud detection, support routing, pricing engines, productivity tools, analytics platforms, often through third-party SaaS. It is also adopted informally by employees experimenting with generative tools.
Shadow AI is not malicious. It is human curiosity meeting powerful tools.
But from a governance perspective, invisibility is risk.
If you are building or deploying regulated AI, you need classification processes, documented risk management, data governance controls, meaningful human oversight design, and monitoring after deployment. Notice what is missing: “appoint a lawyer and hope”.
And then there is politics. Compliance shifts power. It slows certain releases. It adds documentation. Product teams push back. Sales teams worry about friction. Engineers resist bureaucracy. Leadership fears loss of velocity.
The Act does not negotiate with internal incentives.
The market will not either.
The AI registry: the one boring capability you cannot skip
If you want a practical starting point, begin with an internal AI registry. It answers simple but uncomfortable questions: which AI systems are in use; what they do; which risk category they might fall into; who is accountable; what documentation exists; what monitoring exists.
Without it, you are governing blind.
With it, you can prioritise high-risk systems, identify shadow AI, and respond to procurement queries without improvising.
It also forces a cultural shift: AI is not a side experiment. It is infrastructure. And infrastructure needs stewardship.
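A registry does not need exotic tooling; a structured record per system is enough to start. A minimal sketch, with illustrative field names (nothing here is mandated by the Act):

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    system_name: str
    purpose: str                 # what it does, in business terms
    vendor: str                  # third-party SaaS provider, or "in-house"
    provisional_risk_tier: str   # minimal / transparency / high / prohibited
    accountable_owner: str       # a named role, not a committee
    documentation: list[str] = field(default_factory=list)  # links to evidence
    monitoring: str = "none"     # how the system is watched post-deployment

registry = [
    RegistryEntry(
        system_name="support-routing",
        purpose="routes customer tickets to the right queue",
        vendor="third-party SaaS",
        provisional_risk_tier="minimal",
        accountable_owner="Head of Customer Operations",
    ),
]

# The registry makes governance queryable: for example, which systems are
# provisionally high-risk but have no post-deployment monitoring?
gaps = [e for e in registry
        if e.provisional_risk_tier == "high" and e.monitoring == "none"]
```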
Coming soon: a tool to flag risk with predictive precision
The next phase of AI adoption will split companies into two camps. One camp will treat AI as a feature race: move fast, fix later, hope no regulator calls. Their advantage is speed. Until it becomes fragility.
The other camp will treat AI as regulated infrastructure: build governance early, create traceability, embed oversight. Their advantage is scale and trust.
Start with visibility. Build the registry. Classify systems. Assign accountability. Document what matters. Design oversight that works in real life, not in slide decks.
We are working on something robust yet simple to use, to help companies with their compliance and governance processes under the EU AI Act. Watch this space for more information.
References:
- European Commission – EU AI Act Overview
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- European Commission – General-Purpose AI obligations under the AI Act
https://digital-strategy.ec.europa.eu/en/factpages/general-purpose-ai-obligations-under-ai-act
- McKinsey – The European Union AI Act: Time to start preparing
https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-european-union-ai-act-time-to-start-preparing
- Baker McKenzie – EU Artificial Intelligence Act: Key takeaways for HR
https://www.bakermckenzie.com/en/insight/publications/2024/03/eu-artificial-intelligence-act-key-takeaways-for-hr
- Versatile Consulting – Operating Model for AI
https://versatile.consulting/ai-operating-model-hybrid-teams-operating-model/
- Versatile Consulting – AI Pilots in Companies
https://versatile.consulting/genai-pilot-failures-ai-readiness-versatile/