SAKET BIVALKAR

Saket Bivalkar is the Managing Partner of Versatile Consulting, a boutique consultancy that designs operating models for multinationals. His work focuses on building models that hold across geographies while allowing for local adaptation. Recent engagements include 63-country and 42-country operating model transformations in regulated industries.

He is based in Spain and can be reached at saket@versatile.consulting.

Why AI Use Cases Fail Without an Enterprise AI Operating Model

Most AI programmes improve isolated functions, but fail to improve customer outcomes. The missing piece is not another pilot. It is the operating model that connects internal activity chains to enterprise value.

Most enterprise AI programmes do not fail because the models are weak.

They fail much earlier.

They fail because organisations identify AI opportunities inside functions, fund them inside functions, implement them inside functions, and then expect customer outcomes to improve across the enterprise.

That rarely happens.

What usually happens is this: legal speeds up contract review, operations automates an exception queue, finance improves reconciliation, customer service gets a summarisation tool, and every team can point to a local efficiency win.

And yet when leadership asks the harder question, what has actually changed for the customer, the answer is still unclear.

Not because the technology failed. Because the organisation did.

The mistake most companies make

When companies go looking for AI use cases, they usually start where the work is visible.

A process owner maps a workflow. A team identifies manual effort, repetitive tasks, approval bottlenecks, or poor handovers. A list of opportunities emerges. Automate document review. Flag exceptions earlier. Route tickets more intelligently. Improve forecasting. Reduce admin.

None of this is wrong.

The problem is how those opportunities are evaluated.

They are typically assessed as isolated efficiency plays, inside the boundaries of the function that surfaced them. The business case is written around time saved, headcount avoided, processing speed, or local productivity. That makes the use case easier to sponsor, but it also shrinks its meaning.

Because the value rarely lands where the work lives.

Why customer experience does not improve

A slow contract review is not only a legal problem. It delays onboarding, revenue activation, and customer confidence.

An unresolved operations exception is not only an operations problem. It becomes a service issue, a complaint, a broken promise, or a lost account.

A late finance close is not only a finance problem. It means commercial teams are making decisions with stale numbers and customers feel the consequences in pricing, service, or renewal conversations.

This is where many AI programmes lose the plot.

They optimise the activity but fail to trace the consequence.

The automation worked. The customer experience did not.

That is the real problem. Not poor models. Not lack of ambition. It is a coordination failure.

The pocket problem

AI use cases are identified in pockets, funded in pockets, implemented in pockets, and measured in pockets. Each pocket can produce a visible result. But because the organisation has not redesigned the connected workflow around it, those results do not compound.

Instead of enterprise transformation, you get local optimisation.

Instead of a better customer journey, you get a faster internal fragment.

This explains why so many organisations can say they have been doing AI for two or three years and still struggle to point to a measurable shift in customer satisfaction, retention, revenue quality, or service reliability.

The issue is not effort. The issue is architecture.

What a Digital Twin of the Organisation changes

This is exactly why a Digital Twin of the Organisation matters.

A DTO helps the organisation see how work actually flows across teams, systems, decisions, and geographies. More importantly, it makes visible where a delay, handover failure, or design flaw in one part of the enterprise creates consequences somewhere else.

The most important AI use cases are not always the ones with the clearest local efficiency gain. They are often the ones that sit on an activity chain that eventually shapes a customer outcome.

Once you map the chain properly, the business case changes.

A use case that looked like a small operational automation can turn out to be a customer retention lever. A workflow issue that looked like a back-office problem can turn out to be a commercial performance problem.

What an enterprise AI operating model actually does

An enterprise AI operating model is not a governance layer bolted onto an existing org chart. It is the design of how AI opportunities are identified, prioritised, built, coordinated, governed, and scaled across the business so that individual improvements add up to enterprise value.

That means answering questions that most use case lists never force the organisation to confront.

  • Who owns the end-to-end process when it crosses functions?
  • Who decides whether a local automation should trigger changes in adjacent teams?
  • How are data models, workflows, and decision rights aligned so five teams do not solve the same problem in five incompatible ways?
  • When one AI capability changes the speed or quality of work in one area, what must change in the next area for the value to actually reach the customer?

Without those answers, AI stays fragmented.

With those answers, the organisation starts behaving like a system.

Why this gets harder in multinational organisations

For organisations operating across multiple countries, the challenge is even more severe.

A use case that works in one market often fails at the coordination layer in another. Not because the local team is weaker, but because the surrounding design is different. Approval paths vary. Regulatory conditions vary. Data structures vary. Escalation routes vary. Resourcing assumptions vary.

A global AI strategy with local pilots is not enough. The coordination model between markets also needs design. Otherwise the organisation scales inconsistency instead of capability.

The companies getting this right are not necessarily the ones with the most sophisticated models. They are the ones that redesigned the organisational conditions around the models before trying to scale them.

The sequence that actually works

The better sequence is more deliberate.

  1. Map how work really flows, not how the org chart suggests it should flow.
  2. Identify where AI can improve an activity chain that eventually shapes customer outcomes, not just internal efficiency.
  3. Design the operating model required to coordinate that improvement across teams, systems, and markets.
  4. Run controlled implementations in a way that tests both the use case and the organisational conditions required to make it stick.
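The mapping step in this sequence can be made concrete as data. The sketch below is a minimal, hypothetical Python model of an activity chain (the step names, owners, and durations are invented for illustration, not taken from a real engagement). It shows why a local AI win only reaches the customer when the chain is measured end to end: shortening one step changes the total only if the downstream handovers keep pace.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    owner: str                       # function that owns this step
    cycle_days: float                # current average duration of the step
    next_step: "Step | None" = None  # handover to the adjacent team

def chain_cycle_days(first: Step) -> float:
    """Total end-to-end duration the customer actually experiences."""
    total, step = 0.0, first
    while step is not None:
        total += step.cycle_days
        step = step.next_step
    return total

# Hypothetical onboarding chain: legal review -> account setup -> activation
activation = Step("revenue activation", "commercial", 2.0)
setup = Step("account setup", "operations", 3.0, activation)
legal = Step("contract review", "legal", 5.0, setup)

before = chain_cycle_days(legal)   # 5 + 3 + 2 = 10.0 days end to end
legal.cycle_days = 1.0             # local AI win: faster contract review
after = chain_cycle_days(legal)    # 1 + 3 + 2 = 6.0 days, if handovers hold
```

The point of the toy model is the measurement boundary: the business case for the legal automation is the change in `chain_cycle_days`, not the change inside the legal step alone.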

This is not bureaucracy. It is design discipline.

It is also cheaper than running disconnected pilots for two years and wondering why the transformation never materialised.

The real lesson

Most AI programmes do not underperform because organisations lack ideas.

They underperform because companies mistake use case discovery for transformation.

A list of AI opportunities is not a transformation strategy.

A pilot is not an operating model.

And a local productivity gain is not a customer outcome.

If you want AI to move the needle, stop asking only where the technology can be applied.

Start asking where value actually lands, what activity chain produces it, and what organisational design is required for that value to travel from one function to the next until the customer feels it.

Frequently Asked Questions
Why do AI use cases fail to improve customer experience?

Because most are evaluated and implemented inside functional silos. They improve local efficiency, but customer experience depends on an end-to-end activity chain across multiple teams, systems, and decisions.

What is an enterprise AI operating model?

An enterprise AI operating model defines how AI opportunities are identified, prioritised, built, governed, and coordinated across the business so that local improvements translate into enterprise outcomes.

How does a Digital Twin of the Organisation help with AI transformation?

A Digital Twin of the Organisation makes work visible across functions and geographies. It helps organisations see where delays, handover failures, or design flaws affect customer outcomes and where AI use cases will create the most strategic value.

How should multinational companies approach Enterprise AI operating model design?

They should design not only the use cases, but also the coordination model between markets. A use case that works in one country may break at the approval, regulatory, data, or escalation layer in another.

Does AI adoption always require restructuring?

No. Small-scale AI use does not always require formal restructuring. But meaningful AI adoption at scale usually requires changes to workflows, governance, roles, and team ownership, because the existing organisation was not designed for human and AI collaboration at enterprise level.

Design your AI operating model before scaling more pilots

If your organisation has AI use cases but limited enterprise impact, the issue may not be the tools. It may be the operating model around them. Versatile helps leadership teams map activity chains, identify where value really lands, and design the coordination model required for hybrid human and AI organisations.
