
SAKET BIVALKAR
Saket Bivalkar is the Managing Partner of Versatile Consulting, a boutique consultancy that designs operating models for multinationals. His work focuses on building models that hold across geographies while allowing for local adaptation. Recent engagements include 63-country and 42-country operating model transformations in regulated industries.
He is based in Spain and can be reached at saket@versatile.consulting.
If you automate a bad workflow and cannot troubleshoot it, you were never serious
A lot of organisations are talking confidently about AI and automation.
Many of them are not ready for either.
Not because the tools are immature. Not because the vendors are weak. Not because the opportunity is unclear.
They are not ready because they keep making two basic mistakes.
First, they try to automate workflows they should have redesigned before automation was ever considered.
Second, when the automation fails under real operating conditions, they do not have the troubleshooting discipline to recover intelligently.
That combination is lethal.
It creates a familiar pattern. A company automates something messy, the workflow behaves badly, nobody can diagnose the failure properly, and the team quietly crawls back to manual work while still telling itself it is on a transformation journey.
That is not transformation. That is theatre with software.
A simple CSV upload tells the story
Recently, I worked with a client on something that should have been simple: automating a CSV upload.
The upload started failing. No meaningful error appeared. The user tried using generative AI to find an answer and got nowhere. The team eventually reverted to the manual process.
Most organisations would treat this as a minor technical issue. It is not.
It is diagnostic.
Because this kind of moment tells you two things very quickly.
First, whether the workflow was properly thought through before it was automated.
Second, whether the organisation still has people who can troubleshoot when the process stops behaving as expected.
When I got involved, the work did not start with a clever prompt or a desperate search for a workaround. It started where serious problem solving always starts: understanding the requirement properly. What was supposed to happen? What conditions had to be true for success? What had changed? What did the process depend on? From there, we traced likely causes, including CSV formatting issues and the surrounding process logic.
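To make that concrete: much of the diagnostic work can be captured in a small pre-flight check run before the upload is even attempted. The sketch below is illustrative, not the client's actual tooling; the expected column names and file path are assumptions.

```python
import csv

# Hypothetical expectations for the upload; replace with the real spec.
EXPECTED_COLUMNS = ["id", "date", "amount"]

def preflight_check(path: str) -> list[str]:
    """Return a list of human-readable problems found in the CSV."""
    problems = []
    # 1. Can the file even be decoded? Uploads often break on encoding.
    try:
        with open(path, encoding="utf-8-sig", newline="") as f:  # tolerate a BOM
            rows = list(csv.reader(f))
    except UnicodeDecodeError:
        return ["file is not valid UTF-8; check the export encoding"]
    if not rows:
        return ["file is empty"]
    # 2. Do the headers match what the automation expects?
    header = [h.strip() for h in rows[0]]
    missing = [c for c in EXPECTED_COLUMNS if c not in header]
    if missing:
        problems.append(f"missing columns: {missing}")
    # 3. Does every data row have the same width as the header?
    for i, row in enumerate(rows[1:], start=2):
        if len(row) != len(header):
            problems.append(
                f"row {i} has {len(row)} fields, expected {len(header)}"
            )
    return problems
```

A check like this turns "the upload failed with no meaningful error" into a named, fixable defect, which is exactly the move from symptom to cause described above.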
That sequence matters.
Because the real issue was not just a failed upload. The real issue was that the automation had hit reality, and the organisation had no disciplined way to respond.
Too many companies automate before they think
This is one of the dirtiest secrets in the automation conversation.
A large amount of automation work is being applied to workflows that are already badly designed.
The data is inconsistent.
The ownership is unclear.
The handoffs are clumsy.
The exceptions are ignored.
The approval logic is historical rather than necessary.
The process only “works” because human beings keep compensating for its weaknesses.
Then someone decides to automate it.
Why?
Because automation feels like progress. Workflow redesign feels slower, messier, and more political.
Redesign forces organisations to confront uncomfortable questions. Why does this step exist? Who owns the decision? Which rules are real and which ones are habits? What should be removed before anything is digitised? Where is human judgment essential? Where are we standardising nonsense instead of improving flow?
That is hard work. Many organisations would rather buy a tool than answer those questions.
So they digitise the mess instead.
Then they act surprised when the automation is brittle.
A bad workflow does not become intelligent because software touches it
This should be obvious, but apparently it is not.
Automation is not a cleansing ritual for bad process design.
If a workflow is structurally poor, automation usually does one of three things:
It speeds up the wrong activity.
It hides the weakness until something breaks.
It creates a more complicated failure mode than the original manual process ever had.
That is why redesign has to come first.
Before automation is even on the table, leaders should be asking whether the workflow deserves to survive in its current form. Sometimes the right answer is not automation. Sometimes the right answer is elimination, simplification, reassignment, or a clearer decision path.
If you skip that work, you are not implementing AI or automation seriously. You are preserving legacy dysfunction in a shinier format.
But redesign alone is not enough
This is the part that gets missed by process purists.
Even a redesigned workflow still needs troubleshooting capability.
Because live operations are never clean for long.
Formats drift.
Inputs vary.
Users behave differently.
Dependencies fail.
Edge cases appear.
Business conditions change.
An integration that looked stable in a test environment starts failing under real pressure.
That is normal.
The question is what the organisation does next.
Weak organisations panic, guess, blame the tool, or revert immediately to manual work.
Stronger organisations investigate.
They understand the requirement.
They inspect the input.
They test assumptions.
They trace where the failure actually occurs.
They distinguish between symptom and cause.
They fix the system instead of abandoning it.
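Tracing where a failure actually occurs, rather than guessing, can be as mechanical as running each stage of the pipeline separately and recording which one raised. A minimal sketch, with illustrative stage names that are assumptions rather than anything from the original case:

```python
def run_with_trace(stages, payload):
    """Run pipeline stages in order; report which stage failed and why."""
    for name, stage in stages:
        try:
            payload = stage(payload)
        except Exception as exc:
            # The failing stage is the symptom; exc carries the cause.
            return {"failed_at": name, "error": str(exc), "result": None}
    return {"failed_at": None, "error": None, "result": payload}

# Illustrative stages for a record-upload pipeline.
def parse(text):
    return text.split(",")

def validate(fields):
    if len(fields) != 3:
        raise ValueError(f"expected 3 fields, got {len(fields)}")
    return fields

def load(fields):
    return {"id": fields[0], "date": fields[1], "amount": fields[2]}

STAGES = [("parse", parse), ("validate", validate), ("load", load)]
```

The value is not the code itself but the habit it encodes: the system tells you where it broke, so the team fixes that stage instead of abandoning the whole process.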
That is troubleshooting.
And in the age of AI and automation, it is no longer a secondary support skill. It is a core operating capability.
The real dividing line is not tool adoption. It is diagnostic maturity
This is where many leaders are badly misleading themselves.
They think the dividing line is between organisations that are adopting AI and organisations that are not.
It is not.
The real dividing line is between organisations that can diagnose and redesign work properly, and organisations that cannot.
A company that buys automation tools without redesign capability will automate badly.
A company that redesigns workflows but cannot troubleshoot live failures will not sustain the improvement.
A company that can do both will compound value over time.
That is the difference between performative modernisation and genuine operational maturity.
Generative AI does not rescue weak thinking
The CSV case also exposes another modern delusion.
The first move was to ask generative AI for the answer.
That is understandable. It is now a default behaviour. But it also reveals the problem.
Generative AI can help with troubleshooting. It cannot replace the discipline of troubleshooting.
If the requirement is poorly understood, the process logic is unclear, the context is incomplete, or the real failure condition has not been isolated, the model will often produce something that sounds useful without being useful. It can accelerate search. It cannot substitute for diagnosis.
This is why so many organisations are about to be disappointed.
They are hoping AI will compensate for shallow operational thinking. It will not. In many cases, it will simply make shallow thinking move faster and sound more convincing.
That is not capability. That is polished confusion.
What serious organisations do differently
Serious organisations treat automation as a design and diagnosis problem.
Before automation, they redesign the workflow.
They strip out waste.
They clarify ownership.
They challenge legacy approvals.
They define where judgment is needed and where standardisation is appropriate.
They improve the flow before software is introduced.
After automation, they retain troubleshooting discipline.
They do not assume the process will run perfectly forever.
They do not treat every failure as a reason to retreat.
They build the capability to understand what broke, why it broke, and how to improve the system without drama.
That is what maturity looks like.
Not louder AI language.
Not more pilots.
Not prettier dashboards.
Better workflow thinking before deployment. Better troubleshooting after deployment.
What leaders should be asking instead
Most leaders are still asking:
What can we automate?
That is too shallow.
They should be asking two harder questions:
Should this workflow even exist in its current form?
And then:
When this breaks under real conditions, do we have people who can work out why?
If the answer to the first question is no, redesign comes first.
If the answer to the second question is no, the automation will not hold.
That is the standard.
Practical takeaway
If you want to know whether your organisation is actually ready for AI and automation, test it on both dimensions.
First, redesign discipline:
- Can you challenge and simplify the workflow before digitising it?
- Can you identify unnecessary steps, bad handoffs, unclear ownership, and legacy logic?
Second, troubleshooting discipline:
- When the automated process fails, can your team diagnose the issue properly?
- Can they inspect data, formatting, logic, dependencies, and exception paths without guessing?
- Can they improve the system instead of defaulting back to manual work?
If you cannot do both, your problem is not a lack of technology.
Your problem is that the organisation is trying to automate work it does not properly understand and cannot properly sustain.
Conclusion
A failed CSV upload is not a small story.
It is a brutally useful one.
It shows why so many automation efforts disappoint. Companies automate workflows that should have been redesigned, then discover they no longer have the troubleshooting capability to recover when the automation meets reality.
That is why this matters.
In the AI era, the winners will not be the companies that buy tools fastest. They will be the ones disciplined enough to redesign work before automating it, and competent enough to troubleshoot when the system breaks under real conditions.
Everyone else will keep talking about transformation while quietly falling back to manual work.

