Agentic AI Makes Bad Process Worse
AI doesn't fix broken workflows. It amplifies them. Here's what happens when teams bolt agentic systems onto process with no guardrails.
A recent study found that developers were 19% slower when using AI tools than when working without them. Let that sink in for a moment. Access to what should be the most productivity-enhancing technology of our generation made people measurably worse at their jobs.
The reaction from the AI community was predictable. They blamed the tools, blamed the study design, blamed the developers for not knowing how to use AI properly. But I think that study revealed something more important: AI doesn't make bad process good. It makes bad process catastrophically worse.
The Setup Tax Nobody Measures
I wrote a book on AI implementation. I started using these tools in mid-2023 and introduced them to my team in early 2024. I consider AI enablement part of my core role. So when I ran an internal ranking exercise and found my team averaged 2.87 out of 5 for AI capability, I was genuinely shocked.
How could a team led by someone who literally wrote the book on this still be so far behind?
The answer became obvious when I started sitting down individually with people. Yesterday, I spent over two hours with one team member working on a single Jira story. The work itself took five minutes. The system setup took ninety minutes.
That's not a tool problem. That's a process problem. And if you're building multi-agent systems on top of teams with no systematic approach to AI, you're not automating work. You're industrializing dysfunction.
Why Agentic Systems Amplify Process Failures
Agentic AI is fundamentally about delegation. You're creating specialized agents—planners, researchers, coders, evaluators, guardrails—and orchestrating them to complete complex workflows. The theory is elegant. Many specialized agents working together should outperform one general-purpose model trying to do everything.
But here's what happens in practice. If your team doesn't have clear handoffs, defined quality gates, or systematic evaluation when humans do the work, adding agents just creates five different ways to fail instead of one.
No clear process for code review? Your code-generation agent now ships bugs faster. No defined acceptance criteria? Your planning agent optimizes for the wrong outcomes at scale. No evaluation loop? Your agentic system confidently produces garbage with incredible efficiency.
The uncomfortable truth:
If your workflow is broken when humans run it, it will be broken at 10x speed when agents run it.
This is why the 19% productivity drop makes perfect sense. Teams bolted AI onto workflows with no quality gates, no systematic onboarding, and no clear standards. The AI didn't make them slower. The lack of process did.
What Systematic Enablement Actually Looks Like
After seeing that 2.87 average, I changed my approach. I'm now running weekly applied AI classes internally and scheduling one-on-one sessions with everyone scoring a 3 or below. The goal isn't to make everyone an AI expert. It's to establish simple, repeatable standards that work regardless of individual enthusiasm.
The person I worked with yesterday went from a 2 to a 3 in those two hours. Not because I taught them prompt engineering or agentic architecture. Because we got their system set up correctly. Now they can replicate those results on future stories—assuming they follow the process.
That last part is critical. Tools without process create one-off wins. Process creates compounding returns.
Here's what systematic enablement requires:
- Standardized tooling setup: Everyone starts from the same baseline configuration, not custom setups that only work on one machine.
- Clear quality gates: Define what "good enough" looks like before AI generates anything, not after.
- Documented workflows: Write down the steps. If you can't document it, you can't automate it.
- Regular check-ins: Progress compounds when people follow process consistently, not when they use AI once and forget about it.
- Realistic expectations: Not everyone on your team is an AI fanboy. Design enablement for the skeptics, not the early adopters.
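As a concrete illustration of the quality-gate bullet above, here is a minimal sketch of acceptance criteria written down before any generation happens. The `QualityGate` class and the specific checks are hypothetical examples, not a real framework:

```python
# A minimal sketch of a pre-defined quality gate for AI-generated output.
# The class and criteria below are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class QualityGate:
    """Acceptance criteria written down BEFORE anything is generated."""
    name: str
    checks: list = field(default_factory=list)  # list of (label, predicate) pairs

    def evaluate(self, output: str) -> list:
        """Return the labels of every failed check (empty list means pass)."""
        return [label for label, check in self.checks if not check(output)]

# Criteria are defined up front, independent of any model output.
gate = QualityGate(
    name="generated-docstring",
    checks=[
        ("non-empty", lambda s: bool(s.strip())),
        ("under-500-chars", lambda s: len(s) <= 500),
        ("no-placeholder-text", lambda s: "TODO" not in s),
    ],
)

failures = gate.evaluate("TODO: write this later")
# The "no-placeholder-text" check fails, so the output is rejected
# before it ever reaches human review.
```

The point is the ordering: the gate exists before the model runs, so "good enough" is never defined retroactively by whatever the model happened to produce.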
This isn't exciting work. It's not about building the most sophisticated multi-agent architecture or chasing the latest model release. It's about making sure your team has the boring infrastructure that lets AI actually improve their work instead of creating new bottlenecks.
The Compounding Cost of Poor Process
When I see teams building agentic systems, I ask a simple question: what happens when one agent in your chain produces bad output? Do you catch it immediately? Does the system degrade gracefully? Or does every downstream agent compound the error until someone manually notices the whole pipeline failed?
Most teams don't have good answers. They're optimizing for the happy path—when everything works. But production isn't the happy path. Production is edge cases, partial failures, and agents confidently hallucinating because nobody built evaluation into the workflow.
This is where bad process becomes catastrophic. A human doing sloppy work might produce one mistake. An agentic system running on sloppy process produces hundreds of mistakes before anyone notices, because the system scaled the dysfunction.
The real question isn't whether your agents can do the work.
It's whether your process can catch when they do it wrong.
If the answer is "we'll manually review everything," you haven't automated anything. You've just added an expensive preprocessing step before humans do the real work anyway.
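One way to avoid that trap is to validate every handoff instead of reviewing everything at the end. Here is a sketch of that idea, assuming a pipeline where each stage is a hypothetical (run, validate) pair; the stage names and functions are illustrative:

```python
# A sketch of per-handoff validation in an agent pipeline. Each stage pairs
# a worker function with a validator; the pipeline fails fast at the first
# bad handoff instead of letting downstream stages compound the error.
# All stage names and functions below are hypothetical stand-ins.
def run_pipeline(stages, initial_input):
    """Run stages in order, validating the output of every handoff."""
    data = initial_input
    for name, run, validate in stages:
        data = run(data)
        if not validate(data):
            # Fail with enough context to know WHICH stage broke.
            raise ValueError(f"stage '{name}' produced invalid output: {data!r}")
    return data

stages = [
    # (name, run, validate)
    ("planner", lambda text: text.split(","), lambda steps: len(steps) > 0),
    ("coder", lambda steps: [s.strip() for s in steps], lambda steps: all(steps)),
]

result = run_pipeline(stages, "parse input, write tests")
```

Because each validator runs at the handoff, a bad planner output is caught before the coder ever sees it, and the error message names the stage that failed rather than leaving you to bisect the whole chain.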
Failure Modes You Need to Plan For
Here are the process failures I see most often when teams scale agentic systems without fixing the underlying workflow:
- No evaluation framework: Teams assume agents work because the output looks reasonable, not because they measured quality against known-good examples.
- No rollback strategy: When an agent produces bad output that makes it to production, there's no systematic way to identify what broke or revert to a working state.
- No cost controls: Agent workflows run without budget caps, and teams don't realize they're burning money until the monthly bill arrives.
- No clear ownership: When something breaks, nobody knows which agent failed or who's responsible for fixing it.
- No gradual degradation: The system assumes all agents work perfectly, so when one fails, the entire pipeline halts instead of gracefully degrading to simpler fallbacks.
Every one of these failure modes comes from deploying AI onto broken process. If you don't have evaluation, rollback, cost controls, ownership, and degradation patterns for human-driven workflows, you won't magically get them when agents take over.
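The cost-control failure mode in particular is cheap to prevent. Here is a minimal sketch of a hard per-run spending cap, assuming you can estimate a cost per agent call (for example from token counts); the tracker class and dollar figures are illustrative:

```python
# A minimal cost-cap sketch. The tracker, agent names, and budget numbers
# are hypothetical; the idea is a hard cap enforced BEFORE each call.
class BudgetExceeded(RuntimeError):
    pass

class CostTracker:
    """Enforce a hard per-run spending cap across all agent calls."""
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, agent: str, cost_usd: float) -> None:
        """Record a call's cost; abort the run if the cap would be exceeded."""
        if self.spent_usd + cost_usd > self.cap_usd:
            raise BudgetExceeded(
                f"{agent} would push spend to "
                f"${self.spent_usd + cost_usd:.2f}, cap is ${self.cap_usd:.2f}"
            )
        self.spent_usd += cost_usd

tracker = CostTracker(cap_usd=1.00)
tracker.charge("planner", 0.30)
tracker.charge("coder", 0.50)
# A third expensive call would trip the cap and halt the run
# instead of silently billing on until the monthly invoice arrives.
```

The same pattern extends to per-agent caps: give each agent its own tracker so one runaway loop can't consume the entire run's budget.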
Implementation Checklist: Fix Process Before Scaling Agents
If you're building agentic systems or scaling AI across a team, here's what needs to be in place before you add more agents:
- Document your current workflow: Write down each step humans take. If you can't document it, you can't automate it reliably.
- Define quality gates: What does "good output" look like? Write explicit acceptance criteria before AI generates anything.
- Build evaluation into every handoff: Each time one agent passes work to another, validate the output. Don't wait until the end of the pipeline.
- Standardize tooling setup: Everyone on the team should start from the same baseline. Custom configurations create hidden dependencies.
- Establish cost budgets: Set per-run and per-agent cost caps. Know when you're spending more than the value you're creating.
- Create rollback procedures: Before you scale, know how to identify failures and revert to the last working state.
- Schedule regular process reviews: AI capabilities change fast. Your process should evolve with them, not ossify around outdated patterns.
- Design for the median user, not the expert: If your system only works for people who deeply understand AI, it won't scale across your team.
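The graceful-degradation item in the checklist can be sketched as a fallback chain: try the sophisticated path first and fall back to progressively simpler strategies instead of halting the pipeline. The strategies below are hypothetical stand-ins, with a simulated outage on the multi-agent path:

```python
# A sketch of graceful degradation: try strategies in priority order and
# fall back to a simpler one instead of halting the whole pipeline.
# The strategy functions here are illustrative stand-ins, not a framework.
def with_fallbacks(strategies, task):
    """Return (strategy_name, result) from the first strategy that succeeds."""
    errors = []
    for name, strategy in strategies:
        try:
            return name, strategy(task)
        except Exception as exc:  # a real system would catch narrower errors
            errors.append((name, exc))
    raise RuntimeError(f"all strategies failed: {errors}")

def flaky_multi_agent(task):
    # Simulated outage of the most sophisticated path.
    raise TimeoutError("orchestrator unavailable")

strategies = [
    ("multi-agent", flaky_multi_agent),
    ("single-model", lambda t: f"summary of {t}"),
    ("template", lambda t: "manual review required"),
]

used, result = with_fallbacks(strategies, "release notes")
# Falls through to the single-model path when the multi-agent path fails.
```

Returning which strategy was used matters: it lets you log degradation events and notice when the system is quietly running on its fallback most of the time.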
None of this is glamorous. But it's the difference between AI that makes your team faster and AI that makes your team 19% slower because nobody can figure out which agent broke the pipeline.
Why This Matters Now
Teams are rushing to deploy agentic systems because the narrative says multi-agent architectures are the future. They're not wrong. But they're skipping the boring prerequisite work.
The teams that will succeed with agentic AI aren't the ones with the most sophisticated architectures. They're the ones with the most disciplined process. The ones who fixed handoffs, quality gates, and evaluation loops before they added agents to run them at scale.
If you're a data scientist like me, you might have a fascination with ML and AI, maybe even an addiction. Your team probably doesn't. Most people are "meh" about AI. They'll use it if it makes their job easier and ignore it if it doesn't. Your job as an enabler isn't to convert them into AI enthusiasts. It's to build process that works whether they're excited about it or not.
That's the only way this scales.
For more on building reliable agentic systems, see the blog archive or explore implementation resources on orchestration patterns, evaluation frameworks, and failure-mode planning.
If your team spends 90 minutes setting up AI to do 5 minutes of work, what does that tell you about your process?