Against AI Productivity
The dominant story about AI is productivity, and I believe it's a shallow one.
It assumes the work is correct and the problem is speed: that if we move faster, automate harder, or remove more humans, the system improves. Parts of that are right, but the core thesis is wrong.
Most systems are slow because their logic is broken.
- Rules are implicit.
- Decisions are improvised.
- Exceptions pile up without reconciliation.
Over time, work stops being about outcomes and becomes about managing contradictions. Slapping AI onto that doesn't fix it; it aggravates it.
The real leverage of AI is not automation; it is exposure. When you force decisions into executable form - rules, constraints, tradeoffs - you surface what was previously hidden: conflicts, ambiguity, and responsibility. The things organizations tend to leave vague.
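A toy sketch of what "executable form" can mean in practice. Everything here is invented for illustration - the policies, names, and numbers - but the point stands: once two informal rules are written down as code, their collision becomes a visible failure instead of something smoothed over by whoever happens to handle the exception.

```python
# Purely illustrative: two made-up policies that quietly contradict each other.
from dataclasses import dataclass


@dataclass
class Order:
    customer_tier: str
    total: float


def finance_policy(order: Order) -> bool:
    """Finance's rule: no discounts on orders under $1,000."""
    return order.total >= 1000


def sales_policy(order: Order) -> bool:
    """Sales' rule: every 'gold' customer always gets a discount."""
    return order.customer_tier == "gold"


def discount_allowed(order: Order) -> bool:
    finance = finance_policy(order)
    sales = sales_policy(order)
    if finance != sales:
        # The conflict was always there; writing the rules down exposes it.
        raise ValueError(
            f"Policy conflict for {order}: finance says {finance}, sales says {sales}"
        )
    return finance


# A gold customer with a $400 order triggers the conflict immediately,
# rather than being resolved ad hoc and inconsistently, ticket by ticket.
discount_allowed(Order(customer_tier="gold", total=400.0))
```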
This is why I'm against "AI productivity".
Productivity optimizes execution. AI destabilizes assumptions.
Used properly, a great AI system should slow things down at the right moments - a luxury we can now afford with the time it saves everywhere else. It forces the questions people have been avoiding:
Why does this rule exist?
Who owns this outcome?
What happens when policies collide?
These questions are hard, but they are necessary.
Automation without coherence just scales nonsense. It makes broken systems louder, faster, and harder to unwind. That's not progress. Pouring 100x more labor power into the same broken system doesn't solve anything, and that doesn't change with AI.
I'm not interested in AI that helps us do more. I'm interested in AI that makes it impossible to hide bad decisions behind process, precedent, or human exhaustion.
The future won't be run by the most productive organizations. It will be run by the ones willing to make their logic explicit and live with the consequences.