The problem isn’t that AI gets things wrong. It’s that it removes the signal that something might be wrong.
I read a piece last week that didn’t start with capability. It started with a question I’ve been circling for longer than I’d like to admit: what is AI doing when it looks like it’s helping?
Most conversations skip that step. They move straight to output: speed, accuracy, automation. This one paused just before that, and drew a line I haven’t been able to unsee.
Some problems are prediction problems. Others are ambiguity problems. And that distinction, it turns out, is the mechanism I’ve been trying to name.
Prediction Is Real. So Are Its Edges.
Let me be honest about what prediction is good at. It catches tumors earlier than trained eyes. It translates across more than a hundred languages. It surfaces the song you didn’t know you needed. These gains are genuine. I won’t pretend otherwise.
But prediction has conditions. It works when the future resembles the past, when there’s enough quality data, and when the problem has a right answer that exists somewhere. When those conditions break down, so does the prediction.
Strategy rarely meets those conditions. Neither does leadership. Creative work, organizational change, the decision that matters: these operate in shifting terrain. Partial information. Conflicting signals. Questions that change shape as you approach them. That’s ambiguity. And ambiguity is not prediction at a smaller scale. It is a different class of problem entirely.

What The Digestion Gap Was Always Pointing At
When I first wrote about The Digestion Gap, the widening distance between how fast organizations consume information and how slowly they absorb it into actual judgment, I was trying to name something I could feel before I could fully articulate it.
This is what I was circling: AI expands prediction capacity at an extraordinary rate. But the conditions under which prediction works are narrower than they appear. And the problems that matter most to any organization, the strategic ones, the leadership ones, the ones where being wrong costs something real, almost never meet those conditions.
Ambiguity doesn’t disappear when prediction improves. It becomes more visible, because the tools that were supposed to resolve it keep returning confident answers to questions that don’t have confident answers.
The Smooth Surface
Here is the part that has stayed with me.
The piece I read made an observation that sounds almost obvious until you follow it to where it leads. AI speaks with certainty even when it’s guessing. Not because it’s deceptive. Because it has no internal experience of doubt.
Doubt is not a weakness in human reasoning. It is a signal. It tells you where the edges of your understanding are. It slows you down just enough to reconsider. It creates the friction that forces judgment to engage.
When a system removes the felt experience of uncertainty while the underlying ambiguity remains, something structural shifts in how decisions get made. The question still requires interpretation. But it now arrives wearing the clothing of certainty.
The surface becomes smoother. The terrain underneath does not change.
Organizations that have started optimizing for AI-generated clarity are not moving faster through complexity. They’re moving faster through the appearance of simplicity. The map has fewer contour lines, not because the territory has flattened, but because the mapmaking tool doesn’t know how to represent uncertainty.
The Four Proofs, Read Differently Now
Last week I wrote about The Proof of Work Problem: AI has commoditized the outputs of expertise without commoditizing the expertise itself. The four new proofs of work (judgment in context, curation under constraint, translation across domains, presence in the decision) are how genuine expertise stays visible in an era of cheap output.
Through this lens, they read differently.
Judgment in context is the act of navigating what prediction cannot resolve. Curation is the discipline of selecting from an overproduction of plausible answers. Translation is the bridge between generated outputs and the lived reality they’re supposed to serve. And presence is where ambiguity gets reduced, not passed off.
These aren’t residual tasks. They’re the core of the work.

The Capacity We Can’t Afford to Lose
There was an example in the piece that stuck with me, a Fermi estimation exercise. An unanswerable question, deliberately. The point wasn’t to get the number right. It was to break the problem into parts, surface the assumptions, and understand where the uncertainty lived.
An AI will give you a number immediately. A human, working through it, builds a chain of reasoning.
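To make that concrete, here is a minimal sketch of what that chain of reasoning can look like when you write it down. The piece doesn’t share its exact question, so this uses the classic “how many piano tuners work in Chicago?” exercise; every number below is an illustrative assumption carrying an explicit low/high range, which is what keeps the uncertainty visible instead of buried inside one confident figure.

```python
# A Fermi decomposition with every assumption named and ranged.
# The numbers are illustrative guesses, not data.

population           = (2_500_000, 3_000_000)  # people in Chicago
people_per_household = (2.0, 3.0)
piano_ownership      = (0.02, 0.10)            # fraction of households with a piano
tunings_per_year     = (0.5, 1.0)              # tunings per piano per year
tuner_capacity       = (600, 1000)             # tunings one tuner can do per year

def tuners(pop, pph, own, freq, cap):
    """Chain the assumptions into an estimate of working tuners."""
    households = pop / pph
    pianos = households * own
    tunings_needed = pianos * freq
    return tunings_needed / cap

# A genuine low bound pairs small numerators with large denominators,
# and the high bound does the reverse.
low  = tuners(population[0], people_per_household[1], piano_ownership[0],
              tunings_per_year[0], tuner_capacity[1])
high = tuners(population[1], people_per_household[0], piano_ownership[1],
              tunings_per_year[1], tuner_capacity[0])

print(f"Roughly {low:.0f} to {high:.0f} piano tuners")
```

The answer spans more than an order of magnitude, and the largest share of that spread comes from a single assumption: how many households own a piano. That is exactly the kind of thing a single confident number hides.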
The difference isn’t accuracy. It’s understanding.
Understanding doesn’t scale the way prediction does. It requires time, friction, and a willingness to sit with incomplete information. If we build systems that consistently remove that friction, and reward people for using them, we gradually erode the capacity to generate genuine understanding, not just faster answers.
The future that concerns me isn’t AI getting things wrong.
It’s AI getting things smooth.
This continues the thread from The Proof of Work Problem, where we explored how AI commoditized the evidence of expertise without touching the expertise itself. The Smooth Surface names the mechanism underneath: the process by which that erosion becomes invisible, even to the people it’s happening to.
Forward this to: The VP or Director who has rolled out AI tools across their team, reported a capability upgrade to leadership, and hasn’t yet asked what judgment capacity may have been quietly traded in the process.
A Question for You: In the last month, has AI ever returned an answer that felt complete, and you accepted it, even though something in the problem hadn’t quite resolved? What did you do with that feeling?
Madam I’m Adam