What Only Humans Can Do: A First-Principles View
Agents keep getting better. They write code, draft specs, answer questions, and orchestrate workflows. The question that surfaces again and again: what’s left for humans?
A first-principles answer: the things that require stakes (real skin in the game) and synthesis across contexts that no single agent sees. Strategy, judgment, coherence, and learning. Not because agents can’t do them at all, but because doing them well means being accountable for outcomes and integrating signals agents can’t access.
Strategy
Strategy is deciding what to build and why. It requires tradeoffs: we can’t do everything, so we choose. Those choices have consequences. Agents can propose options and analyze tradeoffs—but they don’t bear the consequences. Humans do. Strategy is inherently stakeholder-dependent. The human (or org) that will live with the outcome has to own the decision. Agents inform; humans decide.
Judgment
Judgment is knowing when something is good enough, when it’s wrong, and when it’s dangerous. It requires calibration: experience with what “good” looks like, what failure modes exist, and what the stakes are. Agents can produce confidence scores and heuristics—but they don’t have the lived experience of “we shipped something like this and it bit us.” Judgment is learned through consequence. Humans have that feedback loop in a way agents don’t (yet).
Coherence
Coherence is ensuring that the many parts of a system—product, codebase, org—hang together. Agents optimize for local tasks. They don’t see the full system. The human (architect, PM, lead) holds the mental model of how it all fits. Coherence requires integration across domains and over time. That’s a human role—until we have agents that truly “understand” the whole.
Learning
Learning is taking outcomes—what shipped, what worked, what didn’t—and feeding them back into the next cycle. Agents can be retrained on data, but the kind of learning that changes strategy—“we were wrong about the user need,” “we need to pivot”—requires interpretation. Humans connect outcomes to decisions, extract lessons, and update mental models. That loop is narrative and reflective. Agents can assist; they don’t yet own it.
The Implication
The human role in the agent era isn’t to do what agents can’t. It’s to own the things that require stakes and synthesis. That’s a narrower scope than “write all the code,” but it’s higher leverage. The orgs that explicitly design for these roles (strategy, judgment, coherence, learning) will get the most from their agents. The ones that treat humans as “agent assistants” will underutilize both humans and agents.