Software Development Has an Endgame
Since GPT appeared, software development has changed its interface roughly every eighteen months.
Not its essence, at first glance. Just its interface.
You can trace the sequence pretty cleanly. First came the editor: a blank file, a keyboard, and a human being translating thought into syntax by hand. Then came smarter IDEs and completion systems, which reduced the friction but preserved the basic model. Then came AI copilots, which stopped merely checking code and started suggesting it. Now we are in the age of agents: systems that can read a codebase, interpret a task, modify multiple files, run tests, inspect failures, and try again.
Most people still talk about this as if it were an incremental tool shift. Better autocomplete. Better IDEs. Better assistants.
I think that is the wrong frame.
What we are seeing is not the next productivity tool. We are watching software development move toward its endgame: the point where humans are no longer primarily responsible for producing code, and are instead responsible for defining what code should do, what constraints it must obey, and how to tell whether it is good.
That does not mean programmers disappear. It means programming, in the sense we inherited it, stops being the center of the job.
For a long time, software development was organized around a strange assumption: that the most valuable use of human intelligence was to sit at the narrowest part of the funnel and manually spell out implementation details. We got so used to this arrangement that we mistook it for something fundamental. It was never fundamental. It was just a consequence of what machines could not yet do.
In the editor era, the machine had almost no initiative. It would compile what you wrote, execute what you specified, and fail exactly where you told it to fail. This gave developers enormous control, but it also forced them to spend an absurd amount of cognitive effort on low-level transcription. Much of "software engineering" was not really engineering at all. It was manual encoding. Humans were acting as biological compilers for ideas that already existed in their heads.
Then completion systems arrived and started eating away at that waste. The machine got good at predicting the next token, then the next line, then the next block. Boilerplate became cheap. Syntax errors became less frequent. Common patterns became almost preloaded. This mattered, but less than people thought. Completion made coding faster, but it did not challenge the underlying structure of the work. The human still had to decompose the task, maintain the context, drive the sequence, and own the result. The machine was an amplifier, not an actor.
Agents are different because they break that structure. Once a system can take a goal instead of a keystroke, the center of gravity moves. The relevant question stops being "can it help me write this?" and becomes "can it carry this forward without me?" That is not a cosmetic change. That is a change in the unit of labor.
For decades, the unit of software work was the line of code, then the function, then maybe the pull request. Now the unit is becoming the task itself. Implement this endpoint. Migrate this module. Add this capability. Fix this failure mode. Refactor this subsystem to match these architectural constraints. At that point, the human is no longer operating as the direct producer of code. The human becomes something closer to a director, or a specifier, or a reviewer with unusually high bandwidth.
And once that shift happens, the rest follows almost mechanically.
If a machine can complete part of a task, it will eventually complete most of one. If it can complete most of one, it will eventually run entire development loops: implement, test, revise, compare, optimize. If it can do that inside a sufficiently well-designed environment, then the bottleneck in software development is no longer writing code. The bottleneck becomes deciding what should be written, what tradeoffs matter, and what standards distinguish a good solution from a merely working one.
That is why I think "endgame" is the right word.
Not because development stops changing. It will keep changing. Not because code stops mattering. It will still matter. But because the historical role of the human programmer as the primary generator of code is nearing its logical endpoint. Once code generation becomes cheap, abundant, and iterative, there is no stable reason for manual code production to remain the default. It survives, but as an exception: for debugging, for edge cases, for craft, for unusually sensitive systems, for people who simply like doing it. Important exceptions, maybe even prestigious ones. But exceptions all the same.
This has happened before in every serious abstraction shift. People do not go back to hand-allocating memory unless they have to. They do not go back to writing machine code unless there is a very specific reason. They do not provision modern infrastructure by manually clicking servers into existence because they enjoy suffering. Once a higher-level control surface becomes viable, labor moves upward. The old layer does not vanish, but it loses its claim on everyday work.
The same thing is happening to coding itself.
The sentimental objection is that writing code is where the real thinking happens. Sometimes that is true. Often it is not. Often writing code is where thought gets serialized into a rigid medium because no better interface exists. The act of typing has been mistaken for the act of reasoning because, for a long time, they were tightly coupled. AI is beginning to pry them apart. You can now think at the level of systems, requirements, invariants, constraints, architecture, and product behavior while delegating more of the literal production process. As that delegation improves, the value of being the fastest typist in the room declines sharply.
That decline is not temporary. It is structural.
The uncomfortable implication is that much of the industry still organizes talent, status, and identity around the wrong layer. We celebrate implementation fluency because that was historically scarce. But scarcity is moving. The thing that will become rare is not the ability to produce code. Machines will produce oceans of it. The rare thing will be the ability to define what ought to exist, to encode judgment into tools and processes, and to distinguish elegant systems from merely functional sludge.
This is where the future starts to look less like "AI replacing engineers" and more like a brutal sorting mechanism among engineers. The people who were mainly valuable because they could personally push code through the narrow pipe will lose leverage. The people who can articulate intent clearly, design robust constraints, build feedback loops, and recognize quality will gain it. The machine does not erase engineering judgment. It makes it the only part that matters.
And this is also why so many teams will misunderstand what is happening to them.
They will think the hard part is getting access to stronger models. It is not. Model quality matters, of course, but it is not the decisive variable. The decisive variable is whether your organization can make its standards legible. If your architecture exists only as tribal knowledge, the agent will violate it. If your tests are shallow, the agent will optimize for shallow correctness. If your conventions live in the heads of a few senior engineers, the machine will never reliably reproduce them. The problem will not be that the AI is dumb. The problem will be that your engineering culture was never encoded in a form a machine could use.
For years, teams treated documentation, test coverage, and clear architectural constraints as nice-to-haves: signs of maturity, maybe, but not existential. Agentic development changes the economics of that laziness. A messy codebase used to slow humans down. Now it poisons the automation layer itself. Ambiguity is no longer just an inconvenience. It is an amplifier of bad output. If you cannot state what good looks like in a machine-readable way, you are building a factory that can only mass-produce mistakes.
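One way to picture what "machine-readable standards" means in practice: a convention that usually lives in senior engineers' heads, such as "the API layer must never import from the database layer directly," can be expressed as an automated check that an agent's output either passes or fails. The sketch below is hypothetical; the layer names, the dependency map, and the rule itself are invented for illustration.

```python
# A minimal sketch of turning a tribal rule ("the api layer must not
# import from the db layer directly") into a machine-checkable
# constraint. All module and layer names here are hypothetical.

FORBIDDEN = {("api", "db")}  # (importer_layer, imported_layer) pairs

def layer_of(module: str) -> str:
    """Treat the top-level package as the architectural layer."""
    return module.split(".")[0]

def violations(imports: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Return every import edge that crosses a forbidden boundary."""
    bad = []
    for module, deps in imports.items():
        for dep in deps:
            if (layer_of(module), layer_of(dep)) in FORBIDDEN:
                bad.append((module, dep))
    return bad

# A dependency map an agent's proposed change could be checked against.
observed = {
    "api.users": ["services.auth", "db.models"],  # violates the rule
    "services.auth": ["db.models"],               # allowed
}
print(violations(observed))  # [('api.users', 'db.models')]
```

A real version would extract the import graph from source, but the point survives the toy: once the rule is executable, the agent gets the same feedback a senior reviewer would have given, automatically and every time.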
That, more than the models themselves, is the real discontinuity.
Software engineering is being transformed from an artisanal production activity into a control problem.
That sentence sounds abstract, but it is concrete. The highest-leverage engineer in the near future will not be the one who can out-code the machine on demand. It will be the one who can build the environment in which the machine reliably produces the right code. The work shifts from direct construction to steering: defining goals, exposing constraints, shaping feedback, catching failure modes, refining evaluation, and adjusting the system until the outputs become trustworthy.
Once that becomes true, the center of the profession changes with it.
There will still be people writing code manually, just as there are still people who write assembly, tune kernels, or handcraft SQL that an ORM would mangle. But the mainstream of the field is moving elsewhere. The default developer workflow is converging on something new: specify what you want, let machines generate candidate implementations, let automated systems test and compare them, intervene where judgment is needed, and keep refining the loop.
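The converging workflow described above can be sketched as a loop: generate candidate implementations, score each against an executable specification, and keep the best. In this toy version the "candidates" are hand-written stand-ins for what a model or agent would propose; the function names and test cases are invented for illustration.

```python
# A toy sketch of the specify/generate/test/select loop.
# Everything named here is hypothetical, not a real agent API.
from typing import Callable

def spec_score(impl: Callable[[int], int]) -> int:
    """Executable specification: count the test cases a candidate passes.
    The task here is integer square root."""
    cases = [(0, 0), (1, 1), (4, 2), (9, 3)]
    return sum(1 for x, want in cases if impl(x) == want)

# Candidate implementations, as an agent might propose them.
candidates = [
    lambda x: x // 2,          # wrong on most inputs
    lambda x: int(x ** 0.5),   # satisfies the spec
]

# Selection: keep the candidate that best satisfies the spec.
best = max(candidates, key=spec_score)
print(spec_score(best))  # 4
```

The human effort in this loop lives in `spec_score`, not in the candidates: whoever defines what passing means controls what the system produces, which is the essay's point about where judgment migrates.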
In that world, code is no longer the primary artifact of human effort. It is an intermediate product emitted by a larger system of intent and verification.
That is the endgame.
Not the end of software development, but the end of its long fixation on manual code production as the core act of engineering.
For most of the history of the field, we had no choice. If you wanted software, a human had to sit down and write it. We built an entire culture, career ladder, and mythology around that constraint. Now the constraint is weakening. As it weakens, so does the worldview built on top of it.
The future developer is not the person closest to the keyboard.
It is the person who can tell the machine what matters.
March 12, 2026