April 07, 2026

We don't sell code

A word on AI mandates.

(10 minute read)

There's something fascinating about the idea of an AI mandate. Tech companies are all-in on AI service providers, their rhetoric pushing for adoption is only escalating, and I can't help but take pause and meditate on how...weird...that is.

Problems create tools

The most common refrain I hear about AI-powered development software — Claude, ChatGPT, Copilot and the like — is that they are tools not unlike IDEs, debuggers, Git, and the terminal. The thing about tools, in a more abstract sense, is that they are purpose-built to solve a specific and defined problem. The IDE provides an environment to interact with the written code, debuggers assist in identifying flaws in implementation, Git catalogs and organizes changes, and the terminal is the interface between a human and a computer that allows code to perform work.

Sure, software engineering could look like flashing dense, impenetrable machine code onto a chip; it isn't too far from reality for real software engineers working with embedded systems. The job is prima facie difficult, and so we use our existing tools to fashion new tools to make the more challenging and flaky parts of the job easier and consistent. A more representative software engineering stack — as it pertains to modern notions of web and app development — is layer upon layer of abstraction from the physical silicon. Without needing to understand Assembly or computer architecture or Kirchhoff's Laws in order to write software, we've been able to drastically change the landscape of this craft since the general-purpose computer was invented; solving those specific and defined problems gives us the freedom to explore and solve other kinds of problems.

Problems can only be solved if they are specific and defined.

Of software and screws

And so that raises the question: what problem is AI (meaning: agentic coding tools) solving? If the purpose of a system is what it does, then the apparent purpose of AI is to produce code. And as many other writers have already pointed out, anyone who claims that the purpose of a software engineer is to produce code simply doesn't understand what a software engineer does.

AI vendors and supposed tech leaders have this belief that coding is a manufacturing job, and from this misconception they arrive at all kinds of bizarre theories on what code is and how it's made. AI mandates are strong indicators that there is no functional difference, in their minds, between a line of code and, say, a screw. If software is the product, and it is composed of lines of code, then it stands to reason that what the business needs is to create more lines of code to make more product, and therefore improvement must mean writing more lines of code faster.

And business leaders get sold — hook, line, and sinker — on the idea that AI should generate code at scale. They fall for it. Every. Time.

Sure, a screw requires intention; an engineer designed the screw, another engineer created the machine that can process raw materials into screws at a certain throughput, and yet another engineer is tasked with maintaining and servicing the screw-making machine. Each of these roles represents a discrete area of expertise. The business is simple: the ability to make screws is its value, the screw-making machine is the method, and selling screws is the revenue source. It's key to understand that the method is a cost, and the product generates revenue.

Software engineers wear all of those hats. They design the architecture for the software they create, they implement those designs with all sorts of industry testing standards and safeguards, and they also maintain the software they create for long-term service. Even here the business is simple, if not exactly mirrored by the screw vendor: what the software does is its value, the software is its method, and how that software is monetized is the revenue source.

But in the end, a line of code is a craft product, not a manufactured good. Every line bears intention in a part-to-whole way that a particular mass-produced screw lacks; the intention in that screw was already applied when the screw-making machine was designed and implemented. Every line of in-house code must work in tandem with every other line such that, in chorus, the objective of the software is achieved. It's not an exercise in assembly, but in orchestration. A distillation of study and practice, not rote repetition. We build software, we sell the work it can perform, but we do not sell lines of code.

Code is not a specific and defined problem, and so AI can't possibly solve it.

Pursuit of elegance

This is ultimately why the apparent purpose of AI code generation doesn't make much sense. Lines of code are not and have never been correlated with revenue; you could even argue that code is inversely correlated with it: each line is an object that developers must spend valuable time maintaining, and that's entirely aside from the view that every line is a potential point of failure.

Solutions are what generate revenue, always have, always will. There is an intersection point in which the aggregate risk and maintenance of all lines of code in a solution exceeds its expected revenue, and so the job is to design a solution in as few moves as possible. We are controlling for elegance, and the best of us can consistently stay on the correct side of that intersection point. That difference between the value of a solution and the cost of its maintenance is one of the cornerstones of any software-based enterprise.

AI is not built for elegance; it's built for scale. It bloats by design; its default behavior is skewed to be additive. There is a perverse incentive for AI vendors to burn tokens at the expense of their clients; it's their whole business model. They are in their bag when developers are prompting, then reprompting, then reprompting again, asking for a one-line code change somewhere in a dense mess of output just to receive a dissertation slathered in "you're absolutely right" and a code change that works, but not in the way it needs to work, so the cycle can start anew. Swipe card for more tokens.

Computer "science"

AI development products are very, very new. Studies are being conducted every day aiming to understand their efficacy, and that body of work includes private research from AI vendors themselves. Put simply: there's no long-term proof that AI coding tools actually create business value. We can certainly try them, but any decision to enact an AI mandate cannot be made on the basis of peer-reviewed, impartial evidence, for the small yet ever so important reason that such evidence doesn't exist yet.

I would like to believe we're an evidence-based industry. Maybe that makes me naive.

The only "evidence" that comes to mind are productivity metrics like commits, reviews, sprint velocity, and other forms of lossy information that business leaders use to make decisions. They are pieces of a broader picture, but in isolation are measures of how quickly code is created, not code quality.

It's the tragedy of the electric company: you only notice them when the power's out. Systems that perform well do so quietly. Maybe we measure them by uptime, days without incident, and so on, but all consideration of the upfront effort required to deliver is defenestrated once AI waltzes into the conversation promising magic. Writing code is easy. Writing good code is daunting. And, believe it or not, the difficulty isn't in the code; it's in the legwork that happens before the IDE is even launched. An engineer who delivers elegant, reliable code will not be treated kindly by productivity metrics: the time spent defining a spec, architecting, planning, coordinating, and documenting (all activities that don't take place in the IDE) is liable to be punished by conventional productivity measurements.

Meanwhile AI, at a mathematical level, has no concept of what good means, only what is statistically likely. At a business level, it's in vendors' best interest that this is the case. Generative development tools are likely to spin out 100 lines of code when 10 would do, and so productivity tracking overrepresents their efficacy. Ergo, business leaders love AI. We can't strap a sensor to good design principles, and so business leaders don't acknowledge them nearly as much as they perhaps should.
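To make the 100-versus-10 point concrete, here's a contrived sketch (hypothetical code, not drawn from any vendor's actual output): two functionally identical ways to sum the squares of the even numbers in a list, one in the padded, step-by-step style generative tools tend to produce and one idiomatic.

```python
def sum_even_squares_verbose(numbers):
    # The bloated version: three passes, three intermediate lists/values.
    # Step 1: collect the even numbers.
    evens = []
    for number in numbers:
        if number % 2 == 0:
            evens.append(number)
    # Step 2: square each even number.
    squares = []
    for even in evens:
        squares.append(even * even)
    # Step 3: accumulate the total.
    total = 0
    for square in squares:
        total = total + square
    return total


def sum_even_squares(numbers):
    # The same behavior in a single expression.
    return sum(n * n for n in numbers if n % 2 == 0)


assert sum_even_squares_verbose([1, 2, 3, 4]) == sum_even_squares([1, 2, 3, 4]) == 20
```

A lines-of-code or diff-size metric scores the verbose version several times higher, and the maintenance surface grows by exactly the same factor.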

Any enterprise requires some amount of trust in its specialists to know what the business needs within their area of specialization. Some developers feel that they get a lot of mileage out of AI tools. Others (myself included) don't need it to do the job well and would rather not use it. Experts need the space to be experts, to choose their tools, and an AI mandate signals an emphatic contempt for their expertise.

It's been said before: if I spent 10 years honing my software development skills such that I can deliver your project overnight, you aren't paying for the night, you're paying for the 10 years.

What is AI for?

And so we remain at square one with the question: what problem is AI solving? Stripping away the idea that code is something that ought to be generated en masse, that code is synonymous with business value, there is exactly one thing that AI does better than a developer.

It outputs code faster.

Again, anyone who thinks that a software engineer's job is to output code does not understand software engineering.

AI does not know your business. It does not know the nuances that come with waves of developers grappling with the same problems over the years. It absolutely has no concept of your organization's tech debt. AI cannot run code. AI cannot test. AI does not know what good output looks like. It cannot coordinate releases. It cannot gather requirements. It cannot use your app. It does not have needs like a user, it doesn't think like a user, it doesn't even think. And the more you tell it, the worse it performs.

But since the training data, the Internet, is filled with examples of syntactically-correct, runnable code, AI can reliably meet the bar for code quality at "does it run?" Code is not synonymous with software; code is the means and software is the ends. Even the most airtight, industry-exemplary, unit-tested codebase is a failure if it cannot both perform the specific and defined work the software needs to accomplish and do so with a degree of efficiency and poise.

Then imagine my — and many others' — utter confusion at the prospect of signing a massive B2B contract with an AI vendor and then mandating its usage, complete with corporate tracking, before the technology has had time to prove itself. Since when was it principled business to pay for something and then set out to prove its value?

If you're a business leader, your role is to consume lossy information to make big-picture decisions, and no one knows your business in totality — not even you. It's why you have departments staffed with specialists. Division of labor makes your business possible, and yet somehow AI vendors get away with preaching that it's the specialists who are holding your business back because they're too slow. They peddle the idea that code can and should be made on a factory line even though we do not sell code; we sell what it does, the work it performs.

So what problem is AI solving? AI mandates make clear the position of tech leadership: people are the problem.

Tags: technology software-engineering career

Previously

BLOCKS: a two-day art project