Sharat Visweswara
6 min read
  • AI
  • Economics
  • Long Read

Economics of AI

Is AI actually making intelligence “cheap” — or just making it look cheap for a while?

The popular story about AI is that it will make intelligence “cheap” in the same way electricity made power cheap. That story feels right because the software is frictionless and the demos are dazzling: a prompt goes in, coherent output comes out, and it looks like thought has been commoditized. But underneath the interface, intelligence remains a physical process — and physical processes have costs. When those costs are hidden by introductory pricing, hype cycles, or strategic subsidies, we mistake a temporary market condition for a permanent law of nature. The next phase of AI adoption will be defined by a far more grounded question: what is the real cost of producing a unit of useful intelligence, and who pays it?

The energy math

Human brain versus silicon chip
Photo via Unsplash

Start with the most jarring comparison. A human brain runs on roughly 20 watts. A modern data-center accelerator can draw hundreds of watts — an H100-class GPU is on the order of 700 watts under load. Even before you argue about throughput, latency, or precision, the raw energy budget tells you something uncomfortable: silicon is not intrinsically a more efficient substrate for cognition than biology. On some estimates, synaptic operations per watt in the brain beat silicon by orders of magnitude. You can argue about what counts as an “operation,” or how much of the brain’s activity is “useful,” but even after penalizing humans for biological overhead, the gap still lands at one to two orders of magnitude. If intelligence is a commodity, the cheapest factory on Earth is still a person with a sandwich.
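To put rough numbers on that claim, here is a minimal back-of-envelope sketch in Python. The wattages are the ones quoted above; the throughput figures are loudly assumed round numbers (synaptic event rates and delivered GPU throughput are both contested), so read the output as an order-of-magnitude illustration, not a measurement.

```python
# Back-of-envelope: energy efficiency of brain vs. H100-class GPU.
# Wattages are from the text above; ops/sec figures are assumed
# round numbers (both are genuinely contested).

BRAIN_WATTS = 20                      # human brain, ~20 W
GPU_WATTS = 700                       # H100-class accelerator under load

BRAIN_OPS_PER_SEC = 1e15              # assumed synaptic events per second
GPU_OPS_PER_SEC = 1e15                # assumed delivered low-precision ops per second

brain_ops_per_joule = BRAIN_OPS_PER_SEC / BRAIN_WATTS   # 5.0e13
gpu_ops_per_joule = GPU_OPS_PER_SEC / GPU_WATTS         # ~1.4e12

print(f"brain: {brain_ops_per_joule:.1e} ops/J")
print(f"gpu:   {gpu_ops_per_joule:.1e} ops/J")
print(f"gap:   {brain_ops_per_joule / gpu_ops_per_joule:.0f}x")
```

Under these particular assumptions the brain comes out roughly 35x more efficient per joule, squarely inside the one-to-two-orders-of-magnitude range, and you can push the ratio up or down by an order of magnitude depending on how you count an "operation."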

This is not a romantic claim about human uniqueness; it’s an economic claim about energy, capital, and utilization. Humans come with built-in power supplies, built-in cooling, and astonishingly high duty cycles. GPUs require capital investment, data-center infrastructure, electricity contracts, cooling systems, orchestration layers, and an operating organization that keeps the whole stack running. When we price inference at fractions of a cent, we are not revealing the true economics of machine intelligence. We are, in effect, running an extended experiment: “What if intelligence were subsidized?” And like all subsidies, it distorts choices.
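A companion sketch makes the subsidy question concrete: a toy fully-loaded cost model for one accelerator. Every input below (capex, lifetime, utilization, overhead multipliers, electricity price, tokens per second) is a hypothetical placeholder rather than sourced data; the point is how strongly the resulting cost per million tokens depends on assumptions that a flat list price hides.

```python
# Toy fully-loaded cost model for one accelerator.
# Every number is a hypothetical placeholder, not sourced data.

capex = 30_000                        # $ per GPU, amortized below
lifetime_hours = 3 * 365 * 24 * 0.6   # 3 years at an assumed 60% utilization
power_kw = 0.700 * 1.6                # 700 W draw with ~1.6x facility overhead
electricity = 0.10                    # $ per kWh
ops_multiplier = 1.5                  # networking, staff, failures, orchestration

hourly = (capex / lifetime_hours + power_kw * electricity) * ops_multiplier

tokens_per_hour = 2_000 * 3600        # assume 2k tokens/s served per GPU

cost_per_m_tokens = hourly / tokens_per_hour * 1e6
print(f"~${hourly:.2f} per GPU-hour, ~${cost_per_m_tokens:.2f} per million tokens")
```

Halve the assumed utilization and the served throughput, and the figure roughly quadruples. A list price tells you none of this, which is exactly what makes the subsidy experiment so distorting.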

Tool or machine

Machinist with Lathe
Photo via Unsplash

That distortion matters because AI can be deployed in two fundamentally different ways, and these two deployments obey different economic logic.

One deployment treats AI as a tool. A tool amplifies a skilled operator. A lathe in the hands of a machinist increases output, precision, and consistency — but the machinist remains the unit of accountability. The tool is subordinate to human judgment. In modern knowledge work, this is the role of a writing assistant, a code reviewer, a research aide, a translator, a debugger, a summarizer: systems that accelerate the “motions” of cognition without removing the human from the loop.

The other deployment treats AI as a machine. A machine is not merely a tool; it is a process that runs unattended, producing output as an industrial unit. An automated lathe operating in a lights-out factory is not “helping a machinist”; it is replacing the need for a machinist as the bottleneck in that specific production step. In AI terms, this is the autonomous agent that runs an end-to-end pipeline with minimal supervision: it ingests tickets, drafts responses, pushes commits, executes a playbook, or processes paperwork at scale.

The difference is not technical; it’s organizational and economic. The same model can be deployed as a tool or as a machine. What changes is the surrounding system: the tolerance for error, the cost of supervision, the architecture of control, and — most importantly — the pricing of inference.

The Industrial Revolution, precisely

Man with Machine
Photo via Unsplash

Here’s where the Industrial Revolution becomes the right analogy, but only if we’re precise about what it actually did. The Industrial Revolution didn’t “replace labor” in the abstract. It replaced specific human motions as the economic unit. Hand-weaving was not eliminated because people stopped wanting textiles; it was eliminated because the loom turned weaving into a mechanical process that could be capitalized, scaled, and repeated. Yet the era produced both replacement and augmentation. Some technologies collapsed crafts into factories; others created new crafts around designing, operating, and maintaining the machines. A society can experience both dynamics at once, in different sectors and at different layers of the value chain.

AI is poised to do the same. But the distribution of outcomes depends on whether AI primarily shows up as tools in the hands of many, or machines owned by a few. That, in turn, depends on what it costs to run the systems, and how those costs are accounted for inside organizations.

What subsidized pricing hides

Subsidized inference pricing has temporarily made “machine” deployments look universally rational. When the marginal cost of an additional call is near zero, you naturally ask: why not automate everything? Why not replace the human bottleneck? But in real operational environments, replacement is rarely “free.” It requires reliability engineering, monitoring, incident response, compliance, data governance, and a strategy for handling edge cases. Humans are expensive in wages; machines are expensive in systems.

Subsidies mask those systems costs. They also mask the energy cost, the amortized hardware cost, and the organizational cost of integrating AI into production. As a result, many teams have optimized for replacement not because replacement is always superior, but because pricing temporarily made it appear so. This is the subtle trap: you can build an organization around a distorted cost curve and then find yourself stranded when the curve snaps back.
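To see how the trap springs, here is a minimal sketch of the replace-versus-augment decision under two price regimes. Every per-task figure (call counts, systems overhead, escalation cost, wages) is invented for illustration; what matters is that an autonomous pipeline burns far more calls per task, so its ranking flips when the subsidized price snaps back.

```python
# How the replace-vs-augment decision flips when subsidies end.
# All per-task figures are invented for illustration.

def replace_cost(price_per_call, calls_per_task=500,
                 systems_cost=1.00, escalation_cost=0.50):
    """Autonomous pipeline: agent loops burn many calls per task,
    plus amortized monitoring/compliance and expected human escalation."""
    return price_per_call * calls_per_task + systems_cost + escalation_cost

def augment_cost(price_per_call, calls_per_task=10,
                 human_minutes=4, wage_per_minute=1.00):
    """Human in the loop, invoking the model selectively."""
    return price_per_call * calls_per_task + human_minutes * wage_per_minute

for price in (0.0005, 0.02):          # subsidized vs. unsubsidized $/call
    r, a = replace_cost(price), augment_cost(price)
    winner = "replace" if r < a else "augment"
    print(f"${price}/call: replace=${r:.2f}, augment=${a:.2f} -> {winner}")
```

At the subsidized price the pipeline wins comfortably; at the unsubsidized price the augmented human does. An organization built during the first regime inherits the wrong architecture for the second.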

When real costs arrive

Pile of bills
Photo via Unsplash

As subsidies end and real costs arrive, a different calculus emerges. It may be cheaper to augment a 20-watt knowledge worker with targeted AI than to replace them wholesale with an always-on agent pipeline. Tools can be invoked precisely where they create leverage: drafting a first pass, exploring alternatives, checking consistency, doing rote transformations, scanning documents, or compressing information. The human remains the integrator, handling ambiguity, prioritization, and accountability, while the AI does the high-throughput, low-judgment work.

In this world, the key question becomes: Where is the boundary between judgment and motion? Where do errors become expensive? Where does supervision cost dominate inference cost? Where does the value of context and taste exceed the value of raw throughput? The answers will differ by industry, by team, and by task — which means there will not be one “AI outcome.” There will be many, and they will coexist.
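One way to make "where do errors become expensive" operational is as a single inequality: unsupervised automation is rational only while the expected cost of its mistakes stays below the cost of keeping a reviewer in the loop. A minimal sketch, with entirely hypothetical numbers:

```python
# Where does unsupervised automation stop making sense?
# Rational only while expected failure cost < cost of human review.
# All inputs below are hypothetical.

def machine_makes_sense(p_error: float, error_cost: float,
                        review_cost: float) -> bool:
    """True when running without a reviewer is the cheaper bet."""
    return p_error * error_cost < review_cost

# Rote transformation: errors cheap, review comparatively costly.
print(machine_makes_sense(p_error=0.02, error_cost=5.0, review_cost=2.0))       # True

# Contract clause: errors ruinous, review is cheap insurance.
print(machine_makes_sense(p_error=0.02, error_cost=50_000.0, review_cost=40.0)) # False
```

The same model clears the bar as a machine on the rote transformation and fails it on the contract clause, which is exactly why the outcomes will differ by task and coexist.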

Pricing is governance

This is why pricing is not a footnote. Pricing is governance. Pricing determines whether an organization can afford to run intelligence as a background utility, or whether it must treat AI calls like any other scarce resource. It determines whether managers optimize for full automation or for selective leverage. And it determines whether the productivity gains of AI spread broadly through a workforce (tools) or concentrate at the ownership layer (machines).

The patchwork future

The most realistic future is not “AI replaces everyone,” nor “AI helps everyone.” It is the Industrial Revolution pattern: a patchwork. In some domains, AI will be deployed as machines, and value will concentrate where capital and operational excellence sit. In other domains, AI will be deployed as tools, and value will distribute across practitioners who can wield it with skill. The same organization may run both modes simultaneously — a few automated pipelines in the back office, and a wide layer of augmented professionals at the front.

So the economics of AI is, ultimately, the economics of choice. The technology makes many deployments possible; the price makes some of them rational. As real costs replace subsidized ones, the distinction between AI-as-tool and AI-as-machine will stop being a philosophical debate and become a budgeting line item. And in that shift, organizations will be forced into honesty: about what they are trying to automate, what they are trying to amplify, and who they believe should capture the gains.

Patchwork
Photo via Unsplash