We Didn't Have to Build AI Like This
On an alternative world where AI is pro-human
Imagine a world fifty years into the future. You wake up and the light streams in. Outside your window, birds. Fresh air, it’s just rained. You stretch. Your computer floats: when you look at the screen, you can see the trees out of the window behind it. All that’s on the screen is a small text box. No email, no text messages, no grotesque Microsoft programs. You have been writing poems, and your words are kept in ink. Now, in the delicate morning, you want to understand what’s changed: to compare drafts of your latest poem to each other, understanding the iterations.
You type into the text box: “Show me the differences between drafts 1, 2, 3, and 4.” The screen flickers, then obeys. Your poem shows up in a grid of four, the changes across each draft highlighted. It is a schematic demonstrating your evolution.
There are no wording suggestions, no spellcheck, no Grammarly. If you asked the text box to write for you, it would refuse. All it can do is carry out things you would be capable of doing yourself, so that curating your voice becomes the centerpiece of your existence. The machine does not want to replace you. It wouldn’t know how.
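The mechanics of that scene, at least, are not fantasy. Comparing drafts is exactly the kind of task a machine can perform without a trace of authorship. A minimal sketch in today’s terms, using Python’s standard difflib and hypothetical draft files, might look like this:

```python
import difflib
from pathlib import Path

# Load four drafts of the poem. The filenames are hypothetical
# placeholders for wherever the drafts actually live.
drafts = [
    Path(f"draft_{i}.txt").read_text().splitlines()
    for i in range(1, 5)
]

# Print what changed between each consecutive pair of drafts:
# the textual equivalent of the highlighted grid in the scene.
for i in range(len(drafts) - 1):
    diff = difflib.unified_diff(
        drafts[i],
        drafts[i + 1],
        fromfile=f"draft {i + 1}",
        tofile=f"draft {i + 2}",
        lineterm="",
    )
    print("\n".join(diff))
```

The point is what the sketch does not contain: no suggestions, no rewrites. It shows the evolution and leaves the writing alone.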
There is a theory known as technological determinism, which posits that technology is an autonomous force that shapes the society around it. Seen through this lens, each new invention is the harbinger of a new society, grittily constructed from the ashes of the old. Take the invention of the wheel, which enabled humans to travel longer distances: under technological determinism, one would say this turned nomadic societies into empires, whose borders could suddenly stretch beyond the perceivable world. Silicon Valley wants you to believe this about its technologies: “Software is eating the world,” said Marc Andreessen. “AI could change the world, but first it is changing Silicon Valley,” blared the New York Times. Notice the directionality of this change: the technology is the subject of the sentence, and we (“the world”) are merely the object being acted upon.
But this is reductionist and, for that reason, good polemic. There is a competing theory, the “social construction of technology,” which suggests the opposite: it is society that shapes the development and use of technology. We create, not neutrally, but in service of our existing ambitions. Here, the wheel was invented as the solution to a direct problem. To drag heavy stones, perhaps. And the logic of conquest, which became empire, was already extant in nomadic societies, which dominated where and how they could. The wheel may have expanded the radius. It did not change the impulse.
It is in this vein that I have come to understand the forces behind the development of artificial intelligence. It is our societal logic, particularly that of efficiency under capitalism, that has driven how artificial intelligence is developed, utilized, exploited. And, most importantly, it didn’t have to be like this. I believe we could imagine a world where AI developed outside capitalist logics—and it is even remarkably easy to do so.
Let us begin with the question of artificial intelligence. The term was coined at a 1956 summer conference at Dartmouth, which examined “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (emphasis mine). The word simulate here, to me, is not incidental. It reflects the fact that AI, in its original conception, was not meant to replace human cognition. A simulation, like the one in The Matrix, is understood to be inferior to the real thing. It can approximate, never abolish.
But artificial intelligence, as conceived since the release of ChatGPT, has long lost the thread of simulation. We do not talk about workers being simulated, or represented. We talk instead about replacement. “AI has replaced work for 20% of full-time employees in the U.S.,” says one survey. Jack Dorsey fired half his workforce and claimed it was because of AI. Goldman Sachs thinks 300 million jobs could be automated by AI, with extreme productivity gains for the rest. Sequoia asked about Gen AI’s $600B question: where is the revenue from all this flashy AI investment? But they soothed themselves by saying “a huge amount of economic value is going to be created by AI.” McKinsey thinks GenAI could add $4.4 trillion to the world economy—yes, I worked on this report—because it will change the “anatomy of work,” “augmenting and automating” workers.
There has been a staggering amount of investment in AI, running into the trillions of dollars, all on the strength of this one promise: that it will replace humans. The question we should be asking is why we keep investing in this technology. Why do we want to replace humans at all? This is the social construction of technology approach: forcing ourselves to understand the underlying motivations.
The answer is deceptively simple. We want to replace humans because humans are inefficient under capitalism. We are “sand in the gears of the techno-capital machine” (a quote from someone I interviewed for my Master’s dissertation). We do annoying things like sleep, and eat, and take holidays, and dream, and love, and sometimes we strive beyond ourselves, to something like divinity. And we die. All of these may be the very point of a life, but they severely endanger our economic potential.
This may seem self-evident. Of course we are inventing AI only because capitalism deems its end goal useful. But it is in its self-evidence that we recognize just how insidious this truth really is.
Think about the latest product releases from OpenAI and Anthropic. ChatGPT for Excel. Testing ads in ChatGPT. Claude Cowork. Introducing shopping research in ChatGPT. Codex. Claude Code. All of these have been developed and brought to market because of an underlying capitalist drive toward either worker efficiency or worker replacement. We code faster. We do our jobs better. We create knowledge for economic productivity. The research these companies spend their trillions on is not in service of human happiness. There is nothing about creating a better world. It is only about solving problems deemed rational by capitalist dogma.
Mark Fisher argued in Capitalist Realism that what neoliberalism had done was “eliminate the very category of value in an ethical sense” (p. 17)—that is, restructure the notion of value to align only with a capitalist ontology. Value, under capitalism, becomes synonymous with profit and productivity gains. Morality, divinity, justice might be lovely ideals, but under total capitalism they have no intrinsic worth beyond their power to make the consumer buy more effectively.
It is under this ontology that we evaluate the outcomes of AI. Has American GDP increased? Are workers completing work faster? Can we reduce headcount, cut costs, increase revenue, charge more? So rarely do we speak of achievements of AI outside this orthodoxy. It is hard now, in the world we have grown up in, to imagine things could ever have been different. Fisher makes this same argument by opening Capitalist Realism with the line he attributes to Jameson and Žižek: “it is easier to imagine the end of the world than the end of capitalism.”
But our thinking is limited only in theory. Imagine, for example, if we had created AI with no interest in productivity. This AI might have been invented for one end only: to democratize coding, since the new language of the world remained inaccessible to anyone unwilling to speak machine. In the same way Python abstracted away lower-level compiled languages, generative AI might have been simply a further abstraction of coding into natural language. The impulse to build is quintessentially human: treehouses, sand castles, pillow forts, empires. A democratized version of coding would be this impulse to build, realized by millions more people. Nor would this be unique to the technology: YouTube democratized film distribution, 808s democratized music production, blogs on the early Internet democratized journalism. AI under this credo would have served technology’s promised liberating purpose.
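To make that abstraction ladder concrete, here is a toy illustration of my own (not anything the labs have shipped): the same trivial task expressed at two rungs of Python, with the natural-language rung on top represented only as the prompt a code-generating model would receive.

```python
numbers = [3, 1, 4, 1, 5, 9]

# Lower rung: an explicit loop, close to how the machine iterates.
total = 0
for n in numbers:
    total += n

# Higher rung: the built-in abstracts the loop away entirely.
assert total == sum(numbers)

# Highest rung, the one generative AI adds: plain language.
# A code-generating model would turn this sentence into the lines
# above, so the human never has to "speak machine" at all.
prompt = "Add up the numbers 3, 1, 4, 1, 5, and 9."
```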
An AI built for creation’s sake alone would have altered how we understand the technology altogether. There would be no desire to replace software engineers, the best of whom recognize their work as artistry like any other field, just as YouTube did not kill the Hollywood director. There would be no questions of efficiency or productivity gains. We would not see humanity as a problem to be solved. Instead, we would see it as the spark that lights a fire of creation, and AI as the means to make the fire burn a little brighter, last a little longer. The best artist you know might have used this form of AI to build something otherwise inaccessible to them, something strange, and a little sad, but mostly beautiful. Something that forces you to look at the world just a little differently.
In philosophical logic, there is a distinction drawn between that which is necessary and that which is sufficient. Necessity implies requirement: fire needs oxygen to burn. Sufficiency implies no requirement, only that something can produce a phenomenon under the right conditions: a spark suffices to light a fire. The actions of the AI labs suggest that humanity is sufficient for meaningful existence, but not necessary. That there is a world without humans—or at least without our cognitive labor—and this world is possible, even preferable. I disagree completely and explicitly. I know there is no form of meaning higher than art, and art necessitates the human.
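For readers who want the distinction in its textbook form (my formalization, not the essay’s): write F for “the fire burns,” O for “oxygen is present,” and S for “a spark occurs.”

```latex
% Necessity: the fire cannot burn without oxygen.
F \implies O
% Sufficiency: under the right conditions, a spark is enough.
S \implies F
% The labs' implicit position, in the same notation, with
% H = "humans are involved" and M = "meaningful existence":
H \implies M \quad \text{but not} \quad M \implies H
```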
But humanity is certainly not necessary for capitalism, though it is sufficient. Without us, a set of rational agents could happily run an economy that perpetuates itself until the heat death of the universe. That economy would probably be more beautiful and more formulaic, in the algebraic sense, than any human-led economic activity. But it would be, in every other sense, dead.
The labs—well, the pursuit of AI generally—operate under a set of assumptions that do not see humanity as necessary. Some of the labs try to put humanity back in the picture by stating that the end goal of their AI is “human flourishing.” But lofty goals ring hollow when the logical priors beneath them describe a system unconcerned with wellbeing. It is our obsession with optimization, replacement, efficiency, and growth that presaged the kind of AI we were willing to build. And the AI we have built has been used for human replacement and murder, even by the companies most focused on human flourishing.
These companies, despite their best intentions (and many of them have questionable intentions anyway), operate under venture capitalism. They take money to make money. This means the development of their technology will always be oriented toward economic growth and productivity, and any product they build will have this as its implicit end goal, no matter how much extra they manage to squirrel away for research on the good life.
The world I envision, where technology is created without these capitalist logics, requires a much greater reorganization of the system than social impact venture capital or whatever band-aid is the new flavor of the month. It requires a genuine investigation into how and why we create technology at all. The state apparatus, in theory dissociated from economic logic, could become the engine of technical innovation—although this always seems to end up in war technology. But the private sector has shown itself to be a horrific guardian of humanity, in part because humanity, with our innate inefficiency, is fundamentally corrosive to market logics.
I dream of a world where humanity is necessary, not just sufficient, for our governing economic logic. It is only through the force of this necessity that society can shape the existence of technologies that are truly, fundamentally liberatory.