Few would deny that we are facing a technological revolution driven by artificial intelligence; it is only the scope of the revolution that is debated. In many science and tech circles, the conversations are dominated by comments like: “I don’t think AI is as powerful as people think it is,” or “AI is just another, slightly more advanced tool.” The unspoken assumption is that AI will continue to develop as people in these circles have known it to develop—as a tool, and not an infallible one. The other extreme, which has gained popularity since the launch of ChatGPT, is that AI will advance toward human-level intelligence and replace humans as the dominant intellectual species.
But the conservative view is as dangerous as the sensational. Both assume that the trajectory of AI development is inherent in the very nature of the technology, when in fact it is almost solely dependent on the social framework. The course of advancement is not fixed: as in every technological revolution, the impact of AI will depend on how it is incorporated into our lives.
At the point that anything becomes a discipline it can be imitated by a machine, and human intelligence is nothing but discipline.
Early Biases for AI Progression
The development of AI is guided toward either automation or augmentation of human labor: automation seeks to replace human labor, while augmentation seeks to aid it. At present, neither path is fixed, but there is a strong bias toward automation. From the beginning, the idea behind strong AI was to create a machine intelligence at least as capable as human intelligence: not limited to individual tasks, but broadly capable, which gave rise to the term “artificial general intelligence,” or AGI. Setting aside whether this is actually possible (especially as we have yet to show that Theory of Mind is something an artificial system can ever achieve), we can see that the inception of such an idea biased the progression of AI toward automation of human labor rather than augmentation. The final goal, whether researchers admit to it or not, has always been to replace humans by creating an artificial intelligence that could do anything a human could do and more. Most AI researchers, indeed, make no effort to conceal that AGI is the goal.
For many years, the prevalent view has been that this is not possible, that human beings can never be completely replaced by machines. It is a comfortable view that has enabled society to turn a blind eye to AI advancement until very recently, but as AI accomplishes a growing array of things previously deemed impossible, we would be wise to consider the possibility that all human intellectual labor will be replaced by AI if we continue on the present trajectory. In “Computing Machinery and Intelligence,” Alan Turing described the Imitation Game as follows:
“It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman….We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’”
Turing’s idea in 1950—which, one might argue, had already been put forth by analytic philosophers much earlier—was that, in purely intellectual processes, machines and humans could become indistinguishable, and that humans, insofar as they are laborers, are machines. The whole purpose of the education system, of professional training, or of any system designed to refine human intelligence is to render the mind a machine, reliable and useful. At the point that anything becomes a discipline it can be imitated by a machine, and human intelligence is nothing but discipline.
But, whether or not one agrees that humans can be entirely replaced by machines, the more pertinent question is whether or not replacement, on any level, should be pursued. AI will only advance toward the goals we set for it; thus, human-like intelligence is not an inevitable outcome of AI advancement (there is no inevitable outcome) but is simply the goal researchers have set. The question is not whether we should hinder the advancement of AI, but whether we should change the goal of advancement. Should we advance toward human-level AI that does our work for us (automation), or toward AI as a tool that helps us do work we can’t do (augmentation)?
Automation and the Materialist’s Utopia
A popular view amongst those who favor automation of human labor is, essentially: wouldn’t it be great if everyone had access to unlimited wealth, with little to no human labor required? It sounds ridiculous, but this is exactly what Sam Altman, the CEO of OpenAI, laid out in his 2021 article “Moore’s Law for Everything”:
“As AI produces most of the world’s basic goods and services, people will be freed up to spend more time with people they care about, care for people, appreciate art and nature, or work toward social good.”
In this utopian society, every material need would be met and everyone would be free to spend their time however they chose. The difficulty is how we are to shift into such a society without passing through an indeterminate period in which vast swaths of people are jobless and penniless, without any means of obtaining goods and services. And even if we did succeed in making the shift, the distribution of wealth that Sam Altman, for instance, proposes would be controlled by a very small group of people.
“We should therefore focus on taxing capital rather than labor, and we should use these taxes as an opportunity to directly distribute ownership and wealth to citizens.” - Moore’s Law for Everything
However noble the intentions of those people might be, such a centralization of power is simply not sustainable, as history has painfully demonstrated. In fact, when we reduce Altman’s ideas to their most fundamental meaning, we find that they amount to a glorified system of serfdom. He paints, in a new light, the same disastrous system to which the entire Western system of government, philosophy, and education exists as an antidote. The present system exists because the previous one was too painful to bear. Consider how difficult social change is to accomplish, especially from the bottom up, and then consider that it happened to such a complete degree because what existed before was so terrible. If it seems like I’m picking on Altman, consider him a figurehead for the whole class of people who think they speak for everyone when they say that wealth without work is what people want.
The pursuit of AI as a means of automating human labor began as the scientific exploration of Alan Turing’s Imitation Game, but persisted in society by the same materialistic principles as any other technological advancement. “How can we create an infinitely comfortable existence with as little human effort as possible?” is the question that has historically driven technological progress, whatever the ambitions or ideals of scientists. The trajectory of technological progress is set, in fact, not by scientists, but by those responsible for the integration of technology into society, and it is precisely in this circle that materialism reigns supreme. The fixation on automation stems from the belief that labor is a means to an end, and that the end alone (i.e., material wealth) is valuable. By this logic, the more we can produce, the better; the higher quality we can produce, the better. And the less we have to work, the better.
“Broadly speaking, there are two paths to affording a good life: an individual acquires more money (which makes that person wealthier), or prices fall (which makes everyone wealthier).” - Moore’s Law for Everything
The Social Disease
This kind of materialism is not new; advances in technology have only made its aims more achievable. And yet the consequences of excessive wealth on human communities have been well-documented throughout history; there are countless social commentaries on the negative emotional and spiritual impact wealth without work has on human beings. Not only does it make us horrible people, but it also makes us supremely unhappy. What is often simultaneously highlighted in such commentaries is the oppression of the working class and the unjust distribution of wealth by the elite. It is rightly pointed out that the elite hoarded wealth rather than distributing it to the workers who had a right to it. In the process of decrying such injustice, however, the other injustice is overshadowed: that just as too little wealth was physically harming the working class, so too much wealth was spiritually harming the ruling class.
In the past, the remedy for this has been to distribute wealth fairly, so that no one has too much or too little. But what happens when technology advances to a point where everyone can truly receive excessive wealth? This is what businesspeople and politicians alike seem to hail as the glorious future. But amid promises of equitable wealth distribution, what is ignored is that a society that has too much is just as certain to be miserable as a society that has too little. Materialism for all is as much an affront to humanity as materialism for some.
The utopian society of wealth without work is only a utopia to those who believe that material wealth is the end-all, be-all, and that labor is only a means to achieve it. This ideology lies at the heart of every struggle, past and present, between the haves and have-nots. Without a single drop of sweat, one class lives in luxury. Another class never stops working and lives in poverty. Both are unhappy for the same reason: the materialism by which society is driven cheats people out of a full life, either by placing them in a position of unrewarded labor or by depriving them of labor as an end in itself. If we advance technology toward the materialist’s utopia—unlimited wealth for all—we are not curing the disease, only exchanging one set of symptoms for another.
Materialism for all is as much an affront to humanity as materialism for some.
One might argue that advancing toward a utopian society in which everyone’s needs are met without anyone having to work is less about wealth and more about time: wouldn’t it be great if everyone had the time to do what they love—art, music, enjoying the company of family and friends—without having to worry about things like food and housing? And yet, one would be hard-pressed to find a wealthy society of the past in which any of these things—art, music, or human relationships—flourished rather than suffered. In Art, Clive Bell says of the upper class:
“The characteristic of this Society is that, though it takes an interest in art, it does not take art seriously. Art for it is not a necessity, but an amenity.”
The same idea applies to anything in which human emotion is a vital input: emotional expression flourishes when it arises from a sense of necessity. A society that is too comfortable will therefore exhibit a decline in emotional expression, because the very sense of necessity, the struggle for existence, has been eliminated. Life produces the emotions we seek to express in creative work and in our relationships with others; a dead life, without activity or necessity, produces dead emotions and dead expression. Wealth without work is an empty goal: it will not make us happier, nor more creative, nor kinder, nor more intellectual.
Augmentation: A Non-Materialist Framework for AI
When we discuss the deployment of new technology, then, the emphasis should not be on wealth, but on holistic human well-being, and the trajectory of technological advances should be shifted from production to actual human benefit. How can we deploy new technology to make people happier? To make people healthier? How can we get people more involved instead of less involved, more active instead of less active? Rather than trying to create a wealthier society, we should try to create a healthier, more sustainable one. The idea of using AI to augment human labor rather than to automate it ties into this initiative.
In “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” Erik Brynjolfsson explains that by developing AI to work alongside humans, as a tool rather than a replacement, society can make incredible strides in medicine, science, and technology, while still retaining and even creating jobs for humans. He aptly points out that AI is capable of doing things that are impossible for humans, yet struggles with things that humans can do easily. Thus, rather than wasting time and resources trying to get machines to automate things that humans can already do, we should focus on developing AI to do things that humans can’t do, and thereby accelerate more pressing areas of research, such as climate change and medicine. Whereas augmentation offers sustainable solutions, the materialism behind automation will only serve to perpetuate the rampant production and consumerism that is presently destroying our planet.
Labor is a Form of Power
Most people would agree that staying physically and intellectually active is healthy and fulfilling, and that it feels much more rewarding when it is done for a purpose than when it is done for recreation. This is because labor is a form of power–indeed, it is its most raw form. When we work as a part of society, the power we generate is transferred to the rest of the social machine and eventually takes the form of wealth, which must then be redistributed for society to continue functioning. We only own the power we personally generate, and we only own wealth insofar as it is a form of that power. Those in positions of wealth distribution act as a conduit for the total power of the machine or a subsystem of the machine, but are themselves only representative of power, without any actual ownership of it.
In general, the farther you are from tangible labor processes (e.g., manual tasks), the farther you are from power production and the closer you are to power representation. Power passes through a spectrum of forms: it is at its most raw in the form of physical labor and at its most representative in the form of wealth. There was a time when a single individual could experience this full spectrum of power: a homesteader could work in the field, for instance, and receive both the crops themselves and the monetary profit thereof. As technology has advanced, it has become more common for people to experience power toward the wealth end of the spectrum: there are fewer manual labor jobs, and more power-transfer and wealth-distribution jobs. If AI continues toward automation, this will become even more true, until at last there are no jobs at all, only wealth. If human beings receive wealth but produce no power, human civilization will stagnate and eventually die, because its people will no longer participate in their own existence. We are already seeing the effects of this, especially in the rise of depression amongst young people, as people experience power less in its raw form and more in its representative form. Participation in existence demands participation in the full power spectrum.
The result of a society in which AI is adopted as an augmentation, rather than an automation, of human labor is a social machine that is completely operated and therefore completely owned by humans, with far greater potential for innovation and job-creation than any pre-AI civilization has ever seen. The end of labor is not wealth itself, but ownership of wealth, and ownership is an activity: no one owns wealth unless they have owned it from its conception, from its most raw form, from the very first physical effort that produced it. If it was not your literal, physical effort that produced it, it is not truly yours. Any other kind of ownership is nominal. Still, if our labor contributes to society, we hold a partial ownership of the products of society, because we generated a portion of its collective power. To entrust that labor to machines is to surrender our portion of power.
The Golden Calf
When we discuss the future of AI, the focus tends to be more on the technology itself than on its social integration. We tend to assume that AI is advancing in a fixed direction and that society is along for the ride. For years, we have heard opinions ranging from “AI will never advance beyond the level of a tool” to “AI will eventually gain sentience and kill us all.” Both extremes share a sense of inevitability–that it is impossible to advance beyond a certain point, or that it is impossible not to advance beyond a certain point–and thus both extremes deprive human beings of power.
In the Biblical story of the Golden Calf, the Israelites ask Aaron to make them a god, so he uses their gold jewelry to make a molded calf and the people offer sacrifices to it. When Moses asks him: “What did this people do to you that you have brought so great a sin upon them?” Aaron replies:
“I said to them, ‘Whoever has any gold, let them break it off.’ So they gave it to me, and I cast it into the fire, and this calf came out.” (Exodus 32:21-24)
When we develop technology, we are not throwing gold into a fire and seeing what comes out. We are making the calf ourselves. Artificial intelligence, like any other technology, will become exactly what we make it.