In late April, at a university alumni awards banquet, my friends and I had a conversation with an engineering alumnus from the graduating class of 1974. We spoke for perhaps ten minutes before we were whisked away to our assigned seats for the banquet and ceremony, which lasted two hours. I quickly forgot most of what was said in those two hours, but I couldn’t stop thinking about that ten-minute conversation.
When we spoke to him, the alumnus was still working as an embedded firmware engineer. When we asked whether he thought anonymity online would soon become obsolete, he said:
“It’s already obsolete. There is no such thing as anonymity online.” And regarding protection of patient privacy in medical records, he said: “There is no way to protect information online. Anyone who wants to know my medical history can find it.”
After this pessimistic introduction, his tone turned to one of warning: engineers enable technology’s trajectory toward evil, and even if they do so unwittingly, they are responsible for it. Engineers, more than anyone, are responsible for their own ignorance.
“One of the most important skills for an engineer is being able to detect falsehood and realize that nothing can be taken at face value, because people will tell you things that are completely wrong—maybe they don’t know they’re wrong, but they are. And engineering is the only profession in which you have criminal liability for being wrong. Other professions have liability for malpractice, but only engineering has liability simply for being wrong. So you damn well better know what your model means.”
“No large-scale evil is possible without engineering,” he continued. “Evil is enabled by engineering, and the engineers often don’t realize what they are contributing to. Facebook didn’t start out with the intent of creating a cyberbullying machine, but they made a damn good one. Be careful what you engineer, because someone might do something terrible with it.”
“This is the kind of thing you’ll never hear from young engineers and academics,” I confided to my friends afterwards. It’s easy to brush it off as the cynicism of the elderly instead of taking it as the sobering warning that it is.
For our summer book club, my friends and I chose to read Permanent Record by Edward Snowden. It had been on our TBR list for several months, and our decision to read it this summer had nothing to do with the conversation with the alumnus, but it proved timely.
Ed describes the early internet as an unregulated, private experience that was later disrupted by the involvement of government and e-commerce. Long before he recounts the U.S. government’s abuse of its citizens’ electronic data, he points to the tyranny of technology over a people that uses it without understanding it:
“Ours was now a country in which the cost of replacing a broken machine with a newer model was typically lower than the cost of having it fixed by an expert, which itself was typically lower than the cost of sourcing the parts and figuring out how to fix it yourself. This fact alone virtually guaranteed technological tyranny, which was perpetuated not by the technology itself but by the ignorance of everyone who used it daily and yet failed to understand it.”
– Permanent Record by Edward Snowden
The government’s abuse of the internet was the inevitable result of a citizenry all too willing to relinquish its autonomy for the convenience of technology.
In a previous post, The Illusion of Human Power, I explored the idea that human beings have gained the illusion of control by aligning themselves to absolute forces which they are powerless to alter. Power in every system, political or technological, is the same substance—the product of our alignment to some immutable force. And no system that produces or distributes power is closed.
Technology inevitably has profound political and social consequences because it dictates the way that power is produced and distributed. A technology that must be operated by specialists has its political correlate in oligarchy, even dictatorship.
I’m again reminded of the words of our friend: “No large-scale evil is possible without engineering. Evil is enabled by engineering, and the engineers often don’t realize what they are contributing to.”
It is true that technologies like the internet and artificial intelligence (AI) evolve faster than we can comprehend them, but this does not absolve engineers of culpability.
A technology that must be operated by specialists has its political correlate in oligarchy.
Engineers are trained to design technology for the end user, but one of the greatest accelerants of oppressive technology is designing technology for the user’s convenience, not for their empowerment. Making it easy to click buttons is not the same as making the working principles of the device easy to understand. No matter how simple the user interface is, the layers of intricate technology under the hood ensure that the specialized few will always hold the power.
The CrowdStrike outage on July 19th proved how reliant society has become on technology it has no power over and is barely aware exists. This summer was the first time I had ever heard the name CrowdStrike, and yet its software runs on the computers that scientists and laymen alike use daily. The millions of users across the world had no say in the deployment of the system update that led to the outage and were dependent on a very small group of people to remedy it.
There is a place for specialized technology (in healthcare, for example). The danger lies in technology intended for the people’s use, which the people cannot or will not understand. A people willing to extend its circle of trust from medical professionals to electronics salesmen is ripe for tyranny.
One of the greatest accelerants of oppressive technology is designing technology for the user’s convenience, not for their empowerment.
I am not advocating a Ted Kaczynski-esque abolition of technology, but rather a restructuring of tech development that prioritizes user control of deployment, operation, and repair. People should be able to use Windows 3.1 for as long as they want, if it’s a system they understand and can repair themselves. The big questions in tech development should be:
How do we design technology that puts the user in control?
And what is the analog contingency plan if the technology fails completely?
This will likely mean a simplification of the technology we see in everyday life, but it will also mean more jobs and a diminished wealth gap, as tech operation and upkeep are distributed amongst laymen rather than specialists. Development of analog contingency plans will also result in the return of jobs that the digital age rendered obsolete.
Evil is accelerated by systems-level control by a few instead of individual control by all. This is as true in tech as it is in politics. I’ve complained about this before, in the context of AI for automation rather than augmentation, in my piece Wealth Without Work Makes Us Miserable. There I specifically pointed to Sam Altman’s plan for infinite wealth detailed in Moore's Law for Everything, which, it turns out, is just serfdom. But the trend extends far beyond AI, to every new technology that enters public circulation to make our lives easier. Luxury comes at the price of increased technological complexity, which is practically impossible for most users to unravel, and the educated elite are more than willing to say, “Let our engineers handle that.”
One may argue that individuals, at least in the U.S., are free to refuse the over-engineered products they are offered, and yet it is becoming increasingly difficult to participate meaningfully in society without them. Unless we are willing to live as extremists, we accept the necessity of using technology that we don’t fully understand and over which we have limited power.
The feeling of powerlessness engenders desperation and its fruits: extremism and violence. We saw this in the crimes of the Unabomber during the 1980s and ’90s, when the internet was just coming into existence. And we have seen it in the recent incident of violence against the UnitedHealthcare CEO. Although the latter may not have been directly motivated by technological tyranny, it is symptomatic of a system that disempowers individuals under the guise of benevolence.
Most young tech developers, whether in academic programs or in regular 9-to-5s, believe that they are making the world a better place by designing technology that makes people’s lives easier, when in fact they are enabling exploitation on a massive, global scale. But rather than being the inevitable result of technological advancement, I believe this is just bad engineering. And bad engineering always, always hurts the people more than it hurts the engineers.
I usually write at least one piece each year about how tech development driven by materialism and greed harms people and harms the earth. But it isn’t enough to say “Big Tech Bad” without offering at least some hints at solutions.
Bad engineering always hurts the people more than it hurts the engineers.
I like what Erik Brynjolfsson has to say about AI for augmentation (see: "The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence"), and I think his ideas can be extended to other forms of complex technology people use every day.
I’m drawn to the idea of augmenting, rather than automating, human labor because I believe humans are better off—emotionally and physically—when they are empowered to provide for themselves with their own two hands, without relying on massive technological systems. This is why theories like post-labor economics make me nervous. The difficulty, of course, is that massive technological systems like the internet and AI are not going away, and while I believe engineers can and should alter the course of these technologies to empower individuals, we are not going to get the Miyazaki-esque, agrarian society we might yearn for.
We can self-regulate our access to technology, but achieving independence from Big Tech at a systemic level requires more thought. I believe that engineers are primarily responsible for bringing about this change.
“A change in the law is infinitely more difficult to achieve than a change in a technological standard.” – Permanent Record by Edward Snowden
> a restructuring of tech development that prioritizes user control of deployment, operation, and repair.
These are the ideals that drove the early free software movement ("free as in freedom, not as in beer," as the saying went). At the time, the approach seemed brilliant: free software was licensed so that anyone who used its source code had to free their own software in turn. Copyright law was thus co-opted into expanding freedom, in contrast to its original intent. We slyly called these licenses Copyleft.
A few years later, Amazon launched AWS. Because AWS provides access to services rather than distributing software, free software licenses don't obligate Amazon to distribute its own source code. In the two decades since, there has been a massive centralization into cloud platforms owned by Big Tech. These platforms are quite literally built on a foundation of free software, but they couldn't be further from the ideal of "free as in freedom".
In the early days of public access to the internet (which I remember!), it was going to democratize information. At that time, it felt like newspapers and the evening news controlled access to information. We celebrated the anticipated decline of the old media and their power (TV stations and newspapers used to be some of the most profitable businesses). The great irony is that many of these businesses were local and independent before they were disrupted by technology. That disruption forced the consolidation of ownership in traditional media and produced centralized control over information like we never imagined: first in Google's control over search, then in the rise of centralized social media platforms.
We thought the internet would put all human knowledge at our fingertips, but didn't anticipate that it would also bring every form of ignorance, vice, bigotry and hate to everyone's fingertips as well.
It seems that even when individual engineers are idealistic, their idealism gets subverted as society aligns itself to the structures those engineers create.
We have built systems designed to collect, collate, and analyze information, and such processes can be manipulated or leveraged.
That manipulation can be venal, or it can be (and most often is) inadvertent. For example, a large news organization wants the highest ratings. It is not explicitly motivated toward deception, but it may be less concerned if deception is the result of its drive for higher profitability.
But as consumers, what choice do we have? There is no way we can access, let alone digest, the tsunami of information available in our modern society.
This is similar to the dilemma we face in the quality of the food we consume. It costs effort and money to discern and obtain good-quality food, and our diet reflects the fact that we are typically lazy and cheap. Food processors do not intend to produce low-quality food; their intent is to give us what we "want," even if that ultimately conflicts with our best interests.