Yesterday, I read an article by Ian Bogost.
The message on the surface was spot-on: technology is being built on top of other technology to compensate for that technology’s shortcomings. As a result, the technology “wants” to serve itself rather than the humans who built it. The implication is that this confluence of technology designed to serve other technology will result in some sort of emergent digital sentience, building itself upon itself with zero regard for the humans it was once supposed to serve, and that this is already happening.
While this article all but invokes Skynet, it seems to ignore the fact that technology is not actually designing itself. It just appears that way because of an entirely different automaton, namely the rank-and-file UX designer.
You’ve heard this from me before, and I’m going to say it again: the quality of the designer pool is going down, not up. “Bootcamps” are pumping out low-cost, low-quality so-called UX designers and flooding the market with them. In the process, corporations’ expectations of a UX designer are degenerating from respected specialist to mindless assembly-line worker, someone meant to push pixels around a predetermined framework. This might explain the perennial calls for UX designers to be graphic designers: it keeps them so focused on superficial decoration that they have neither the time nor the functioning skills to actually put up a fight for a better user experience.
This horde of cookie-cutter peons is utterly unable to cope with the forward march of technology. Ian Bogost’s article observes that technology is growing exponentially in complexity and pervasiveness, and that each new generation of technology is never fully understood by the people designing it. This lack of understanding inevitably leads to massive design failures, which are addressed not with a thorough redesign at the fundamental level but with yet another layer of even newer technology. If the tech companies and their peons could not cope with the last generation of gadgets and algorithms, they are even less capable of coping with the next.
Imagine a wizard who summons a gremlin to work in his laboratory. Instead of working, though, the gremlin starts sabotaging the wizard’s work, ripping pages out of spellbooks, knocking vials of potions on the ground, and generally being a pain in the ass. The wizard could simply kill the gremlin, admit he was too naive about conjuring, and study up on the subject before summoning a demon to work for him. But instead, he summons a goblin to subdue the gremlin.
Predictably (to us), the goblin turns out to be just as insubordinate as the gremlin, and starts doing even worse damage. It is also a bit more physically threatening, so the wizard would be putting himself in danger if he tried to kill the goblin. So he summons an orc to kill the goblin. The orc kills the goblin but then immediately turns on the wizard, who, now terrified, summons an ogre to kill the orc. When the ogre inevitably attacks the wizard, he reaches his last resort and summons a dragon, which proceeds to incinerate the ogre, the wizard, his laboratory, and the surrounding countryside.
Those mythical beasts are the technology of the 21st century and the wizard is the industry. At no point has the tech industry taken any steps (beyond lip service) to rein in the out-of-control technology that it has allowed to run rampant. The wizard’s inadequate skills are represented by the mediocre designers who couldn’t fully think through even previous generations of technology and now have to somehow design a cage around it using novel concepts they understand even less. If the wizard couldn’t conjure a docile gremlin, how could he ever hope to conjure a docile dragon?
The tale of the Wizard and the Gremlin does not tell the whole story. It would be nice to believe that the industry was well-intentioned and simply let its work slip out of its grasp, no matter how hard it tried to rein it in. But we all know perfectly well (as Bogost’s article acknowledges) that douchey techbros have turned the customer into the product.
Companies like Facebook and Google do not serve their users; they serve their advertisers. In doing so, they have created technology that is adversarial to humans. Anyone who throws up their hands and claims “I had no idea!” despite such loathsome original intentions is like a parent who beats their child and then cries crocodile tears when the child shoots up a school. Facebook is more like a wizard who started off by summoning an ogre as a personal thug.
Beyond the techbros themselves are the higher-level product designers whose tendency toward social engineering and other insufferable do-gooding has led to product designs that are inherently anti-human. When you hear pinheaded pundits warbling about the “sharing economy” and “end of ownership”, is it any wonder that we’re headed to hell in a handbasket? AI is being patterned on Vladimir Lenin.
If that were not bad enough, there are modern-day Quislings actively calling for laws to give rights to robots. One execrable TED Talk (to which I will not link) made the not-so-veiled claim that there should be laws against attacking robots “because it teaches people to be violent”.
As much as some people want to believe that technology is an inscrutable, sentient force that has grown beyond our control and we must surrender to our robot overlords, the fact is that there are people at all levels of the corporate hierarchy actively building Skynet through a mix of greed, featherbrained utopianism, and good old stupidity.
For those of us who recognize the threat of renegade machines and possess the skills to address it, it is incumbent upon us to spend our waking hours turning back the clock where we can, re-injecting manual control into our devices and software, and stemming the tide everywhere else, making sure that the subsequent layers of technology are designed sanely.
What this means is that you need to be able to say no. You may receive orders to design bad things: things you know are anti-user, things that will likely be cloaked in touchy-feely language to make them sound good for people. These ploys tend to give themselves away with pitches that include phrases like “knowing them better than they know themselves” or “problematic”. You must learn to spot these initiatives and extirpate them before they grow.
As software invades increasingly vital (and lethal) facets of our lives, the price of bad design will go from expensive to unaffordable. As the principles built into tech when tech meant apps for ordering beard balm begin to osmose into military weapons, the old adage may cease to be true: guns really will kill people.