“Sometimes bad things happen to good robots.” That was the conclusion of the research team that created hitchBOT, a robot designed to explore the world and observe and learn about people. For almost two years, hitchBOT safely traveled across Canada, Germany and the Netherlands, befriending travelers and accompanying them on their journeys. In the summer of 2015, hitch entered the United States, where it lasted just two weeks before being destroyed near Philadelphia.
As far as robots go, hitchBOT wasn’t exactly sophisticated. It was able to have simple conversations — responding to spoken commands and reciting facts — as well as keep track of its location and take lots of photographs. But its body was a plastic bucket with decorative tube arms and legs stuck to it, incapable of supporting its own weight. hitch was named for its limitation; to get from place to place, it had to be picked up and carried by people. So, needless to say, hitch wasn’t taking anyone’s job. In fact, lots of people have done the itinerant anthropologist thing, but hitch wasn’t a threat to them, either. Only hitch could do what hitch was designed to do, which was record the experience of a robot roaming a human’s world. And yet, one cannot help but suspect that hitch’s demise was more than just an act of vandalism. Aggression is a product of fear, and as benign as hitch was, the average human has much to fear in the average robot.
What is a robot, anyway? Is it “a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer”? Or is it simply “a mechanical or virtual artificial agent”? These two definitions — likely the two most within reach of anyone asking — bookend quite a wide conceptual span, one that also traces the evolution of the public’s understanding of robots, which, in turn, is in sync with the evolution of everyday technology. In the mid-twentieth century, the autonomous, complex machine was the height of our expectations for the future. But today, that definition applies to so many things in common use that the unambiguously “other” nature of the word “robot” just doesn’t seem suitable. Would you call a car a robot? Or your phone? I cannot imagine you would, and yet both are machines capable of carrying out a series of actions automatically. No, the defining characteristic of the robot has kept pace with the evolving complexity of technology. Today, agency is the thing that elevates the robot above the mass of everyday machines. A robot is an artificial agent. It is a machine that acts on behalf of a human, doing what a human can do, as well as what a human cannot do, or what a human will not do. And that’s where it gets interesting. Or frightening. Or just much more complicated.
A robot — in theory — would be a better worker than a human, impervious to all the things that make you and me unreliably productive: fatigue and boredom, obviously, but also arrogance, laziness, petulance, hubris, avarice, deception, and a whole host of other gifts of the ego. Again, in theory. One could debate the possibility of those things emerging from an artificial intelligence, depending upon what one means by artificial intelligence. I’m going to leave that one alone because it makes things too complicated. For now, let’s just think about robots that are complex enough to pass the Turing Test, which, for the unfamiliar, evaluates a machine’s ability to display intelligence indistinguishable from that of a human. Contrary to popular belief, the test does not judge whether a machine has become conscious. It judges whether a human can tell the difference between a machine and another human. This is why all modern Turing Tests are run on chatbots. Perhaps some day the Turing Test will be run on an android and include the android’s passability as a physical human, but that day is not today. It should be pointed out that Turing himself doubted whether consciousness — machine or human — could ever be truly proven by a test. He began with the question “Can machines think?” only to eventually narrow the scope of his inquiry to “Are there imaginable digital computers which would do well in the imitation game?” So, when I say robot I still mean a machine that, however sophisticated, is the result of human programming and has very real limitations, despite any appearance to the contrary. As in, not likely to rise up and rebel against its enslavement by humans, but quite likely to mess up all kinds of things because humans made it and humans make mistakes. It is the unreliability of humans that makes the agency of robots so interesting.
What, then, are robots good at? Repetition comes to mind first. Robots are excellent at performing repetitive tasks without the degradation of fine motor control that makes anything you and I do over and over and over again for too long follow a predictably parabolic pattern of ascendancy and decline. Human muscle memory is both a feature and a bug. On the one hand, the more we do a thing with our body over and over again, the more control we tend to have in how we do it. Ask any drummer and they’ll tell you that the double-beat is the most boring and most essential exercise a drummer can do to improve their control and accuracy. But when it comes to building strength over time, repetition isn’t your friend. Ask any weight lifter, and they’ll tell you that the secret to strength gains is novelty. The body gets the most out of an exercise done in a variety of ways, which is why weight lifters looking to increase their pectoral strength won’t just do barbell presses on a flat bench. They’ll also do pushups, use dumbbells and machines — all in many different configurations — in order to keep the muscle in a state of growth. Otherwise, the body eventually normalizes as it grows accustomed to repetitive movements. So, as artless and boring as a drumming robot is, there’s no question it could play a simple beat more accurately for longer than a human could. It won’t learn to get any better unless a person programs it to, but it also won’t get bored. Ever. Now, a drumming robot is a novelty, but a spring-winding robot is a necessity. No human can make as perfect a spring, as quickly, over and over again, as a machine can, and God knows, we need our springs. Robots are great at physical tasks that must be done the same way over and over again. And with repetition comes many other conditions that aren’t exactly human-friendly. Speed, for one. Repetition and speed tend to lead to injury in humans, even death. Strength is another.
Even the strongest weight lifter is no match for a machine. That’s why robots do a lot of industrial tasks now, like assembly, pressing, sawing, welding and shredding. Is it better that robots do these things? Probably. Robots can make more things better and faster than we can without as much collateral damage. Like, dying while doing it, for instance.
This is all very obvious, but it’s worth thinking about carefully: many people used to do things that robots now do, and there is no reward for the worker who abdicates to industrial progress. That worker often cannot find something else to do. I wonder what sort of person destroyed hitchBOT. I can’t help imagining torn pink slips scattered amongst hitch’s remains.
This, of course, is not just a blue-collar problem. The industrial robot is, by its nature, much more akin to the mid-century machine than it is to the contemporary artificial agent. To be fair, plenty of industrial robots do tasks that humans never did — never could do — so their agency is much more a matter of extension than it is emulation. But the modern robot is doing things that are much more human in their nature. Many a telephone operator has been replaced by a robot. It takes many interactions with a disembodied, robotic voice before I can speak to a real human at my bank, and it’s clear that my bank would prefer I never speak to a real human. The robot never veers from the script, never loses its temper, never makes a mistake, never takes a break. One robot for thousands of humans. This is good for the bank, but not good for the humans. Perhaps a former customer service representative killed hitchBOT. Or maybe it was a frustrated customer who had just hung up on a cold, inflexible, unempathetic robot. It’s starting to seem like hitchBOT’s enemies could easily outnumber its friends.
Ask the average person what they’d like to spend their time doing, and I doubt that either repetitive industrial labor or repetitive service calls would rank high on their list. That machines are now viable surrogates for much of this work makes for what might be called a de-industrial revolution. It was the invention of machines that created the industrial revolution, robbing many of their jobs but propelling all of us into much more machine-mediated lives. Today, the sophistication of machines has reached a point that makes most of them invisible to us: Factories almost entirely machine-run, such that few humans need ever set foot in one, or even identify one from the outside. Airplanes software-piloted for more airtime than the average passenger would likely feel comfortable with. The voice on the other end of the support line, just an emanation of an unseen server cluster. Not to mention every networked device in our homes — our furniture with a software center — which responds to our commands thanks to far-off datacenters often populated by just a handful of humans. Today’s industrial revolution hides the industry from its beneficiaries quite well. We have the luxury of delegating more work than ever to machines.
But what of the work that machines are ready to do, which we are not ready to give up? What of driving, for instance? Everyone’s favorite nascent fascist startup is not only on the verge of replacing its human taxi drivers with software; it also intends to replace every truck driver on the road. Uber recently acquired a self-driving truck startup named Otto for $700 million, and made its first autonomous freight trip delivering 45,000 cans of Budweiser. This development dovetails well with the pressure Amazon is feeling to handle its own deliveries. That Amazon would build its own delivery operation has, if you ask me, been obvious for a long time. (I thought I was stating the obvious when I wrote about it two years ago.) But if you think that Amazon logistics means a jump in the human headcount at Amazon, you’re not connecting the dots. This is the same company that wants to use drones to deliver your orders. The same company that wants its cylindrical ears in every room of your house. No, those jobs are going to machines.
It seems to me that the autonomous vehicle is going to be among the most controversial technological developments of our time. Not that the technology itself will be a surprise — I’m sure most people understand what self-driving vehicles are and that they are real — but the unfolding reality they create will be. They’re coming to our roads, and I’m not sure there’s much we can do about it. Most commercial airliners are equipped with software that automates much of the flying. This has been true for many years, and nobody is very upset about it. This is probably because no airplanes fly without pilots and co-pilots in the cockpit. Though their role throughout the flight is about as active as that of the “driver” who rode in the cab of Otto’s truck loaded up with beer, human pilots, there as backup, get to keep their jobs. Yet the argument for autonomous vehicles is that software makes fewer mistakes than humans do. Software doesn’t fall asleep at the wheel, drive while intoxicated, find speed thrilling, or take its “eyes” off the road. Accidents will happen with autonomous vehicles, but, in theory, in much smaller numbers. If that’s true, then it makes sense to hand the keys over. But we don’t tend to do things like that rationally. The Ubers and Amazons of the world will force our hands. They will put machines in their vehicles and send those who drove them away. But that won’t be the end of it. A road with one autonomous vehicle on it creates enough of an imbalance to catalyze a policy machine that will inevitably push each of us to give up driving ourselves. The argument is already that software is safer than wetware. The argument that follows is that software is safer with other software, meaning that though an autonomous vehicle can drive better than a human, its safety will be compromised when sharing the road with human drivers.
Once laws are passed that allow autonomous vehicles on the road at all — and this will be the slow part, as it will be a state-by-state process with lots of resistance — the laws and policies that follow will come quickly: laws that make it harder and more expensive to be insured to drive, harder and more expensive to keep your license, and harder and more expensive to own and operate your own vehicle. There isn’t likely to be a thriving road shared by both robot and human drivers. Autonomous driving is itself a tipping point. It forces society’s hand to go full robot.
On the conceptual family tree of robotics, the self-driving car is much closer to autocorrect on your mobile keyboard than to the entire robotic assembly line that produced it. It does a thing that you did for yourself, and — such is the sales pitch — did poorly. But no one feels stripped of their humanity when autocorrect kicks in. As infuriating as it is that my phone still forces incorrect corrections or doesn’t understand that when I respond with “tite” I intend the misspelling, the stakes just aren’t that high. It’s just a text. And even if a machine sometimes meddles in my communications, I can still talk to people face to face and control every word I say. Autonomous vehicles are like that — making judgments for us — but the existential stakes are much higher. There will be a transitional period, but at some point, we will go from being drivers to passengers. We won’t go easily, but we will go. There will be accidents, and some people will say, “SEE! ROBOTS ARE BAD!” and other people will say “There has been one robot accident this year and one hundred human accidents today.” There will be debates and factions and politicians and union strikes and constitutional amendments. But eventually, we’ll run out of ways to slow this down. Some of us will have to find another way to earn money.
I’m not sure I like that future. But it seems more likely to me than any alternative. A different future, one in which we don’t delegate/lose as much to machines, would require different thinking now. The inexorable rise of the machines is no such thing. We still have a choice as to what we decide to do or not do. The modern robot — the machine agent — is a blank slate. It will do what we cannot or will not do only if we decide we cannot or will not do it ourselves. And it seems to me that we have not truly made that decision. We are told that the robots are coming, and that when they arrive this and that will happen. But why must we accept this? Asimov imagined an ethics with which all robots would be programmed — three laws which would govern the actions of robots and protect humans — but he did not imagine an ethics which would govern the actions of humans who create robots. The first of his three laws specified that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” But what, exactly, is meant by “harm” in this law? What if a robot’s very existence allows a human being to come to harm? What if “harm” is interpreted as unemployment, which leads to much more obvious harms, like depression, poverty, suffering or death? The ethics which address such a question are preliminary to any programming. I am sure the question is asked, but what mechanism ensures that it is answered? In a capitalist society, is there any room for regulation of innovation? For slowing it down or altering its course to protect human beings from the collateral damage of technological change? Where might a line be drawn between a world-changing product that creates and distributes more than it takes and one that does the opposite? Who would judge such a thing? I don’t know the answers to these questions, but as technological change accelerates, it seems we can observe enough cycles of cause and effect to conclude that someone must.
We cannot be allowed to make something without fully and collectively reckoning with its consequences.
Those who make robots have chosen to make robot-making the thing they do. It’s a thing that can be done, and which will likely result in some reward for those who do it. But in making robots — which must do something, too — they rob others of the things that they can do and, in most cases, must do for lack of better options. In other words, robot-making is a destructive luxury. What do we stand to gain in replacing human work with robot work, other than unemployed people? Certainly, some wealth, but certainly not widely distributed. What right do a small number of wealthy individuals have to shape the future for the rest of us in this way? Why do we not design a system now which properly distributes responsibility for the future? Behind every robot is a human. When considering the work a robot might do, that fact is essential. Because when a human loses their job to a robot, it’s a human taking their job. The robot is simply the agent.
Whether or not hitchBOT was the victim of a growing human angst toward robots, I don’t know. It seems very possible. But it probably doesn’t matter. hitch stood out by requiring humans to participate in moving it around the world. “What’s in it for me?” many of them could have asked. “This is my world,” some of them might have thought. “It’s either me or him.” How silly, when hitch was just a bucket with a face, and meanwhile, we invite millions and millions of much more powerful robots to live among us. The Amazon Echo, a surveillance robot sold as a computer assistant, is already in millions of homes. It has begun the diplomatic process which some people hope will end with us accepting machine agents that listen all the time, not just when we ask them to. They hope we will come to believe that, because they answer our questions and respond to our commands, they are our agents. But we should know better. Behind every robot is a human.
On Screen: At first, you may laugh at the opening scene of this short video, which depicts a woman slow-dancing with a robot that looks half Asimo, half Stay-Puft Marshmallow Man. But give it its full five minutes. I suspect that by the time the video returns to the dancing, you will be moved by the heart behind this concept — that a domestic robot could be a balm for loneliness, not just for housework.
Heavy Rotation: Love Streams, by Tim Hecker, has been the soundtrack to many commutes and writing sessions this week. It’s great music for thinking about the future.
Recent Tabs: This thread is the best thing I’ve read on Twitter in a very long time. And it’s about typewriters. Kinda. Actually, it’s about wonder and delight. Why Friday’s Massive Internet Outage Was So Scary. The New York Times is buying The Wirecutter for $30 Million. This is a good article about web typography. The Perils of Peak Attention. Another deep undercover piece from Mother Jones: I went undercover with a border militia. Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines. Let’s stop pretending that Silicon Valley tech is visionary. After all, the President has. This is a very thoughtful and analytical piece on the difference between vision-led and execution-led technology companies with an unfortunately bait-ish headline. This is a cool music video, and of course, it was made by machines. Retracing and recreating historic record sleeves. How you build a robot. The Executive Office of the President’s 60-page report on Preparing for the Future of Artificial Intelligence. “It looked great. It was unwatchable.” Memories. Oh, don’t mind me, I’m like.