Can a Robot Do a Designer’s Job?

Automation isn’t as much of a threat as we have been told.

“Wouldn’t it be better if websites just made themselves?”

That was the pitch for The Grid, a so-called website creation service “powered by artificial intelligence” that launched through a crowdfunding campaign in 2014. Less than five years later, after selling many memberships and releasing a very lackluster initial “Version 2” (was there ever a Version 1? Some questions are not meant to be answered), The Grid ghosted. They never tweeted again. They locked their customers out of their own websites. You can still read their parting words on their website if you want to get a feel for a less-than-classy farewell to thousands of people who essentially had money stolen from them.

Now, as hard as it is to resist rubbernecking this wreck, The Grid’s backstory isn’t really the point here. They failed to deliver a product and the aftermath was ugly. But it’s worth asking two questions: First, was it ever actually possible to deliver what they promised? And second, would a working service like The Grid have made designers obsolete?

Let’s Get Our Techno-Terminology Straight

The trouble with automation, algorithms, machine learning, and artificial intelligence is that these terms are too often used interchangeably, and most people don’t actually understand the differences between them. But each of these words refers to a very specific idea, and understanding that idea is necessary for sorting out what is and is not possible, and what should and should not be considered a threat to any given profession.

So, here is a necessary and pedantic digression about words and ideas.

Automation is a very general concept. It means the practice of reducing human intervention in processes. Automation can be very good! The water wheel is an example of automation. So are the printing press, the assembly line, the car wash, and so on. Modernity is almost entirely defined by automation. But in recent years, the drive to automate work that was safely and effectively done by people in order to boost efficiency and profit has created plenty of legitimate angst. It’s reasonable to question how far automation should be pushed, and whether it ultimately offers convenience to the privileged at the expense of the dignity of everyone else.

Algorithms are, basically, instructions. At their simplest, they are recipes for actions, defined in programming languages so that things can be automated. A very good historical example is the comparison between how Yahoo! initially indexed the web and how Google did it later. Yahoo! paid people to gather web pages and build lists of them in various categories. When they began doing this, the web was small enough that manually mapping it didn’t seem as absurd a task as it looks in hindsight. By the time Google’s creators were working on their graduate project at Stanford, the problem they sought to solve was automating that task. Their first algorithm was relatively simple to understand: it instructed a computer program to “crawl” the web by following hyperlinks in order to determine what pages existed and what information they contained. Their second algorithm was much more complex and remains a fiercely protected trade secret. It’s the one that matches web page content to search queries. Though somewhat mysterious, this algorithm isn’t magical; it’s just much more complicated. There are far more variables within it, but the critical thing to understand is that a human put them all there. An algorithm is an instruction manual written by a person for a machine.
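To make the “instruction manual” point concrete, here is a toy sketch of a crawling algorithm like the one described above. The “web” is just a hypothetical in-memory dictionary mapping each page to the pages it links to (all the page names are invented for illustration); the algorithm is nothing but a short, human-written recipe the machine follows.

```python
# A toy "web": each page maps to the pages it links to.
# No real network access is involved -- this is purely illustrative.
TOY_WEB = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["post-1", "post-2"],
    "post-1": ["blog"],
    "post-2": ["blog", "home"],
}

def crawl(web, start):
    """Visit every page reachable from `start` by following links."""
    discovered = [start]
    queue = [start]
    while queue:
        page = queue.pop(0)  # take the next page to examine
        for link in web.get(page, []):
            if link not in discovered:  # only record new pages
                discovered.append(link)
                queue.append(link)
    return discovered

print(crawl(TOY_WEB, "home"))
```

Every decision here — what counts as a page, which links to follow, in what order — was made by the person who wrote the recipe, not by the machine executing it.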

Machine Learning is a metaphor. When you and I learn, it’s because we notice patterns and ask questions about them. We make connections through our inherent, conscious inquisitiveness. When a machine “learns,” it is because it has been programmed to follow algorithms that refer to statistical models and large data sets, by which the machine can identify patterns more rapidly than a human can. Again, the machine doesn’t create the algorithm; humans do. The statistical models are also provided by humans. The data sets are sometimes gathered by humans (though often by machines now), but they consist of information that has been created by humans. Machine Learning is very much like the automation we experience when we use a calculator: a long, complicated, often intellectually challenging process is condensed for us. Machine Learning is good! We have it to thank for major medical and scientific advances that our slow and error-prone minds may never have made without the precision of machines.
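The division of labor described above — humans supply the model and the data, the machine finds the pattern — can be sketched in a few lines. This is a minimal, hypothetical example using ordinary least squares: the human chooses the statistical model (a straight line) and gathers the numbers; the machine merely does the condensed arithmetic, calculator-style.

```python
def fit_line(xs, ys):
    """Find the slope and intercept that best fit the data
    (ordinary least squares, computed by hand, no libraries)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # The "learning": arithmetic a human specified in advance.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Data a human gathered; the pattern hiding in it is y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

Nothing in the function noticed anything or asked a question; the “learned” pattern falls out of a recipe a person wrote.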

Artificial Intelligence is probably the most meaningless term of this group. It means nothing more than the differentiation between intelligence demonstrated by humans and by machines. The problem, of course, is in the meaning of the word intelligence. When we say intelligence, do we mean the ability to think? Or do we mean the ability to be perceived as thinking? A calculator is obviously not intelligent, but when it produces an exponential function for compound interest in less than a second, it certainly feels like it is! This is the basic principle at work in the Turing Test. Originally (and aptly) called “The Imitation Game” by Alan Turing, it was conceived as a way of evaluating how effectively a machine could appear intelligent. Today, this often plays out in text-based dialogues with machines, in which human judges determine whether they are conversing with a human or a machine. “Beating” the Turing Test says less about the actual intelligence of a machine than about the discernment of its human judges. Even today, we have no established means of evaluating whether actual intelligence is at work, and because we can hardly agree upon what it means to be conscious, the leap we often make in equating the two is always a mistake. If by Artificial Intelligence we mean a machine’s ability to perform tasks possible for a human brain, but faster and with more consistent accuracy, then it is both possible and ubiquitous. Pocket calculators and digital clocks are artificial intelligences, and so is Siri. But if we mean a conscious machine, I will refer you to centuries of accumulated philosophical thought (from humans) on the matter.

OK, so those are our terms. Each of them refers to real things. And each of them is also frequently used to refer to unreal things: speculative ideas that may, someday, become real, but as of today are not. That creates unproductive fear, as well as hope and excitement on which a lot of resources are wasted.

But What is Actually Possible?

In a debate on whether we should trust the promise of artificial intelligence, Jaron Lanier put the real/speculative divide well in a rant that got more than a few laughs:

“AI is, more than anything else, a funding category for research…it incorporates a wide range of disciplines and pursuits that might or might not have been bundled together and they were bundled by historical accident in many cases. AI steps on its own foot periodically; what happens is that there is a kind of crazy wild-eyedness of like ‘we’re about to understand how to replicate a person’ and then the funders are like, ‘boy, you sure didn’t deliver,’ and there’s this thing called an AI Winter that keeps on happening and then you watch your grad students have their careers ruined and then it happens again. We step on our own feet constantly with this fantasy life and if we could only just be good engineers and scientists we would free ourselves from this burden of constant self-destruction.”

Lanier’s take on the automation spectrum is that it continually spawns boom-and-bust cycles because of our willful misunderstanding of what we are pursuing and its utility. It’s simply misleading to promise AI when all you were ever capable of delivering is limited automation based upon a simple algorithm. “AI” is often invoked to help fantasy masquerade as reality.

The Grid’s creators were irresponsible with their language; they promised AI when they actually meant a very limited, template-based automation. At the time, there was no end to the thinkpieces on Medium about whether The Grid had put web designers on notice. And while I was always champing at the bit to publish an anti-Grid rant of my own, the most thoughtful take at the time (before it was obvious that the emperor had no clothes) would have been to point out that The Grid only ever promised to “AI” a tiny piece of “design” work. Web design is more than arranging content on a page, but when The Grid proposed that a website could just make itself, what they really meant was that it could kind of look that way, if we, the observers, were OK with giving up all control over the outcome.

Whether The Grid failed because they spent all their funding, or lost interest in the problem, or it got too hard, or their solution didn’t actually work, or what they promised actually can’t be done, is ultimately irrelevant. What they made was a set of templates and a graphical user interface for constructing a very simple website, one that took away your ability to control much more than a set of colors. Their bet was that a sufficiently robust set of templates is indistinguishable from custom design. It isn’t. But they didn’t really make a good-faith wager. We don’t actually know how robust that set of templates was. We don’t actually know how complex the algorithm was that stood in for a user’s choices.

But The Grid never promised strategy, functional planning, information architecture, content strategy, creative direction, art direction, or writing. A website was never going to make itself. And so even with a Grid that wasn’t a sham, a designer still had much to do at every stage of a typical project. The Grid would have only been a potential tool at one stage, and given how it actually worked, a tool likely to be tried and quickly discarded.

It’s a fabulous fable of the grandiosity, hubris, and avarice of our technocapitalist culture. But it’s also a perfect object lesson on the automation-to-AI spectrum and our true standing on it today. We are capable of building really effective automation tools. We are not capable of replicating or replacing humans. There are many other real-world examples of this, from call centers to chat bots, but I’ll assume your own chaotic and frustrating experiences will supply all the evidence you need.

Automation is useful when it does something faster, more precisely, and more consistently than we can. There is much we all stand to gain from automation, if we can maintain a clear sense of purpose for it in our world. If we use it to assist humans, it should never be a threat. If we use it to replace them, then I can’t begrudge any person who rages against the machine (though I’d gently advise them to rage against the machine’s makers instead).

Automation Doesn’t Have to Be a Threat

A professor of mine once said that “The only way to make good things is to make many things.” He was talking about making drawings, and how the more you make, the more you are able to exercise and set aside forces of ego and will that keep images from being art. In its most practical application, one could interpret his words to simply mean “practice makes perfect.” But knowing him as I did, the goal wasn’t perfection as much as it was potential. I experienced it for myself. When I followed his instruction, and made so many drawings that I began to feel like a machine, I also began to make discoveries in them — to notice things and make connections in marks and lines that I hadn’t consciously intended, but had begun to emerge as my mind wandered and my body took over.

I think about that automation-like experience often, and I wonder if it has something to offer when it comes to assessing the threat of machine-automation on other creative practices, like design. Could a machine that can generate layouts — that can rapidly distribute and arrange information on a surface over and over again, providing an abundance of options faster than we would have the patience to create ourselves — help us to make better work? Today, we hear of machine-learning producing scripts and images and songs, and this is almost always to serve a punchline on Twitter, but the notion is that when fed enough raw material, the machine can generate faster than a person. That could be a good thing. It could enable us to focus on critical matters of choice and taste; we would decide what a machine is fed and what is kept of what it produced.
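The imagined machine above — one that churns out an abundance of layout options for a human to choose among — could be sketched very simply. Everything in this example is hypothetical (the block names, the idea of a layout as an ordering of content blocks); the point is only the division of labor: the machine iterates tirelessly, and the human applies taste to the results.

```python
import random

# Hypothetical content blocks a designer might be arranging.
BLOCKS = ["headline", "image", "body", "caption"]

def generate_layouts(blocks, count, seed=0):
    """Produce `count` distinct orderings of the content blocks.
    A fixed seed keeps the 'machine' reproducible for this sketch."""
    rng = random.Random(seed)
    layouts = []
    while len(layouts) < count:
        candidate = blocks[:]
        rng.shuffle(candidate)  # the machine tries another arrangement
        if candidate not in layouts:  # keep only new options
            layouts.append(candidate)
    return layouts

# The machine generates; the human chooses.
for i, layout in enumerate(generate_layouts(BLOCKS, 10)):
    print(i, " / ".join(layout))
```

A real tool would generate far richer variations than orderings, but the shape is the same: abundance from the machine, judgment from the person.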

Automation will not result in an instantaneous obsolescence of human designers, any more than computer-assisted data crawling and simulations have made scientists obsolete. In the near-term, the biggest gain automation will bring to any profession is time. It will offer humans the ability to try more things faster.

“AI” generative design — the stuff of The Grid’s promises — will come. And just like The Grid, whoever packages it up and sells it will be the recipient of many angry takes and Twitter threads. But if someone can apply the technology in a principled way and market it based upon an emphasis on preserving the heart of what a human can do that a machine cannot, I think designers can welcome automation. It can be their calculator. It can iterate faster than a human mind can, so that the human minds who use it can focus on the application of taste. It can help humans make progress faster. Not all the time or every time, but sometimes.

Whether that tool — automation provided by algorithmic machine-learning — is actually an example of “artificial intelligence” will still be a matter of debate. Whether it is “conscious” will remain an absurd question. The conflation of terms will persist and continue to be what is actually at the heart of the public discourse around technological unemployment. No robot steals anyone’s job. People do that to one another. But no machine need undermine a person’s purpose. We can have humane automation, if we’re willing to let go of science fiction.




Recent Tabs

Here’s a humanoid robotic arm that uses artificial muscles.
The Future is Not a Solution.
An Art-Making Robot Was Detained on Her Way to Show at the Pyramids Because Egyptian Customs Officials Thought She Was a Spy.
The Climate Disaster is Here.
This interview with Gary Paulsen is short but deep.
Xerox Alto Is Rebuilt and Reconnected by the Living Computer Museum.
Global power consumption ‘to almost double’ by 2050.
How Organizations are Like Slime Molds.
“If I’m being deeply honest with myself, a huge portion of my goodwill toward this movie can be chalked up to the many extremely sick shots of spaceships arriving/departing…”
Hey Facebook, I Made a Metaverse 27 Years Ago. It was terrible then, and it’s terrible now.
This Griffon Vulture with a massive wingspan being released into the wild.

Written by Christopher Butler on October 29, 2021, in Essays
