Over the past week, there has been a surge of angst in my Twitter feed. Specifically, design angst. The final article in a five-part series by Eli Schiff, titled The Fall of the Designer, had just been published and was being mentioned frequently by many design professionals I know and follow. Most of these mentions, I should point out, came from people who readily admitted that they had not read much or any of the series, but were clearly upset by the idea the title alone suggests — that the designer’s days are numbered. And why wouldn’t they be? Every day sees some idea threatening the livelihood — if not the very purpose — of someone making its way, like a virus, over the internet, fueled more by the success of its provocation than any actual affirmation it may find in reality. That participating in the marketplace of ideas online has conditioned us to extreme sensitivity, insecurity, and a tendency toward rash reactivity is an understatement. We should know better, at this point, than to be baited by titles as hyperbolic as this one, whether to hand over anything as valuable as our attention, at least, or at most, our credence. And yet, I imagine this series has received a lot of attention. It received mine. And I, too, reacted abruptly, firing off a series of messages on Twitter scoffing at the extremity of it, the so-called “fall of the designer.”
“There is no fall of the designer,” I announced. “There is only permanent transition, which is part of what design is in its very nature. All is well.”
Though I, too, hadn’t yet read Schiff’s series, I was confident it couldn’t be true. Unless, of course, I’ve missed something important that he has not. That question was enough reason for me to give Schiff’s series more of my attention than a few 140-character bursts had cost. So I read the thing. All 16,000+ words of it. And I’m relieved to say that I don’t need to retract any tweets.
But let me get real for a moment. The first two segments — with my first impression coming from the title, of course — had me in a perpetual state of OH HELLLL NO. And by the looks of things on Twitter, I was not alone. I was stopping every two sentences to jot down my rebuttal. But by the third section I realized something: This guy doesn’t know. He doesn’t know! This isn’t a serial essay, thought through from start to finish, deliberately structured to slowly and persuasively guide me from point to point, leading me to a grand conclusion, that the end is, in fact, near for the designer. It may sound that way, especially at first. But don’t be fooled. This is the outward processing of an anxious mind. And that’s great news. That is something to which I can absolutely relate. Suddenly the chaos of it all felt familiar. The moments of cogency and thoughtfulness alternating with strange, illogical, red-herring conclusions. Who doesn’t think like that? Especially when afraid? So, Eli, if you’re out there, I get it. Things are scary right now. But you know what? I think it’s going to be OK. And even when it’s not, life is better when you choose to believe that it’s going to be OK. Either way, the future is unknown. So why let the unknown ruin your now by choosing to believe it’s not going to be OK?
So I’m not going to dismantle Eli’s series, point by point. I thought I was going to do that, but to do so wouldn’t be fair to what I’ve come to believe this series really is — an interior monologue, flailing for meaning and stability amidst the shifting sand of our industry. We can choose to believe in the fall of the designer, or we can choose to believe that the designer can remain calm and stand firm. Which I do. And with that said, there are a few points that Eli raises that I want to address. From now on, I’m going to refer to Eli as “the author,” because I’m probably going to get harsh from time to time out of necessity, and it will be easier for me to do that if I put my buddy Eli out of my mind. I’m a lover not a fighter.
The villain, as far as the author is concerned — the trap door through which the designer falls — is flat design. Imagine that. Flat design. Simple, harmless, Eddie Haskell flat design. Gee. Could flat design really be a wolf in sheep’s clothing? Let’s give the author the benefit of the doubt and say maybe.
The author begins by homing in on the crown jewel of interaction design in the early aughts: icon design. He says that the integrity of design, as illustrated by the golden age of icon design — all five years of it — has been trampled upon by flat design. That flat design is ideologically opposed to the creative expression that used to make icons a joy to make, but has reduced them to an exercise in corporate obedience. He shows example after example of icons which used to look one way — different, unique — and now look another: the same. Or, perhaps more clearly put, at home in the context in which they are used. To which I wonder, is that such a problem?
A good icon isn’t defined by its originality. Or, really — in the context of today’s mobile app grid convention — its ability to stand out visually among many. An icon is good if it clearly communicates what will happen when it is clicked or tapped. That’s really it.
The author shows two variants of the UMass BusTrack app’s icons he designed as representative of what icons could be — ostensibly, that they can be more intricate and/or more unique. OK, but let’s call a spade a spade. These particular icons — unique as they may be — are really not that good. They fail the one job the icon is intended to do. Why? Because they’re too intricate. Their busyness makes it difficult to quickly decipher what they represent. It took me too long to figure out that I’m looking at something as theoretically recognizable as a bus. There are so many lines and edges to the image the author created that it looks far more like a labyrinth than a bus. But icons aren’t something we’re supposed to get lost in, are they? Frankly, the flat design version would be better.
But the author doesn’t agree. He says that “the standards for what constitutes ‘good-looking’ have plummeted dramatically.” Setting aside the subjectivity of what is or is not “good looking,” and assuming for just a moment that such a thing is objective, it is worth considering that perhaps the standards for what is “good looking” have plummeted because the expectation for how long any visual convention will last has also dropped. With every new operating system’s release, we expect something fresh. A big change. Some new pretty shinies to play with. But if we’re prioritizing novelty, why wouldn’t we expect good design to suffer?
I do find it interesting that flat design is now being blamed for eroding the value of design with a capital D. Especially when most designers were begging for broader visual standards just a decade ago and would have squealed with delight if anything in the Microsoft, Apple, or Google universes looked like it had taken even the slightest direction from anything designed by Saul Bass. But now that this has happened, it’s the opposite. It’s that design is too homogeneous. That there’s no room for creative expression. Perhaps this is true, but I think the designers complaining about this are misplacing the blame.
“Flat” is a design trend that most benefits the main computing ecosystems in which we all work. It’s a means to a corporate homogeneity and ubiquity, which, in the course of its work, has become representative of what looks professional and competent in general. So while the corporate computing ecosystems continue to expand for their reasons, we assimilate for our own. Whether Microsoft, Google, or Apple care that the applications we design look more or less like their environments is debatable (Material Design documentation notwithstanding), but it’s certainly convenient for them that we care about the graceful integration — down to even the color palette — of our stuff. But ultimately, that we, however subconsciously, submit to that design authority — intentional on their part or not, and I doubt that it is — is a testament to the consolidating forces of capitalism. We’ve accepted a lack of competition on the ecosystem level. The fact that I cannot even name a Martin O’Malley of computing — that three companies have pretty much locked down the world’s computing, both in terms of software and hardware — is pretty good evidence of that. It’s not a design ideology at work here, it’s economics. A design language has emerged, used both by outside designers and developers as well as those within the corporation, not because Apple or Google have an interest in controlling all application development everywhere, but because they have a very clear interest in developers putting their work up for sale in their marketplaces. And developers and designers participating in that marketplace — specifically, designing things that look more at home there — is not some sort of capitulation to the “prevailing ideology.” No, they’re just trying to get people to use their app. If anything it’s a capitulation to the market.
In order to make the ideology charge stick, you need to both prove that such a prevailing ideology exists to the degree that alternatives to it are being excluded by the powers that be in a sort of design price-fixing scheme, not by the aggregate choices of people like you and me, and that the alternative — all the ugly, clumsy and just all around less-good designs that the author laments have been replaced by soulless flat design — is actually better. I haven’t seen that yet. Not that I can’t imagine what that would be like, though.
But then the question remains of whether flat design, regardless of where it comes from, denudes an essential creative expression within the act of design. It seems that making that case would require parsing out the balance that exists within design of creative expression, utility, and service in a way which covers all acts of design. A tall order. Is it even worth doing so? To decry a current trend in design because it doesn’t allow for enough creative expression seems completely off-base to me, so long as that conversation continues to be held in the context in which it is: the computing monopolies that I just mentioned. So what that some designers have chosen to iterate toward visual anonymity, slowly but surely turning something that looked unique into something that doesn’t? Nobody forced them to do that, and nobody is forcing us to. But creative freedom, of which we must honestly stop for a moment and acknowledge we have in historically unprecedented abundance, has to be balanced with utility and fashion. If expression comes at the expense of something’s usefulness, well, then that thing is not going to be used. And if people find something else nicer to look at, then it’s not going to get used. Homer Simpson designed a pretty darn original car which embodies this point rather completely.
Fashion, though, is a metaphor the author himself uses. He suggests that user interface design isn’t enough like fashion, the implication being that designing an icon, for example, should come with the same generous latitude for expression that designing a coat does. But I wonder, how much latitude is there, really, in designing a coat? It must fit the human body; wrapping around a vertically-oriented torso, with extensions for two arms, and a way to fasten it shut to keep cold and moisture away. From a structural standpoint, that doesn’t allow for much “creativity,” does it? Other than its size, length, type of fastener, and whether or not it has a hood, most coats are structurally and functionally the same. Now, they may be made of different materials, in different colors, with different patterns, but little of that has major functional implications beyond the subtle (camouflage, sporting materials, and the like). So there’s huge aesthetic range in the act of designing a coat. And a somewhat narrower range of material options. But little to no room to completely reinvent the coat. Nobody’s going to add or remove a sleeve. So how does this apply to an icon? Well, its emphasis on utility is certainly the same. You’ve got all the aesthetic freedom in the world, but if you can’t read an icon — if you can’t quickly understand what it represents and what will happen when you click it — then it doesn’t work. It doesn’t matter how many hours you spent making an original and detailed drawing of a bus to go on it.
To further ground this ideology charge, the author quotes Frank Chimero, who writes:
“The web is forcing our hands. And this is fine! Many sites will share design solutions, because we’re using the same materials. The consistencies establish best practices; they are proof of design patterns that play off of the needs of a common medium, and not evidence of a visual monoculture.”
The author is right in not just accepting what he calls a “visual monoculture” as simply the manifestation of what Frank calls “best practices.” Flat design is no more a best practice than it is for a man today to grow a beard, wear dark-rimmed glasses, a plaid button-up shirt, and tight, tapered raw denim pants. It’s fashion. There’s that word again. Fashion is great, but fashion comes and goes. But Frank is also on to something with the idea that the web has “forced our hands.” I think there’s some truth to that, because in light of the ever more complex job that designing for the web has become — because the web is no longer something we experience in the same rectangular box on a desk — finding opportunities for reducing complexity and labor makes sense.
All kinds of things can be done today to push far past the so-called minimalism of flat design. Whether they should be is a matter of context. Whether they will be is a matter of choice. And that choice is always going to be made in light of constraints. Time. Money. So when someone like Frank says that the web has forced us into homogeneity, well, I’m not sure I’d strongly disagree. Because given the choice between being unique and being accessible, I’m going to choose accessibility. And given the choice between shipping on time and on budget and creating a million bespoke versions of my app, well, I think the choice is obvious. To call flat design “illogical” (the author’s words) because it doesn’t take advantage of what can be done now thanks to the many affordances of better CSS, better image types, and better screen resolution is actually kind of illogical, in the same way it would be illogical to say that Warren Buffett is illogical for choosing to live in a modest Colonial style home in the Midwest rather than build himself a Taj Mahal. Dude’s got bank, but who are we to say what he should spend it on. Or what he should like. The point is, you can’t really blame people for making a decision that prioritizes getting things done over making a statement, or doing something new, or preserving some high-design integrity. Sometimes there just isn’t time for that. It’s disappointing, and we should talk about what can be done about this, but there’s no point in trying to blame a design trend or any individual(s) for what is essentially the human predicament.
Let’s change the subject and talk about code.
The author cites an industry preference for code as yet another way design is being devalued and threatened. He struggles through what the priority between design and code should be — whether it should be expressed as Design -> Code, or Code -> Design, as if sequence determines value. And while I agree that this struggle is playing out in design and production team workflows everywhere — everyone earnestly trying to get good things made quicker — I think it’s, again, the result of economic forces, not some vast, right-wing conspiracy against design or designers. Like flat design, this phenomenon has more to do, I think, with entrenchment in particular computing ecosystems, and the resulting commoditization of design work through things like frameworks and icon sets. These things exist to ensure stability and increase efficiency, but they rest upon a commitment to other things — a commitment that feels more and more like bedrock (and almost disappears) as more and more people make it. So, at the surface, you’ve got something like Ruby on Rails, which makes development lightning fast but rests on commitments further down the stack: to a database like MySQL, a web server like Apache, an operating system like Linux, and, beneath it all, C. And mostly, nobody needs to care about that. But commitment hierarchies like that aren’t really set in stone. As the usefulness of the outer layers — the frameworks that abstract design and development in exchange for speed and ease — ebbs and flows, so do the fortunes of companies who make software and hardware with their own commitment hierarchies. So long as the next piece of popular hardware is theirs, a company can maintain commitments to computing ecosystems, which of course makes it impossible for designers and developers to do anything other than make the same commitments.
If designers really wanted to take some power back, they’d worry less about the mounting pressure of programming and think more about how the next big piece of hardware might come from someone other than the big three, and how any code-first dogma would collapse as entirely new commitment hierarchies are built.
Of course, I won’t hold my breath waiting for that to happen. And in the meantime, we’re going to have to deal with the infighting between designers and developers, and all the strange dogmas that emerge from it. Like the “No PSD” movement. I won’t defend that in the slightest. It’s rash and silly. But I will say that there are threads within it that are absolutely right on. For example, of course the static nature of Photoshop is a problem. But that’s not because “planning out a design visually is a bastardization of the purity of the code” (the author’s words summing up the supposed perspective of No PSD adherents). No, this is a straw man. For years, we designed static layouts in Photoshop only to find them sorely incomplete when it came time to interpret them with code. But that’s not because code is a better tool than Photoshop. They do completely different things; comparing them makes very little sense.
Ideas go into Photoshop and stay there until they are interpreted and re-articulated with code. No problem there. The “purity” — if it exists at all — is in the idea. But if the idea isn’t taking into account all the in-betweenness of use, then its purity is beside the point. It’s glib. Photoshop alone is an inadequate tool for interaction design because it is a tool that excels at visualizing things in a static environment. Snapshots of possibilities. It’s not good at documenting interactions. It isn’t even good at faking them (animations can be played forward and reverse, but they’re little more than an idealization of what could happen). But neither, really, is code. In between what Photoshop is good at (image creation and ideation) and what code is good at (translating structure, logic, and even aesthetic properties into machine language) is this vast, messy world of people and psychology and actions. And that is where Design is, at its heart. Unfortunately, that is also where design is hardest. Which means if you’re going to write a few unnecessarily shrill blog posts about the state of design, you have to reduce design down to something much smaller than it is.
There’s still a place for Photoshop and even mockups, so on that, I’m in agreement with the author. But they can no longer stand alone. They have to work together more closely with code in order to deal well with the increasing complexity of computing and interactions. And that balance and collaboration is an area where, I think, the author has a balanced view:
“It should go without saying that everything designed on computers relies on a foundation of code, and a responsible designer should be aware of the performance limitations of the code they are working with or depending on. Yet, in order for a designer to be wary of the current limitations of code, they need not be an expert programmer, as many in industry would demand. The solution is for greater communication between the two complementary practices.”
Exactly! But not everyone thinks that is enough.
The author goes on to mention Bret Victor’s conclusion that designers unable to “ship” anything “as is” are “helpless” and “timid” as a result of not being “self-reliant.” Well, that is just bogus. No one is self-reliant. And to try to be so in the complex environments in which most design problems exist today would be naive and irresponsible. Why anyone with experience would imply that a good designer is “self-sufficient” is beyond me. I’m with the author in repudiating that. But I also doubt that this opinion is held by enough people to be as much of a threat as the author thinks it is.
In fact, the roles one could file under the category of Design are more diverse and complex than ever. Described without using the titles we love to bicker about, Design today involves maintaining a point of view and making plans for the future based upon it, having an aesthetic sensibility and making choices in how it is expressed, understanding functional options and making decisions about which are used, organizing and arranging information, observing use, and gathering feedback and synthesizing conclusions. I’m sure there are others I’ve missed. The point is that none of these roles requires programming skills. However, many are enhanced by an understanding of the programmer’s world. A designer working today in the digital space who doesn’t pursue a deeper knowledge of the technology under the skin is just as unprepared to do her job as an industrial designer who doesn’t understand how aluminum works. End of story. As for tools and titles, good lord, who cares?
Distinct from code, automation is the threat posed by software that anyone can use to design something themselves.
Perhaps this is where the illusion of self-sufficiency comes from. Within most design teams and organizations that I’ve been able to observe, I have seen no examples of self-sufficiency. But I’ve certainly heard designers boast of having made a website or something “all by themselves.” But then I’m like, yeah, but you used Squarespace. That’s cool and all, but that’s not self-sufficiency. That’s the illusion of it, because Squarespace has abstracted virtually all the technical expertise you don’t have and made clicking a button the only thing required of you. So great job.
But is something like Squarespace a threat? The author worries that it is. In particular, their logo creator tool (which is funny because this is one thing Squarespace has done that I’ve paid absolutely no attention to whatsoever). But it does exist and yes, you can make a logo with it. Is that a threat to designers? Sure. In the same way that a McDonald’s hamburger is a threat to a chef who makes a much better burger with locally sourced ingredients and the meat of a nicely treated cow that costs ten times as much. Is one clearly less than the other — in quality, integrity, value? In my opinion, yes. The McDonald’s burger is about as devolved from what it could be as something like that can get and still be edible. But, there’s always going to be a place for that in the market. Both out of necessity — that some people simply don’t have the means to think of food consumption any other way than lowest possible cost per calorie, and it verges on elitist for me to maintain the opinion I do about McDonald’s in light of that — and out of nature. On that last point, it is in our nature to choose things, the reasons for which cannot possibly be understood and respected by all. So McDonald’s counts among their customers those who simply cannot afford to eat elsewhere — perhaps even their own kitchens — which is a blight on capitalism and our culture (again, not a design problem), and those whose taste prefers it. Means doesn’t beget taste, nor does lack of means preclude it. Taste is subjective, mysterious, and a major market factor. So, like McDonald’s, Squarespace is, perhaps, a threat to a certain stratum of opportunity in the marketplace. Those who cannot afford the cost of hiring a competent designer and paying for the time it requires (of the designer and client) to produce a well-designed identity system may, though they recognize the value of that approach, choose to go the five-dollar logo route instead. And someone who has money to burn may do the same thing.
This is not a problem that good design can solve, nor one that any amount of designer lobbying (or complaining) can. It’s the other edge of the sword of choice.
But the author worries that the influence of automation isn’t just limited to the choice a client has between hiring a designer or doing it “themselves.” I’m sure that automation has influenced the perception of value in the client/designer relationship, but I’m not sure to the degree he thinks it does. He writes:
“Back then, it must have been inconceivable that in a decade’s time clients would be so forcefully urging icon designers to slash prices.”
Again with the icon designers. Ok, well, huh. I actually had a job designing icons in 2001 and was grossly overpaid. So much so that I recall feeling obliged to take a long time to validate what the client was willing to pay — I was just a kid, what did I know? — which of course resulted in overwrought versions of icons I could have just lifted from the existing Windows ecosystem. Of course I never would have done that because that would have been theft. But I wonder. If my save-to-folder icon basically Taco-Belled the one in Windows 98 — most system icons back then were just remixes of the same basic elements of folders and arrows — did I really avoid the ethical quandary I thought I had? I mean, I got paid big bucks to spit out unnecessarily unique versions of things that already existed. Nevertheless, just a couple of years later, I realized that would be the last time I felt overpaid for design work. I routinely encountered clients that wanted a logo and wanted the experience of getting it to be like receiving a Krispy Kreme donut off the conveyor belt. Same goes for the price. What’s inconceivable to me is the notion that a lack of understanding of the design process and the resulting price pressure is at all a recent phenomenon or causally related to the 21st century design bogeymen — flat design, code supremacy, or automation. Please. That’s like blaming the internet for the exploitation that makes a UNIQLO t-shirt cost less than $10. Except that The Gap has been doing it the same way by way of Old Navy since before people bought clothes online. The digital part just makes it more convenient. The bargains come off the backs of people living pretty miserable lives in far off places. These disparities have to do with the choices we all face living in a world that is increasingly more affordable for those who can afford more, and increasingly arduous for everyone else.
How many designers who justifiably protest the $5 logo keep their UNIQLO t-shirts safely in their blind spot? Right.
To that, the author says, it’s not just the economy, it’s the people.
“It should be unsurprising that clients both large and small react with hostility towards visual designers after the emergence of flat design. Anyone with their eyes open will recognize that material costs for producing design continue to shrink towards almost zero once a designer has a computer, a subscription to a visual design program and a few drawing implements.”
Clients both large and small react this way? Perhaps. But clients large and small always have. It really has nothing to do with material costs. The material costs of designing a logo really haven’t changed much. Even with automation. In fact, one could make a case that they’ve grown. Sure, we use digital tools now, but all told, our digital tools today can be pretty expensive. Laptops and software, if we’re talking about just the things you need to make a logo, aren’t nearly as cheap as pencils and paper. What’s changed are the production costs for identity application. Printing. Signage. Keeping it all under one roof. Markup of all of that — the full-service thing that defines the last century’s notion of what an agency is — was always the agency business model. Price pressure hasn’t come from technology going after the magic of design. The magic has always been hard to price. Price pressures have come from the fact that technology has gone after all the stuff that used to be easily and profitably marked up. What this most immediately threatens is the possibility of making a logo design job for a client that doesn’t need much application a profitable gig. If all it’s ever going to be is a JPEG in the header of a website, then that significantly reduces the scope of the problem. And yes, the cost, which, if you don’t count the not insignificant investment in a laptop and Adobe Creative Suite or whatever, could be thought of as approaching zero.
Still, someone who values design might be willing to pay a value-based price for something truly unique and intelligently conceived. But keeping a business afloat with those beautiful unicorns as the core clientele is going to be quite challenging, because they’re rare, especially in light of the many purveyors of the $5 logo. On the other hand, going for larger clients has its own set of challenges, not to mention those of actually doing the work they need — designing identity systems that span a wide and diverse set of surfaces and contexts. That work cannot approach zero, and that has nothing to do with the computer and software a designer uses. It has to do with the massive increase in complexity, time, and material costs a project like that requires. That hasn’t changed much and isn’t likely to regardless of how much longer flat design is in vogue.
What I find most strange in the author’s assessment of these issues is that he goes on to compare designers who believe they are “immune to the impact of flat design” with designers who thought they were immune to the impact of digital publishing. Perhaps designer obstinacy today and then is comparable. Obstinacy is obstinacy. But is the contemporary trend of flat design truly comparable to the paradigm shift from print to digital publishing? Hardly. Again, at the root of all of this is economics. Not just that digital tools are cheaper than analog ones. Nor that digital processes are more efficient — and thus cheaper — than analog ones. In some cases, these things are obviously true. But for them to be the sole culprit would mean that all previous demand for design services has come to find today’s fast and cheap supply sufficient. That is not the case. Today there is new demand for design services. There are more buyers than in decades past. Which opens up lots of opportunity for supply — mostly at the low end, but also at the high end. After all, I partly reacted with surprise to this idea that today’s digital designers who feel confident in the shadow of the supposed flat design juggernaut are just as glib as yesterday’s print Pollyannas because there are plenty of designers doing quite well today who clearly don’t get digital at all! Some of today’s most celebrated designers operating out of our most iconic and revered studios continue to be handed the keys to big, equally iconic brands, yet deliver work that clearly falls apart — both practically and conceptually — once it leaves the safety of the printed page. You know who I’m talking about, so I’ll leave it at that. But hey, plenty of “big” clients are still paying BIG money for this incompetent, anachronistic work. The internet hasn’t changed that; I doubt flat design will, either.
Ah, the uncertainty of it all.
Where Do We Go From Here?
The author eventually levels off with a discussion of how technological and economic forces have, all told, removed certainty and stability from the careers of most designers. And to be fair, he also (finally!) acknowledges the tenuous connection between flat design and much bigger things like technological change and market forces:
“It is true that the two are not themselves related, but the quality of design they inspire is quite similar. Moreover, the expectations they set for what qualifies as good design in the minds of clients are similar. Clients cannot be blamed for drawing improper conclusions about the value of design when it is commoditized.”
It’s almost as if, over the course of these four parts, he has worked through this issue in part, but not in full. After all, he began by clinging to the artisanal craft of late-90s icon design, almost defining design as a creative prerogative, then transitioned to lamenting the apparently hopeless plight of design in light of the operating system and responsive design, and finally emerged calling for something I can absolutely advocate: a bigger and longer view of the role of design in the world, and more proactivity on the part of working designers. I’m glad that happened, but perhaps a less definitive title for this series would have been in order. Like, I dunno, “I Don’t Know What’s Happening But Am Going to Work Through It Until I Figure Something Out.” That certainly would have offended fewer designers.
The issue of infighting among designers as a whole — over what should be done, who should do it, and what we should call one another — is frustrating. It only increases entrenchment and gridlock. But it’s the result of things getting ever more complicated and the responsibilities of the “Designer” ever more overwhelming. That specialization would be required of a designer in the midst of all of this should be no surprise. But it’s not going to be easy, as specialization requires that we let go of some things we would like to continue doing (and mostly, controlling, because this is often more about love of control than love of the work) and perhaps take on things that we’d rather not do. And, even more challenging to our egos, it requires that we cooperate with those who are new to the field thanks to the opportunity all this complication has created.
Too many critiques of what design is or should be, or how it functions in the marketplace or should, speak more to the org chart to which the critiquing designer is most accustomed than any meaningful truth about design at large. Or, in other words, most design dogma speaks more to the fears of the preaching designer than any meaningful truth about design at large. Our isolated, subjective views of the world distort the greater truth, and the only fix for that is to get together and figure this stuff out in community. We’re on our own, together.
Web design hasn’t sat still long enough to give us a Saul Bass or a Paul Rand or a Milton Glaser or a Charles and Ray Eames. We lack digital design heroes to show us the way. We have Moore’s Law to blame for that. Every tech shift has demanded a reaction from graphic design. But our 20th century print design heroes had centuries of stability before them which supported observation and mastery. We haven’t had that, and we’re certainly not doing a great job of having the patience to get it. So let’s give ourselves and each other a break.
Between the author and all the people he quotes, a wonderful and deep conversation about the nature of design in this world could happen. But instead, because of some need to plant a flag of opinion with a capital “O,” we get a tangled mess of words ripped out of context and pitted against one another in order to support what is ultimately a cry of fear. Of course things are scary. That’s not new! Design is scary. To design is to form with intent. Or, in other words, to bring order to disorder. And so we can choose to fall into disorder, or to remain calm and stand firm. Who’s with me?
Heavy Rotation: I’ve really been enjoying BBC Radio 4’s Coast and Country podcast. If you like field recordings of walks through quiet places narrated by soft, English voices, then this is for you.
Recent Tabs: This first one came by way of my stepdad (who reads this newsletter sometimes; hi!): Compulsive behavior sells, which makes me sad, even though I acknowledge it to be true. I even left the article partway through to look at beach houses on Expedia.com, which the article goes to great lengths to mention is really good at sucking in and addicting pathetic web-browsing sapiens just like me. The Internet Archive got a redesign. “As Nepal collapses and Baltimore burns, we are still able to do little more than document it.” Peter Stoyko’s latest is good. How many decisions does a person make in a day? Beyond interaction. The robot alternate universe where everyone watches Friends alone.