What is real? That is going to be the question of the future. Not because we'll have the cognitive surplus to consider those questions we left behind in the smoky, tapestry-draped, blacklit dorm rooms of our youth, but because daily experiences, so subtly technological as to look and sound and feel as natural as, well, nature, will provoke us to ask.
Imagine, for instance, visiting the pyramids. I've never been there, but I've imagined it countless times. When I was a child, a book of black-and-white line drawings could take me there. Now, I can "go" there with Google's StreetView, which, I must say, is pretty incredible as far as experiences I can have while sitting at my desk are concerned. But as excellent as Google's surrogacy is, it can only get me so close. I'm still a few meters from the stones themselves. What if I want to get closer? A couple of clicks of the zoom don't quite do it; the image gets bigger, but it also gets blurrier.
How about the Mona Lisa? Imagine visiting the Louvre and standing before Leonardo's masterpiece. Now that's something I've actually seen, and, unfortunately, it looks much more like this than what you're probably imagining. Most of what I saw on the day I visited the Louvre I saw through the glowing rectangle of someone else's camera screen. I waited a long time to get closer, but standing a few meters from the painting – which is only two-and-a-half feet tall and hung behind a thick panel of bulletproof glass that reflects every flash of the hundreds of cameras pointed at it constantly – I wondered, why bother? A Google image search, honestly, delivers a better experience of viewing this particular painting than actually being in the same room with it does. But being here, in my office, an entire ocean away, is certainly not a substitute for being in Paris. Nor is StreetView, though it gets me closer.
Now imagine standing by the pyramids again, or, if you prefer, in the Mona Lisa gallery at the Louvre. This time, you're wearing glasses that let you zoom in close enough to see the texture of the stones at the base of the pyramid, or Leonardo's brushstrokes. Farther and in greater detail than you could see with your naked eye, even through the things and people that stand in your way. This is possible today. Almost ten years ago, Microsoft's Live Labs demoed Photosynth, an imaging technology that "stitched" together existing photos of well-known landmarks, creating a virtual space you could explore through a computer screen. Couple that with the Oculus, and you basically have it. Add a decade or two to the equation, and perhaps you can do it without a big, black plastic box strapped to your head. Maybe we'll iterate from something like Google Glass, which still lets us be functional – albeit dumb-looking and obnoxious – humans in the physical world, to contact lenses, to something embedded directly in the brain. My guess is we'll skip the contact lenses, though. There's probably only so small a camera can get, and we'll have an easier time convincing the brain it's seeing something it isn't than shrinking a camera down to the point where it doesn't hurt to blink over it. So it's probably going to happen. But here's the question: are you really seeing what you're seeing? Does it matter? In this case, probably not. That the super-high-res image of the Mona Lisa is not the actual Mona Lisa won't matter one bit to you when the actual Mona Lisa is buried under fifty tourist heads and iPhone screens. You'll know the image you're seeing was taken by somebody else at some other time, but the trick will be good enough for your brain, and after all, you'll still be standing in Paris. The best of both worlds, right?
But what if, while you're standing there gazing through skulls and screens at your Pseudo Lisa, you suddenly hear the raspy voice of a man in your head – Leonardo himself, telling you what it was like to paint the Mona Lisa hundreds of years ago? It's a museum director's dream. A completely immersive experience. And it, too, is possible today. Forget those handheld players with earphones, or the iPhone apps with guided tours. I'm talking direct to your brain. All you need is a good script, a good actor who can do Leonardo in twenty different languages, and, oh yeah, a sonic beam. It's been done. You may remember Holosonic's audio spotlight technology, which was used to project a focused "beam" of sound from a SoHo billboard for the Paranormal State television show directly into the heads of unwitting passersby. People were pretty freaked out by that. Maybe you also heard about the Talking Window demo, which used "bone conduction" technology to release high-frequency oscillations that the brain converts into sound. Some people were freaked out by that one, too – not because of the whole hearing-voices-in-your-head thing, but because that particular implementation required that your face actually touch the grubby window of a public train. But again, standing there at the Louvre, studying the Mona Lisa in greater detail than the eye could ever grant, with Leonardo's soliloquy in your head, "is it all real?" is a meaningful question to ask. You know it isn't, but how many experiences like this would you need to have throughout your daily life before it simply didn't matter anymore? These kinds of enhancements and augmentations can't be expected to stay limited to entertainment and tourism. After all, two of the working examples I've already mentioned are for advertising. So yeah, throw a little Tupac hologram into the mix and you can expect to have Steve Jobs himself tell you why you should buy the iPhone 11 while you're standing at the Apple Store in 2020.
Too soon? Please. You can't expect any company to be respectful of the dead when there's money to be made.
It goes deeper still. Technology will augment experiences by adding things to them, but it will also do so by taking things away. That's what the Active Listening project is all about. After a wildly successful Kickstarter campaign, they're well on their way to delivering wireless earbuds that will let you "optimize the way you hear the world." Specifically, by filtering out the stuff you don't want to hear. A neat idea, sure. And certainly fascinating in the way we can pinpoint particular needles in the haystack of audible frequencies. But to what extent is the collage of sounds – some harsh, some annoying – a necessary and good part of living in the world? And is removing things you don't like an optimal way to experience it? Yes, the early adopter will be the douchey business-class traveler who just can't bear to hear that whining brat in coach shrill over the civilized clinking of his cocktail tumbler. But what about when it finds its way to the rest of us? Might sound filtering be dangerous? What if filtering out the high register of your neighbor's alarm clock also filters out the sound of your building's fire alarm? What if filtering out traffic puts you in front of a Mack truck because you didn't hear it coming? Perhaps we'll figure all that out. But we are still left with the same question: is the silence of your flight real when you've filtered out all the sounds you don't want to hear? Does it matter, so long as you are the one in control of the filtering?
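Picking needles out of the haystack of audible frequencies is, at its core, standard signal processing: a band-stop filter attenuates one slice of the spectrum and passes everything else through. The sketch below is not Active Listening's actual implementation (which isn't public); it's a minimal illustration using SciPy's Butterworth filter design, with the sample rate and the 3 kHz "alarm" tone chosen arbitrarily for the example.

```python
import numpy as np
from scipy import signal

def bandstop(audio, fs, low_hz, high_hz, order=4):
    """Attenuate frequencies between low_hz and high_hz, passing the rest."""
    sos = signal.butter(order, [low_hz, high_hz],
                        btype="bandstop", fs=fs, output="sos")
    return signal.sosfilt(sos, audio)

# One second of audio: a 440 Hz hum plus a piercing 3 kHz tone.
fs = 16000
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 3000 * t)

# Remove the 2.5-3.5 kHz band; the 440 Hz hum passes through untouched.
quiet = bandstop(mix, fs, 2500, 3500)
```

The danger raised above falls out of the same math: a band wide enough to silence the neighbor's alarm clock silences any other sound, fire alarm included, whose energy happens to live in that band.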
There are plenty of other examples of technologically additive and subtractive experiences, and they're not limited to sight and sound. Even taste is hackable. A VR headset created by Japanese researcher Takuji Narumi can alter the image of food being eaten by its wearer – making it larger or smaller, for instance – while the six tubes connected to it release strong smells that, matched to the image, can completely change a subject's perception of taste. Narumi intends his device to have a variety of uses, including weight loss and hospital rehabilitation. Clinical trials are underway with a group of longtime Soylent devotees whose palates are at a "zero point," having only tasted gruel for the last few years. Just kidding on that last part, but hey, someone's gotta bankroll this thing, and why not start with Valley richies who have already demonstrated an enthusiasm for living like a robot? They're gonna love this just as much as those Active Listening buds. But as easy as it is to mock those who will surely be the first to enthusiastically use these kinds of technologies, the question of their impact on how life is experienced will trickle down just as the technologies themselves do. And with all of these enhancements rewiring our brains, it's a sobering thought that perhaps we won't remember what it was like before them, anyway.
All of these technologies put us in an altered state: seeing and hearing and even tasting things that aren't there. So what of reality? Are any of these things so different from walking about the world wearing earbuds, with a perpetual personal soundtrack that has nothing to do with the places you go other than that you and your device are there? Though they may be smaller, less visible, and more perceptually rich, they are still fundamentally about altering our environment – something we do in countless and subjective ways today. So why does it feel different? The filter bubble, as initially coined, was the unintentional result of the data-mined social networking experience, but what happens when we intentionally create filter bubbles of our own that follow us everywhere we go? How loose can the weave of the fabric of society get before it no longer holds together on the basis of shared experience? And how many people will have been driven mad in the process of augmenting our experience? The lack of a definitive what-is will only contribute to a proliferation of alternatives, some more harmless and isolated than others, some widespread and crazy (see Project Blue Beam).
What is real? And how many people have to experience it for it to be so? That's the question, isn't it? If this kind of technology becomes pervasive enough, then the question of what is really there becomes much more difficult to answer. So much of reality is the combination of subjective experience and cultural agreements about the meaning of shared subjective experiences. If displaced experiences – whether as benign as "supersight" from a tourist path beside the pyramids at Giza, as subversive as personal filter bubbles, or as manipulative as psyops warfare – become the norm, then reality itself will become much more complicated to interpret. Reality is often defined self-referentially; it's what is, as opposed to, say, what could or should be. To expand the vernacular to include technological qualification – to more narrowly define reality as that which is unmediated or uncreated by technology – is, at this point in human development, impossible. A future in which a new layer of experience – ungrounded, unwired, but fully sensory – is a daily reality is inexorable, just as, today, a walk in the park is inevitably interrupted by a buzzing in your pocket or a glance at someone else's screenglow. Ubiquity is, as Kevin Kelly so aptly put it, what technology wants. Not necessarily ubiquity of the objects of technology, but ubiquity of signal: experience of the technological kind. Every technology is a string of reality, within which is an entire world of experience, provided one simply looks or listens or feels. But how will we find our footing on the shifting sand of truly ubiquitous technological experience? I wonder.
⁂
Heavy Rotation: Sparks by Imogen Heap, which, somehow, I didn't hear about  f o r  a n  e n t i r e  y e a r. What?! It was even featured on First Listen, which I'm usually all over. Anyway, finding it has been like the musical equivalent of finding that five-dollar bill from last year wadded up in your winter coat pocket, except I'd say it's worth way more than five bucks. And since we've been talking about technologically mediated experiences vs. experiences that are inherently technological, Sparks is a perfect sonic accompaniment to that conversation.
⁂
Recent Tabs: One million miles from here, just a tiny bit along the way to the Sun, a camera mounted to DSCOVR sends 11 photos of Earth back to NASA every day. In other evidence of the-Earth-is-amazing, check out this video tour of the Lowline, the world's first underground and sunlit garden. I have no idea how I missed Imogen Heap's musical gloves demo, but man, I'm glad I saw it eventually. She is inspiring. So is Marian Bantjes. Yale's new website is pretty nice. Meanwhile, scientists are trying to use a drug called rapamycin to extend the lives of 20 dogs in Seattle. It's worked in preliminary tests on mice, but an interesting side hypothesis presented in the article is that mice commonly live for about two years, so they may have more "room for improvement" than longer-lived species. In any case, I and my pup support this research. As opposed to the let's-mutate-our-dogs approach of these Chinese DNA-edited superdogs, a fresh hell that is, sadly, much further along. I'm sure this bodes well for the planet and won't end in some Jurassic Park-like disaster. "…most startups claiming to promote the sharing economy are really just neoliberal extravagances that will further enrich the smartphone-toting white elite." Finally, if you must indulge your Back to the Future Part II nostalgia a bit more, watch this clip, which will explain the deeper symbolic truths of the film, sheeple!