Knowledge Work, AI, and Fragile Egos

On the connection between time, skills, ideas, and what makes us truly human.

Knowledge workers advance through economies by transitioning from selling time to selling labor to selling IP. The notion here is that narrowing the funnel increases the value: everyone has time, fewer people have skills, and the fewest have ideas that are novel and well-timed enough to be in demand.

AI, in any of its forms — current or future-speculative — disrupts this path.

Time

The architecture that sustains what we know as AI is built upon a simultaneity of processing that, even at its most rudimentary, exceeds the human brain. In essence, a “mind” that can process more things per second has more seconds.

Everyone has time, but an AI will always have more.

Skills

Large language models enable AIs to be very good at the tasks that comprise most of knowledge work. Staying informed is at the foundation. An AI can do this better than a human: it can absorb more information faster, and more importantly, it can recall it all faster and more accurately. Knowledge workers with high input capacities willing to burn the midnight oil reading and typing — building up for themselves nice little knowledge kingdoms/consultancies — will not be able to compete with the reality that their prospects can now just ask the AIs themselves. (This makes me realize that, in the short term, many of the kinds of work that have been outsourced to consultancies will likely be re-absorbed by smaller, in-house teams depending heavily upon AI.)

While it’s been central to the AI/employment conversation that those who make their livings as writers are most immediately vulnerable, we ought to widen our scope to anyone whose knowledge work is represented by the written word. Independent consultants of all stripes are on notice.

Ideas

Then we come to ideas. Can an AI have ideas? Your answer will depend, first, not upon what you think an AI is, but upon what you think an idea is. An idea has traditionally been defined as a thought — almost as a base unit. You can say to yourself, “I have an idea.” You probably wouldn’t say that of a thought that pops into your head like, “I’m hungry,” but you might of one like, “Hunger seems like a biological limitation sometimes; maybe there’s a way to control or delay it without doing harm to the body.” So, there’s an implied novelty and complexity to ideas. They can be intrinsically valuable; an idea about controlling hunger could benefit a person without ever being expressed to another. They can also be objectively valuable. What if that idea turned into a solution that could be repeated by anyone?

Knowledge workers truly excel when there is just the right balance of novelty and repeatability to their ideas. Their clients recognize how much they need that idea and are willing to pay the consultant to help them understand and apply it. Can an AI do this? Probably.

For a moment, consider where AI could offer the most immediate, objective benefit to humanity. One area would be in the sciences. Scientific advances require an enormous amount of time, calculation, and analysis. The greater the time, the greater the calculation, the greater the volume of data. An AI can calculate and analyze faster than humans. If AI doesn’t help us solve chronic problems — like disease, for example — it will be a great waste. But if we think an AI could cure cancer, why couldn’t it cure many of the other problems we assume require human ingenuity: resource management, operations, communication, and so on? An idea, after all, is only novel if it hasn’t yet been understood. That says nothing of its origin.

For all we know, every idea already exists and the difference in its effect upon reality is completely undetermined by who or what makes it known to others. Now that’s a spooky thought, but it certainly isn’t a new one. Socrates, Plato, and Aristotle would have plenty to say — and do — in the age of AI.

Human Egos

But where this leaves us — human knowledge workers — is at an interesting point. It’s not quite the climax of our usefulness, but it probably is the point where we can see it coming. Like many inevitabilities, the emotional impact is now, even if the cause comes later.

I’ve seen knowledge workers at the end of their careers who cannot let go. People ready to pick up where they left off are held back, and the older generations cling to the knowledge they sold as if it were their DNA. Put simply, most of us believe that ideas make us human, but good ideas make us special. We’re probably on the verge of finally learning that’s not true. The ego will not like this, but I suspect it will be good for all of us. It will be one of many things that helps us better define what it truly is to be human. Being special is not one of them.



Written by Christopher Butler on May 23, 2024, in Essays

