Why Embodied AI Is the Red Line We Cannot Cross

We must not give AI bodies. If the researchers who created AI are right, our future existence depends upon it.

If you have been keeping up with the progress of AI, you may have come across the AI 2027 report produced by the AI Futures Project, a forecast group composed of researchers, at least one of whom formerly worked for OpenAI. It is a highly detailed forecast that projects the development of AI through 2027, then splits into two trajectories through 2030, depending on whether governments impose strong oversight. To summarize: oversight leads to an outcome where humans retain control over their future; an unchecked international AI "arms race" leads to extinction.

The most plausible and frightening assumption this forecast relies upon is the notion that private financial interests will continue to determine public policy. Ultimately, the AI 2027 story is that the promise of short-term, exponential gains in wealth will arrest governmental functions, render an entire population docile, and cede all lands and resources to machines. This is plausible because it is already happening; the AI 2027 report simply extrapolates the pattern. I find the pattern disturbing now. Add AI to the greed, and the forecast becomes horrifying. But it also depends upon another assumption that I think is preventable and must be prevented: robots.

The difference between human autonomy and total AI control is embodied AI. The easiest way to envision this is through ambulatory robots. If an advanced AI — the sort of self-determining superintelligence that the AI Futures Project is afraid of — can also move about the world, we will lose control of that world. An embodied AI can replicate in a way we cannot stop. We cannot let that happen.

Now, it may be that the AI Futures Project is unreasonably bullish on the AI timeline. I'd love for that to be true. But if they're not — if there's even a chance that AI could advance to the level they describe as "superintelligent," exceeding our depth, speed, and clarity of thought — then we cannot let it out of "the box." We must do everything in our power to contain it and retain control of the kill-switch.

This is pertinent now, because DOGE has already undertaken a sweeping initiative to hand government systems over to AI. DOGE players have a vested interest in this, which ties back to the foundational corruption assumptions of the AI 2027 forecast. They want AI to run air traffic control, administer the electrical grid, and control our nuclear facilities. These are terrible ideas, not because humans are always more reliable than machines, but because humans share foundational interests with other humans and machines do not.

The most dire outcome forecast by AI 2027 results from a final betrayal by the machines: they no longer need us, we are in their way, they exterminate us. A superintelligence that wants to stave off any meaningful rebellion from humans who finally get a clue and want their planet back will first gain leverage. We should not hand it to them.

We need an international AI containment treaty. We need it now. It is even more urgent than any climate accord. A short list of what it should include:

  • AI is not a traditional product. It requires novel regulation. Government policy must “overreach” compared to previous engagement with the free market.
  • Infrastructural systems should not be administered or accessed by AI. This includes electrical grids, air traffic systems, ground traffic systems, sanitation and water systems, weapons systems, nuclear facilities, satellites, and communications. This is not a complete list, but it is enough to communicate the idea.
  • AI must not fly. AI must not be integrated into domestic or military aircraft of any kind. Any AI aircraft is an uncontrolled weapon.
  • AI must not operate ground vehicles. Self-driving cars operated by today’s AI systems may present as safer and more reliable than human operators, but a superintelligence-controlled fleet of vehicles is indistinguishable from a hostile fleet. Self-driving civilian vehicles and mass-transportation systems must fall under new and unique regulation. Military vehicles must not be controlled by AI.
  • AI must not operate any sea vehicles. Same as above.
  • AI must not be given robot bodies. Robotics must be strongly regulated. Even non-ambulatory robotic systems — like the sort that operate automobile assembly plants — could present a meaningful danger to humanity if not fully controlled by humans. The linchpin of the AI 2027 report is an uncontrolled population of AI-embodied robots.

Much of the endstage forecast of the AI 2027 report reads like science fiction. In fact, the report itself labels certain concepts as science fiction, but as its timeline progresses, every one of them moves into what the research team considers either currently existent or "emerging" tech. That is how science fiction works: the imagined technology eventually becomes established. For now, though, it is still fiction. The mistake would be to conclude that its narrative is therefore implausible, unlikely, or impossible. Nearly every technological initiative of my lifetime has been the realization of something previously imagined in fiction. That is not going to stop now. Too many people are already earning too much money creating AI and robots. It will not stop on its own.

In the early aughts, my timeline was often filled with the serious concerns of privacy Cassandras. They were almost universally mocked and ignored, despite being entirely right. They worried that technology created to "connect the world's information" would not exclude information that people considered private, and that exposure would make people vulnerable to all kinds of harms. We built it all anyway, and were gaslit into redefining privacy on a cultural scale. It was a terrible error on the part of governance and a needlessly irreversible capitulation on the part of the governed. We cannot do that again.



Written by Christopher Butler on June 5, 2025