Reality Doubt
AI-generated media is improving very, very quickly.
OpenAI has announced Sora, an AI model that can generate up to a minute of video from a single text prompt.
The results are impressive.
As I look through them and compare what Sora is capable of with the AI-generated videos from less than one year ago, I have one persistent thought: we’re about a year away from being blindsided by Turing-test media. Someone is going to generate something that is indistinguishable from a familiar, human-made thing, and the audience will be shocked to find out it is not what they thought it was.
Imagine being most of the way through a video from your favorite YouTuber only to find out it’s not really them, that the whole thing has been generated by AI. Or imagine the podcast voices you know best telling you it’s not really them. One year away, I think. And that event will be a premeditated stunt.
As good as they are, even these Sora videos have obvious tells — you don’t have to look that hard to find them. One video (Sora video 1) depicts a woman strolling through Tokyo at night, and whatever is going on with her legs is… upsetting. But if the fidelity of this sort of thing can go from deep-dream spaghetti Will Smith to the pretty-darn-realistic skin and facial hair of a spaceman (Sora video 3) in just a matter of months, the curve is exponential, not linear.
Prepare to be surprised.