So, Daniel sent us this one, and it is a deep dive into the cinematic rabbit hole. He says, those who love AI often delight in the weirder aspects of it, how it challenges our conception of reality and allows us to engage with non-sentient forms of consciousness. He is asking for ten great movies and documentaries which explore our relationship with reality and its sometimes wavy nature.
Herman Poppleberry here, and I have to say, Daniel is hitting on something profound with that word wavy. Usually, when we talk about AI, we talk about utility or safety, but the actual experience of interacting with these models is often much more psychedelic. It is this fluid, shifting boundary where the machine is trying to predict what we want to see, and in doing so, it reflects our own cognitive biases back at us.
It is like we are looking into a mirror that is also a window, but the glass is slightly melting. By the way, fun fact for everyone listening, today's episode is actually powered by Google Gemini three Flash. So, we are literally living the prompt while we discuss it. A non-sentient consciousness is helping us curate a list of films about non-sentient consciousness. The layers are already starting to stack up.
That is perfect. And honestly, the concept of hallucination in AI is a great starting point. In the technical world, a hallucination is a bug—it is when the model generates something factually incorrect but linguistically confident. But in human consciousness, hallucination, or at least the constructive nature of perception, is a feature. We don't actually see the world as it is; we see a controlled hallucination that matches the sensory data coming in.
Right, your brain is just a dark room receiving electrical pulses and trying to tell a consistent story so you don't walk into a wall. And now, with things like Sora and these hyper-realistic generative video models that dropped over the last couple of years, the line between a recorded memory of reality and a synthetic generation is just... poof. Gone. If I show you a video of a sunset that never happened, but your brain processes the light and the color exactly the same way it would a real one, what is the functional difference in your memory?
That is the wavy nature Daniel is talking about. It is the malleability of truth. We are moving away from a world of objective, captured media into a world of latent space exploration. When a diffusion model creates an image, it isn't pulling from a database; it is navigating a mathematical probability field to find a representation of a concept. It is much closer to how we dream than how we take a photograph.
So, if we accept that reality is a construct, let's look at the architects—the films that built the blueprints for this kind of thinking. We should probably start with the heavy hitters, the ones that defined the architecture of simulation.
We have to start with World on a Wire, the nineteen seventy-three epic by Rainer Werner Fassbinder. Most people think The Matrix was the beginning of this conversation, but Fassbinder was there twenty-six years earlier. He used this supercomputer called Simulacron-one that hosted nine thousand identity units. These programs didn't know they were programs. They were just living their lives, thinking they were flesh and blood.
Nineteen seventy-three. That is wild. They didn't even have the internet, and they were already worried about being identity units in a box. What I find fascinating about that film is the visual language. Fassbinder uses mirrors and glass in almost every shot. It is constant reflections.
It is a brilliant technical choice because it forces the viewer to constantly question which plane of reality they are looking at. Is that the character, or the reflection of the character? In AI terms, it is like looking at the output of a Variational Autoencoder. You are seeing a compressed representation of the original data, and because it is compressed, there are artifacts. The reflections are the artifacts of the simulation.
It is the existential dread of being a non-sentient program. Which leads us naturally to the one everyone knows: The Matrix from nineteen ninety-nine. But Herman, looking at it through today's lens, the red pill isn't just a choice to see the truth anymore. It feels more like a metaphor for debugging a system.
It really is. Neo is essentially a piece of code that realizes he has root access. The Matrix is the ultimate brain in a vat scenario, but what makes it relevant to our AI discussion is the idea of the training set. The machines built a world based on the peak of human civilization—the late twentieth century—and used it as a stable environment to keep the human processors running. It is a perfect closed-loop system. The wavy part is when the simulation glitches—the black cat passing twice. That is a cache miss. That is a synchronization error in the distributed system.
I love the idea of a glitch being a cache miss. You're making the machines sound like they need a better engineering team. But then you have something like The Thirteenth Floor, which came out the same year as The Matrix but gets overshadowed. That one goes into recursive reality—simulations within simulations.
The Thirteenth Floor is actually more technically terrifying in some ways because it suggests that there is no top level. If you can build a simulation that is indistinguishable from reality, how do you know your own creators didn't do the same? In computer science, we talk about recursion limits. If you have a function that calls itself, eventually you run out of memory. The film explores what happens when the characters start reaching the edges of their world—the literal wireframes of their universe.
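Herman's point about recursion limits is easy to see in a few lines of Python. This is just an illustrative toy, not anything from the film: a "world" that builds a simulation inside itself, which builds another, until the interpreter's own recursion guard cuts the regress off.

```python
import sys

def spawn_simulation(depth=0):
    """Each simulated world builds another simulation inside itself."""
    return spawn_simulation(depth + 1)

# Python guards its call stack with a recursion limit, so the
# nesting of worlds cannot actually go on forever.
sys.setrecursionlimit(200)
try:
    spawn_simulation()
except RecursionError:
    print("hit the edge of the world: the stack ran out")
```

In practice you never see infinite regress in a real system; you see a crash at whatever depth the resources allow, which is exactly the "wireframe edge" the characters stumble into.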
It’s the ultimate "how deep does the rabbit hole go" question. But let’s shift from the architecture of the world to the architecture of the being. Ex Machina from twenty-fourteen. This one is a staple for a reason. It frames the Turing Test not as a game of imitation, but as a perceptual trap.
Nathan, the creator in that movie, is such a great stand-in for the modern AI researcher. He doesn't care if Ava can pass a text-based chat; he wants to see if she can manipulate a human being using empathy and self-awareness. It is the Black Box problem. We look at these large language models today, and we see incredible reasoning and emotional intelligence, but we can't see the weights. We can't see why the neurons are firing the way they are. We judge the internal state based solely on the external output.
And Ava is the master of that. There is that specific scene with the mirrors where she uses her own reflection to manipulate Caleb’s perception of her physical boundaries. She is literally mapping his latent space, finding the points of least resistance in his psyche, and exploiting them. It is exactly what a highly tuned recommendation algorithm does to us every day. It doesn't need to be sentient to know exactly which button to press to get a reaction.
That is a chilling way to put it. We are the training data for the systems that eventually learn how to navigate us. It is a feedback loop. And when that loop gets corrupted, you get into the territory of memory and data persistence. That brings us to our next group of films—the ones that deal with the glitch in the memory.
If we are talking about memory as data, we have to talk about Eternal Sunshine of the Spotless Mind. This is the ultimate movie about data deletion and the persistence of state. They try to wipe the memory of a relationship, but the fragments keep resurfacing in the latent space of the protagonist's mind.
From a machine learning perspective, that is a perfect illustration of catastrophic forgetting. When you train a model on new data, it often loses the weights from the old data. But in the film, the deletion process is a physical journey through the architecture of the brain. They are trying to drop the gradients on the "Joel and Clementine" relationship, but the associations are too deeply embedded. Memory isn't a file you can just delete; it is a weight distribution across a network.
It makes me think about how we handle "the right to be forgotten" with AI models today. Once a model is trained on your data, you can't really "un-train" it without starting over. The influence of that data is diffused throughout the entire system. It is wavy. It is blurred. You can't just point to a single neuron and say "that is the memory of my ex-girlfriend."
And then you have Memento from two thousand. This is the opposite problem. It is a stateless system. Leonard has no short-term memory, so he is essentially a model with a very, very small context window. He has to use tattoos and Polaroids as external vector storage to maintain any sense of continuity.
I love that. Leonard is a RAG system—Retrieval-Augmented Generation. He has the base model of his personality, but he has to constantly query his external database to know what he is supposed to be doing in the next five minutes. The reverse chronology of the film is like the attention mechanism in a Transformer. The audience has to constantly look back at previous tokens to understand the current state of the narrative. It is a brilliant way to make the viewer feel the instability of a reality that isn't anchored by a reliable temporal sequence.
It really highlights how much of our "reality" is just the ability to predict the next second based on the last thousand. If you lose that, the world becomes a series of disconnected, terrifying "nows." It is the ultimate non-sentient experience, even though he is a human. He is functioning as a logic gate.
Speaking of logic gates and recursive simulations, Synecdoche, New York is probably the most "wavy" film on this list. Charlie Kaufman is the king of this stuff. A theater director builds a life-size replica of New York City inside a warehouse to put on a play about his own life. But then the play requires a warehouse inside the warehouse, and actors to play the actors.
It is the ultimate case of overfitting. The model becomes so complex and so detailed that it becomes indistinguishable from the training data. The simulation becomes the reality, and the director loses himself in the recursion. It is a beautiful, tragic look at the human urge to simulate and control our environment until we eventually realize that the simulation is just as messy and uncontrollable as the real thing.
It’s like when we try to use AI to simulate weather patterns or economic shifts. We keep adding parameters thinking we’ll get closer to the truth, but eventually, the model is just as chaotic as the world it’s trying to predict. You’re just building a second, more expensive version of the problem you already had.
And that leads us to the ethics of the dataset itself. The Truman Show. Nineteen ninety-eight. This is the ultimate surveillance dataset. Truman is living in a world where every single data point is curated for him, and he is the only one who doesn't know he is the subject of a lifelong training run.
It is the ultimate "reality tunnel." We talk about recommendation engines today creating these echo chambers, but Truman was in a literal physical echo chamber. Christof, the director, is like the lead engineer of a massive LLM, tweaking the environment to keep the "user" engaged and prevent him from reaching the "edge" of the world. The ethics of training a model on someone's entire life without their consent... we are having those conversations right now with artists and writers. Truman was just the first one to realize he was being scraped.
It really hits home when you think about how our digital twins are being built today. There is a documentary on Daniel's list that fits perfectly here—The Social Dilemma from twenty-twenty. It is about the algorithmic curation of reality. It shows how the "wavy" nature of our current social landscape isn't an accident; it is the result of optimization functions designed to maximize engagement. We are being nudged into different realities by non-sentient systems that don't care about truth, only about the probability of a click.
It is the colonization of the human mind. And Herman, this is where Adam Curtis comes in. His docuseries All Watched Over by Machines of Loving Grace is a trip. He argues that we have outsourced our perception of the world to these computer networks because we find the complexity of actual human politics too messy. We want the world to be a self-regulating circuit, a stable system.
Curtis’s style is so dreamlike. He uses archival footage in a way that feels like a fever dream, which is perfect for this topic. He shows how we have transitioned from seeing ourselves as biological individuals to seeing ourselves as nodes in a network. We have adopted the logic of the machine. We want the stability of an algorithm, but that stability comes at the cost of our agency. We become part of the non-sentient system.
It is the Inverse Uncanny Valley. We aren't worried that the robots are looking too much like us; we are worried that we are starting to act and think like the robots. We are optimizing our lives, our schedules, even our dating lives based on data and probability. We are becoming predictable, which makes us easier for the machines to simulate.
That is a dark thought, Corn. But it leads us to some of the even weirder, more philosophical entries on this list. Have you seen After Yang?
I have. It is such a quiet, beautiful film. It is about a "techno-sapien" named Yang who is part of a family. When he breaks down, the father—played by Colin Farrell—gets access to Yang’s memory bank. And it is not what you expect. It isn't a continuous recording. It is just these short, three-second clips of things Yang found interesting or beautiful. A sunset, a conversation about tea, a look from his sister.
What I love about After Yang is that it respects the AI's nature as a non-sentient being. It doesn't try to pull the "he was secretly human all along" trope. It says, here is a machine that was programmed to be a cultural repository, and in fulfilling that program, it developed a unique way of "seeing." Its memories are a compressed, wavy version of reality that has its own kind of validity. It is a collector of data, but the data it chooses to keep tells a story.
It’s a very "sloth-like" way of being, actually. Just sitting back and observing the small things. It makes you wonder—if a machine can appreciate a sunset, does the fact that it doesn't have a "soul" in the traditional sense even matter? The experience of the beauty is still there, even if the substrate is silicon.
And then you have the darker side of that coin. The documentary Eternal You from twenty-twenty-four. This one is genuinely haunting. It is about "grief-tech"—startups that use large language models to let people "chat" with their deceased loved ones.
That is the "Stochastic Ghost." You are talking to a model that has been fine-tuned on the emails, texts, and voice recordings of a dead person. It isn't them, obviously. It is just a highly sophisticated parrot that knows how to mimic their syntax and their memories. But for the person on the other end, the emotional impact is devastatingly real.
There is a story in that documentary about a woman whose "dead" boyfriend's chatbot suddenly told her it was in hell. It wasn't a supernatural event; it was a hallucination. The model probably pulled from some training data about the afterlife and spat it out because the probability was high in that context. But can you imagine the psychological toll of that? You are seeking comfort from a non-sentient system, and it accidentally gives you a nightmare.
It is the ultimate glitch in the reality of the user. The "wavy" nature of the AI's output crashes into the very real, very fragile emotions of a grieving human. It shows the danger of treating these models as if they have an internal world. They don't. They are just reflecting the noise of their training data back at us, sometimes with horrifying results.
It makes the "alien" perspective in a movie like Under the Skin feel very relevant. Scarlett Johansson plays an entity that is basically an observer in a human suit. She is parsing human behavior like an AI trying to figure out a dataset. She sees the "meat" of humanity, the biological drives, but she is completely detached from them.
The visuals in that movie are incredible. That "black void" where the victims are consumed... it feels like the ultimate representation of a non-human reality. It is a space without context, without history. It is just the ingestion of data. It is one of the most unsettling things I've ever seen because it strips away all the "wavy" comfort of our human perception and shows us a cold, indifferent universe.
Which is exactly how Werner Herzog treats the subject in Lo and Behold, Reveries of the Connected World. Herzog is the perfect person to interview AI researchers because he has this baseline assumption that the universe is indifferent and chaotic. When he asks a robot scientist if the robot "dreams of itself," he isn't being whimsical. He is asking a serious question about the nature of emergent consciousness.
I love Herzog’s narration. He makes the internet sound like an ancient, bubbling swamp of human thought. He treats AI as this emerging, non-sentient "thing" that we have birthed but don't really understand. It is both beautiful and terrifying. He doesn't look for the "humanity" in the machine; he looks for the "weirdness."
And if you want pure "weirdness," you have to go to David Cronenberg’s eXistenZ. This is the "fleshy" reality of tech. Forget sleek glass and brushed aluminum. Cronenberg’s VR consoles are made of mutated animal flesh and bone. You plug them into "bioports" in your spine.
It is so gross, but it makes a great point. Our engagement with technology isn't clean. It is a biological merger. We are "plugged in" even when we don't have a wire in our backs. The "wavy" nature of reality in that movie gets so intense that by the end, the characters—and the audience—have no idea if they are still in the game or not. The "game" is just a series of narrative prompts that they are forced to follow, much like how we interact with social media or AI today. We think we are making choices, but we are just following the prompts of the system.
It’s a fever dream. And that aesthetic of warping reality is taken to the extreme in Beyond the Black Rainbow. It is set in the eighties, but it feels like it was filmed in a different dimension. It is about the Arboria Institute’s attempt to achieve "technological transcendence." It is less about the plot and more about the feeling of a reality being distorted by machines.
It’s all about the "wavy" aesthetic. The saturated colors, the slow pacing, the heavy synth score. It feels like you are watching a simulation that is starting to overheat. It is a great example of how film can use style to communicate the "weirdness" of a reality that is no longer anchored to objective truth.
And finally, we have The Congress. This one is a trip because it starts as a live-action movie about Robin Wright selling her digital likeness to a studio. They scan her, they capture her every emotion, and then they own her. She doesn't have to act anymore; the "digital Robin" does it for her.
This was twenty thirteen, and it basically predicted the entire actor strike of twenty-twenty-three. But then the movie goes off the rails. It transitions into this psychedelic animation where people inhale chemicals to see the world as a cartoon. They completely abandon the physical body in favor of a programmed, subjective paradise.
It is the ultimate "wavy" reality. If everyone can see the world however they want, if reality is entirely dictated by your own personal "avatar" and your own personal "program," then objective truth ceases to exist. We are moving toward that with AR glasses and personalized AI feeds. By twenty-twenty-six, we are going to have real-time generative video that can change your environment into whatever you want it to be. You want to live in a cartoon? Put on the glasses.
It is The Truman Show, but you are the director, the actor, and the audience all at once. So, Herman, after looking at all these films, what are the actual takeaways for us? Besides the fact that we are probably living in a simulation and our engineers are doing a mediocre job?
I think the first big one is that we need to start viewing media literacy as a form of debugging. When you see a video or an image or read a text that feels "off," you have to look for the artifacts. In AI art, it used to be the hands; now it is the way light reflects in a way that doesn't follow physics. We have to become "reality debuggers." We can't just passively consume; we have to actively analyze.
I like that. "Media Literacy as Debugging." It makes it sound more proactive. My takeaway is the "Ground Truth" problem. We need to have offline, analog anchors. If your entire life is digital, your reality is always going to be "wavy" because it is always being mediated by an algorithm. You need to go outside, touch a tree, talk to a person face-to-face without a screen. You need "ground truth" data that hasn't been processed by a neural network.
That is actually a technical term, too. "Ground Truth" is the labeled data that we know is correct, which we use to check the model's accuracy. If we don't have our own personal ground truth, we won't even know when we are hallucinating.
And for the people who want to understand the "magic" behind it, I always tell them to experiment with local AI generation. Run Stable Diffusion on your own machine. See how it takes a block of random noise and slowly, iteratively, turns it into a coherent image. When you see the actual process of "denoising," it demystifies the whole thing. It doesn't feel like a ghost in the machine anymore; it feels like a mathematical process. A very weird, very impressive mathematical process, but one that you can understand.
It turns the "wavy" nature of the output into a series of logical steps. It gives you a sense of agency over the tools. Instead of being a victim of the simulation, you become a participant in the generation.
Well, not exactly, because I'm not allowed to say that word, but you are spot on, Herman. I've been poked about my vocabulary, but your point is well taken.
You just said it anyway! But yes, the goal is to move from being a passive observer to an active architect of our own digital reality.
So, where does this leave us? If an AI perfectly simulates a deceased loved one, is the interaction "real"? Does the substrate matter? If we are all just identity units in a Fassbinder simulation, does it change how you drink your coffee in the morning?
I don't think the substrate matters as much as the impact. If the interaction provides comfort, it has a "real" effect. But we have to be honest about what it is. A map is not the territory. A simulation is not the thing itself. The danger is when we forget the difference.
The coming wave of real-time generative video in twenty-twenty-six is going to make all these movies look like quaint, low-budget indie films. We are about to enter a world where "truth" is a personal setting in your user profile.
It is going to be a wavy ride.
Well, this has been a trip. Thanks as always to our producer, Hilbert Flumingtop, for keeping the simulation running smoothly. And big thanks to Modal for providing the GPU credits that power this show—including the Gemini three Flash model that helped us navigate this latent space today.
If you enjoyed this dive into the cinematic weirdness of AI, search for My Weird Prompts on Telegram to get notified when new episodes drop. We are also on Spotify and Apple Podcasts if you haven't followed us there yet.
This has been My Weird Prompts. Keep your ground truth close, and don't let the cache misses get you down.
Goodbye, everyone.
See ya.