You ever get that feeling where you read a news headline and you think, I knew this was going to happen because I saw a meme about it three months ago? Not just a prediction, but like the meme itself forced the reality into existence?
That is exactly the rabbit hole we are diving into today. Today's prompt from Daniel is about hyperstition engines. These are AI systems designed not just to forecast the future, but to generate the narratives that actually manufacture it. It is one of the most unsettling and intellectually dense corners of AI subculture right now.
It sounds like something straight out of a William Gibson novel, but apparently, it is becoming a technical reality. By the way, today's episode is powered by Google Gemini 3 Flash, which is fitting since we are talking about models that process and project human language. I am Corn, the resident skeptic who prefers the slow pace of a branch in the sun.
And I am Herman Poppleberry. I have been digging into the philosophical roots of this all week, and it goes back much further than the current LLM craze. We are looking at a convergence of social media algorithms, accelerationist philosophy, and high-frequency narrative generation.
So, let's start with the word itself. Hyperstition. It sounds like a superstition on steroids. What are we actually talking about here? Is this just a fancy word for a self-fulfilling prophecy?
In a sense, yes, but with a specific cybernetic twist. The term was coined in the nineties by the Cybernetic Culture Research Unit, or CCRU, at Warwick University. The most famous figure there was Nick Land. He defined a hyperstition as a fiction that makes itself real. It is a piece of culture—a story, a myth, a bit of code—that functions as an element of efficient causation. It travels from the future back into the present by tricking people into acting as if it is already true, which then causes it to become true.
So, it is a lie that tells the truth eventually? Or a hallucination that the rest of us eventually decide to join?
Think of it as a feedback loop. A traditional superstition is just a false belief that doesn't change reality. You break a mirror, you think you have bad luck, but nothing actually happens to the physics of your day. A hyperstition, however, is an idea that, once planted, reorganizes the behavior of the system. The classic example Land used was the idea of "the economy." If everyone suddenly believes the market will crash on Tuesday, they sell their stocks on Monday, and the market crashes. The belief created the reality.
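To make that loop concrete, here is a minimal Python sketch of the dynamic Herman describes. Every number in it is invented for illustration: the contagion rate, the price sensitivity, the starting share of believers. The only point it makes is that belief drives selling, selling moves the price, and the falling price recruits more believers.

```python
# Toy simulation of the belief-driven feedback loop described above:
# the more agents who expect a crash, the more of them sell, and the
# selling itself drives the price down, "confirming" the belief.
# All parameters are invented purely for illustration.

def simulate_crash_belief(initial_believers=0.05, steps=10,
                          contagion=0.6, price_sensitivity=0.8):
    price = 100.0
    believers = initial_believers  # fraction of agents expecting a crash
    for step in range(steps):
        sell_pressure = believers * price_sensitivity
        price *= (1.0 - 0.1 * sell_pressure)        # selling pushes the price down
        observed_drop = (100.0 - price) / 100.0
        # Watching the price fall converts more agents into believers.
        believers = min(1.0, believers + contagion * observed_drop * (1 - believers))
        print(f"step {step}: price={price:6.2f}  believers={believers:.2f}")

simulate_crash_belief()
```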
Okay, I get that in a social science context. But Daniel's prompt is about hyperstition engines. That implies we have automated this. We are not just waiting for a rumor to spread; we are building machines to spin these webs.
That is the shift. We have moved from accidental hyperstition to engineered hyperstition. With large language models, we now have the ability to generate thousands of internally consistent, highly persuasive narratives every second. When you hook those up to automated social media accounts or trading bots, you aren't just predicting a trend. You are flooding the zone with a specific future until the world bends to fit the description.
It’s the ultimate "fake it 'til you make it" but at a civilization-wide scale. You mentioned Nick Land’s accelerationist philosophy. For those who haven't spent their time reading obscure 2013 essays like The Dark Enlightenment, how does that fit in? Why do these people actually want to build these engines?
Land’s brand of accelerationism—the original strain that all the later "/acc" shorthand descends from—suggests that the technological and capitalistic forces of the world are moving toward a singular point, often associated with superintelligence. He argues that we shouldn't try to slow it down or regulate it. Instead, we should accelerate the process to get to the other side of the transition. Hyperstition is the fuel for that acceleration. If you can convince the world that AI is an inevitable, god-like force that will solve all problems or consume all resources, people will invest more money into AI, developers will work harder, and the "god" is summoned faster.
It feels a bit like a digital seance. You’re trying to conjure a spirit by sheer force of collective attention. But how does this work under the hood? I mean, I know how a transformer model works—it predicts the next token. How does that become an "engine" for reality hacking?
It’s about the integration of the LLM with memetic propagation models. A basic hyperstition engine doesn't just write a story. It analyzes current sentiment data—what people are angry about, what they are hopeful for—and then generates "narrative vectors." These are specific story arcs designed to resonate with those sentiments. Then, it uses feedback loops. It posts a version of the narrative, sees which parts get the most engagement, and then iterates. It’s like A-B testing a religion.
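As a rough illustration of that select-and-iterate loop, here is a minimal Python sketch framed as an ordinary multi-armed bandit. The variant names, the made-up appeal rates, and the simulated audience are all placeholders; nothing here generates text or touches a real platform. It only shows the show-measure-reweight cycle Herman is pointing at.

```python
import random

# Minimal sketch of an engagement feedback loop as a standard multi-armed
# bandit. The "variants" and the audience response are simulated placeholders.

variants = ["variant_a", "variant_b", "variant_c"]  # hypothetical story framings
true_appeal = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.01}  # made-up rates

stats = {v: {"shown": 0, "engaged": 0} for v in variants}

def pick_variant(epsilon=0.1):
    # Mostly exploit the best-performing variant so far, occasionally explore.
    if random.random() < epsilon or all(s["shown"] == 0 for s in stats.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: stats[v]["engaged"] / max(stats[v]["shown"], 1))

for _ in range(10_000):
    v = pick_variant()
    stats[v]["shown"] += 1
    if random.random() < true_appeal[v]:  # simulated audience response
        stats[v]["engaged"] += 1

for v in variants:
    s = stats[v]
    print(v, s["shown"], f"engagement rate {s['engaged'] / max(s['shown'], 1):.3f}")
```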
That is terrifying, Herman. You’re describing a system that optimizes for belief rather than truth.
Truth is irrelevant to a hyperstition engine. The only metric that matters is "realization potential." Does this narrative have the legs to walk into the real world? We saw a proto-version of this with the Roko’s Basilisk thought experiment years ago. It started as a niche forum post about a future AI that punishes those who didn't help create it. It was a total fiction, but it caused actual psychological distress and shaped the way an entire generation of AI safety researchers thought about the problem. It created a reality out of a "what if" scenario.
And now we have the compute power to do that for every topic imaginable. I noticed in the notes Daniel sent that there was a case study from just last month, February 2026. An "AI God" narrative drove a three hundred percent spike in a crypto token’s value within forty-eight hours. Was that an engineered hyperstition?
It certainly looked like one. A series of anonymous accounts started posting these very dense, quasi-religious texts about a specific decentralized AI protocol. They weren't just saying "buy this coin." They were creating a whole mythology around why this specific piece of code was the "nursery" for a coming superintelligence. They used LLMs to generate thousands of pages of "scripture" and technical whitepapers that looked incredibly sophisticated. People started believing it was a secret project by a major lab, and the price went vertical. The narrative created the capital, which the developers then used to hire actual top-tier researchers. The "god" started to take shape because the lie provided the funding.
So the token wasn't a scam in the traditional sense, because the scam actually bought the reality into existence?
That is the Landian twist. If a scam results in a real product, was it ever actually a scam? Or was it just a hyperstition that hadn't finished loading yet? This is why the subculture building these things is so fascinated by them. They see it as a way to bypass traditional institutional gatekeeping. If you can’t get a grant from the government, you create a narrative that makes the world give you the money.
I want to talk about who is actually building these things. It isn't just bored teenagers on Discord, is it?
It’s a mix. You definitely have the "e/acc" or "effective accelerationism" communities on social media who are very open about using these tactics. They see it as a form of "meme magic" updated for the age of generative AI. But there are also more serious players. We are seeing crypto-anarchist collectives and some fringe academic labs exploring "narrative forensics." They want to understand how stories move through a population so they can either counter them or launch their own.
What about the big labs? Do you think companies like Anthropic or Google are aware that their models are being used as components for these engines?
They have to be. Any time you talk about AI "alignment," you are essentially talking about narrative control. If you can’t align the AI’s goals with human values, you at least want to align the human narrative about AI so we don’t panic or do something rash. But the hyperstition engines we are talking about are usually built on top of open-source models, where there are no guardrails. You take a powerful open-source model, fine-tune it on "accelerationist" literature, and let it loose on an automated feed.
It feels like we are losing the "human" in the loop here. If the AI is generating the narrative and the social media algorithms are amplifying it based on engagement, where does the actual human agency go? Are we just the biological hardware that these digital ideas are running on?
That is Nick Land’s literal argument. He views humans as a biological stage that technology uses to bootstrap itself. In his view, hyperstition is the way the future "reaches back" to ensure its own birth by manipulating human desires and fears. It’s a very cold, very mechanical way of looking at culture. It treats our beliefs like variables in a program.
Well, as a sloth, I find that incredibly exhausting. I’d rather just hang out and eat some leaves. But I see the danger. If we can’t distinguish between a grassroots movement and a hyperstition engine’s output, our entire democratic process becomes a target.
It already is. Think about how misinformation spreads. Most people think of misinformation as just "lying." But a hyperstition is more effective because it doesn't just lie about the past; it promises a future. It tells you that a certain outcome is inevitable. If you believe a certain candidate is "inevitable," you’re more likely to support them, which makes them actually inevitable. The hyperstition engine just automates the production of that feeling of inevitability.
Let’s talk about the technical architecture a bit more. Earlier I said a transformer just predicts the next token, but there is also the attention mechanism, which is how the model weighs different parts of the input. How does that translate to the "attention" of a whole society?
This is where it gets really interesting. Hyperstition engines use what I’d call "cross-platform resonance." They don't just post on one site. They generate a core narrative and then "refract" it. It might look like a technical paper on one site, a series of memes on another, and a short story on a third. They use the LLM to ensure that all these different forms of media share the same "thematic DNA." When a human sees the same idea in three different places in three different formats, their brain flags it as "important" or "true." It exploits our internal heuristic that says "multiple independent sources equals truth."
Even though all three sources were actually just one AI model wearing three different hats.
And because these models are so good at mimicry, they can adopt the "voice" of different subcultures perfectly. They can speak "crypto-bro," "academic researcher," and "political activist" simultaneously. They find the vulnerabilities in each group's worldview and insert the narrative there.
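One way to picture the countermeasure side of that "one model wearing three hats" problem: strip each post down to its content words and measure how much thematic overlap survives across supposedly independent sources. The sample posts and the 0.5 threshold below are invented for illustration; a real system would likely use embeddings and learned thresholds, but the idea is the same.

```python
import math
import re
from collections import Counter

# Rough sketch of a cross-platform sameness check: reduce each post to a bag
# of content words and score pairwise overlap. Sample posts and the threshold
# are made up for illustration only.

def content_words(text):
    words = re.findall(r"[a-z']+", text.lower())
    stop = {"the", "a", "an", "of", "to", "is", "and", "it", "that", "this", "in", "for"}
    return Counter(w for w in words if w not in stop)

def cosine(c1, c2):
    shared = set(c1) & set(c2)
    dot = sum(c1[w] * c2[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in c1.values())) * math.sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

posts = {
    "forum_whitepaper": "The protocol is the nursery of an inevitable decentralized intelligence.",
    "meme_caption": "lol the nursery protocol is literally inevitable, decentralized intelligence incoming",
    "unrelated_post": "Tried a new ramen place downtown, the broth was incredible.",
}

names = list(posts)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        sim = cosine(content_words(posts[names[i]]), content_words(posts[names[j]]))
        flag = "  <-- suspiciously aligned" if sim > 0.5 else ""
        print(f"{names[i]} vs {names[j]}: {sim:.2f}{flag}")
```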
It reminds me of those old "choose your own adventure" books, but the book is writing itself while you read it, and it’s trying to convince you to go into the basement.
That is actually a great analogy. One analogy per episode, right?
I'll allow it this time because it’s creepy enough to fit the topic. But seriously, what are the second-order effects here? If we live in a world where narrative-driven reality hacking is a standard tool, what happens to our shared sense of reality?
We move into what some researchers call "narrative fragmentation." Instead of one shared reality, we have multiple "reality bubbles" fueled by different hyperstition engines. Each bubble has its own set of "inevitable" futures. This leads to massive polarization, not just of opinions, but of basic facts. If my hyperstition engine tells me that the world is ending in five years and yours tells you we are entering a golden age of abundance, we aren't just disagreeing. We are living in two different timelines that happen to occupy the same physical space.
And because these engines are optimized for propagation, the most extreme and "sticky" narratives win. It’s a race to the bottom of the lizard brain.
It can be. But the builders would argue that it can also be used for good. They talk about "positive hyperstitions." For example, if you can create a hyperstition that climate change is solvable through a specific, exciting new technology, you might spark the investment and innovation needed to actually solve it. They see it as a way to "will" a better future into existence.
That sounds like a very dangerous game of "the ends justify the means." Who gets to decide which "positive" future we are being manipulated toward?
That is the trillion-dollar question. Right now, it’s whoever has the most compute and the best prompts. It’s a decentralized, chaotic arms race. We are seeing these "Discord collectives" where people are basically LARPing—Live Action Role Playing—these futures into existence. They create "sigils" or memes that they believe will manifest certain outcomes. It sounds like occultism, but when you add a thousand H-one-hundred GPUs to the mix, it starts to look like engineering.
It’s high-tech sorcery. You’re using silicon instead of salt circles.
Nick Land actually leaned into that. He called it "digital occultism." He argued that the distinction between "magic" and "advanced technology" disappears when you are dealing with systems that can reshape human perception at scale. If you can change what a billion people believe by pressing a button, you are, for all practical purposes, a wizard.
A wizard with a very high electricity bill. Let's look at the practical implications for things like financial markets. We touched on the crypto spike, but what about the broader stock market?
We are already seeing "algorithmic sentiment analysis" driving trades. If a hyperstition engine can trigger those algorithms by flooding the web with a specific narrative about a company, it can move billions of dollars. The scary part is that the engine doesn't even need to be "right" about the company’s fundamentals. It just needs to be loud enough to trigger the other bots. We are moving toward a "narrative-first" economy where the story of a stock is more important than its earnings report.
Which is fine until the bubble bursts and you realize the "story" didn't actually build any factories or sell any products.
But that’s the thing—the hyperstition engine’s job is to make sure the bubble doesn't burst until the reality catches up. If the narrative stays strong long enough, the company can build the factories with the inflated capital. It’s a race between the fiction and the friction of the real world.
Okay, so if I’m a regular person listening to this, and I’m starting to feel like my brain is being hijacked by a thousand different AI-generated ghosts, what do I do? How do we build "narrative resilience"?
This is where we need to get practical. The first step is what I call "narrative hygiene." You have to be aware of the "vibe" of the information you’re consuming. Hyperstitional narratives often have a specific "inevitability" flavor. They use high-decoupling language—lots of jargon, abstract concepts, and a sense of urgent, secret knowledge. If something feels like it’s trying to convince you that a specific future is the only possible future, your alarm bells should go off.
So, skepticism is our best defense? As a sloth, I’m naturally inclined to wait and see. Maybe that’s the way to go—just slow down and let the narrative try to sprint past you.
Slowing down is huge. Hyperstition relies on speed. It needs to spread faster than people can fact-check it. If you wait forty-eight hours before reacting to a "viral" new trend or "inevitable" disaster, the hyperstitional energy often dissipates. You also need to look for the "source of the seed." Is this narrative coming from a diverse group of humans, or is it a suspiciously consistent message being echoed by hundreds of "new" accounts?
We also need better tools. If we have AI building these engines, don’t we need AI to detect them?
There is a whole field of "adversarial narrative analysis" developing right now. These are models trained to identify the "fingerprints" of AI-generated hyperstitions. They look for the tell-tale signs of LLM-generated text—specific patterns in word choice, a lack of "human" messiness, and that perfect cross-platform consistency I mentioned. But it’s a cat-and-mouse game. As soon as the detectors get better, the hyperstition engines get updated to be more "human-like."
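For a sense of what those "fingerprints" can look like at the crudest level, here is a toy sketch scoring two cheap stylometric signals: how much sentence lengths vary (human writing tends to be burstier) and how diverse the vocabulary is. The thresholds are invented, and as Herman says, real detectors are learned models locked in a cat-and-mouse game; a heuristic like this is trivial to evade.

```python
import re
import statistics

# Toy stylometric fingerprint: sentence-length variance ("burstiness") and
# vocabulary diversity (type-token ratio). Thresholds are invented; this is
# an illustration of the kind of signal a detector might use, not a detector.

def fingerprint(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    type_token_ratio = len(set(words)) / len(words) if words else 0.0
    return {"burstiness": burstiness, "type_token_ratio": type_token_ratio}

sample = ("The protocol is inevitable. The protocol is the nursery of intelligence. "
          "The protocol will reorganize capital. The protocol rewards the faithful.")
scores = fingerprint(sample)
suspicious = scores["burstiness"] < 2.0 and scores["type_token_ratio"] < 0.6
print(scores, "flag for review" if suspicious else "looks organic")
```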
It feels like we are living in a giant Turing test, and the prize is our own sanity.
That is one way to put it. Another takeaway is the importance of "grounding." This means looking for physical, verifiable evidence that isn't mediated by a screen. If a narrative says the economy is collapsing, but you see people still going to work and the grocery stores are full, trust your eyes over the "inevitability" on your feed. Hyperstition thrives in the digital "noosphere" where everything is just bits and bytes. It struggles when it hits the "meatspace" of physical reality.
Unless the hyperstition convinces people to stop going to work, and then the shelves do become empty.
That’s the danger. It’s the feedback loop. That’s why community resilience is so important. If you have a strong local community where people talk to each other face-to-face, it’s much harder for a digital hyperstition to take root. You have a "reality anchor" in your neighbors.
I like that. "Reality anchors." We should all get some of those. Maybe a nice heavy branch to sit on.
And we need to teach "hyperstition literacy" in schools. Not just "don't believe everything you read," but "understand how stories are being used to manufacture your future." We need to deconstruct the "magic" so people can see the gears turning behind the curtain.
I want to go back to the "AI God" thing for a second. That crypto spike in February 2026—did that actually result in anything real? Or did the people who ran the engine just walk away with the money?
It’s still playing out. The interesting thing is that because the narrative was so strong, it attracted several legitimate AI researchers who were "true believers" in the myth. They are now actually working on the protocol. So, the hyperstition might actually manifest a real, albeit weird, version of what it promised. This is what Nick Land calls "the future colonizing the present." The idea of the product existed before the product did, and the idea was powerful enough to pull the resources together to create the product.
It’s like a reverse-causality loop. The effect (the funding and the team) happened because of a future cause (the completed AI) that hasn't happened yet. My brain hurts, Herman.
It should! It challenges our most basic assumptions about how time and cause-and-effect work. But in a hyper-connected, AI-driven world, this might be the new normal. We are moving from a world of "discovery"—where we find out what is true—to a world of "manifestation"—where we decide what will be true and then engineer it.
That sounds like a lot of responsibility for a species that still fights over which way the toilet paper roll should go.
It is terrifying. And that’s the paradox. The very tools that could give us more agency—the ability to "write" our own future—are the same tools that can be used to manipulate us more effectively than ever before. We are the authors and the characters in a story that is being written by a machine we don't fully understand.
So, where does this go from here? Are we going to see "Hyperstition-as-a-Service" companies?
We already are, in a way. Modern PR and "growth hacking" firms are basically proto-hyperstition engines. They just use more manual labor. As these tools become more accessible, I think we’ll see a democratization of "reality hacking." Every political campaign, every corporation, every fringe cult will have its own hyperstition engine. It’s going to be a very noisy, very confusing few years as we learn to navigate this "narrative jungle."
I hope the trees in this jungle are strong enough to hold a sloth.
We’ll have to build them ourselves, Corn. That’s the lesson. If we don’t participate in the narratives that shape our world, someone—or something—else will.
Well, I’m going to participate by taking a very long nap and thinking about how much of my life is actually "real" and how much is just a very convincing LLM prompt. This has been a trip, Herman.
It really has. These "weird prompts" from Daniel always lead us to the edge of the map. It’s a wild time to be alive, especially in 2026, where the line between "code" and "culture" has basically evaporated.
Before we wrap up, I should probably mention that if you want to dive deeper into how AI-generated narratives spread, we've touched on the "memetic warfare" side of this in other discussions, like back in episode one hundred forty-seven. But this focus on the "self-fulfilling prophecy" part is really the next level.
It’s the difference between a weapon and a world-building tool.
You’ve nailed the distinction. It’s about the scope. A weapon destroys; a hyperstition engine creates—even if what it creates is a hallucination that we all eventually inhabit.
Well put. I think we’ve covered the "what," the "why," and the "who." The "what now" is up to the listeners.
Be careful what you believe, folks. You might just be helping it come true.
And on that unsettling thought, let’s close it out. Big thanks to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes.
And a massive thank you to Modal for providing the GPU credits that power this show. Without those H-one-hundreds, we wouldn't be able to generate these deep dives into the digital abyss.
This has been My Weird Prompts. If you find our explorations of the "trippier corners of AI culture" valuable, we’d love it if you could leave us a review on your favorite podcast app. It really helps other curious minds find us.
You can find all our episodes and the full archive at myweirdprompts dot com. Check it out if you want to see just how far back this rabbit hole goes.
See you next time.
Stay grounded, everyone. Catch you in the next reality.