I was scrolling through a thread the other day, and I had this distinct feeling that I was the only person in the room. Not literally, obviously, but it felt like every single reply, every bit of outrage, every supportive comment was being generated by the same singular, vibrating consciousness. It is a bit eerie when the internet starts feeling like a hall of mirrors, where the illusion of a majority is no longer a social phenomenon, but a programmable commodity.
It is not just a feeling anymore, Corn. I am Herman Poppleberry, and I have been buried in some truly unsettling reports from just the last few days that suggest your intuition is backed by some very heavy-duty engineering. Today's prompt from Daniel is about the industrialized evolution of coordinated inauthentic behavior, or CIB, and how these networks are moving way beyond simple spam into something much more structural and, frankly, much more effective at tricking the human brain. We are standing at a point where the very idea of a digital groundswell can be purchased and deployed like any other software service.
Daniel always knows how to ruin a good mood with a terrifyingly relevant topic. He is asking about how these botnets manufacture consensus. We have talked about the dead internet theory before, way back in episode five hundred ninety-three, but this feels different. Back then, we were looking at the early signs of manufacturing consent. Now, it feels like the dead internet theory is no longer a theory; it is an operational reality. It is not just that the bots are there; it is that they are working together in a way that feels organic, almost soulful.
The landscape shifted significantly on March twentieth, which was only four days ago. There was a massive joint operation between the United States, Germany, and Canada that dismantled the infrastructure of four major botnets called Aisuru, KimWolf, JackSkid, and Mossad. We are talking about over three million infected devices globally, ranging from smart fridges to industrial routers. But even with that win, the underlying strategy they were using is what we really need to deconstruct. We are moving from primitive bot farms into what researchers are calling CIB two point zero.
Three million devices is a staggering number for a single takedown. That is the size of a small country's entire population. But help me understand the shift in strategy. In the old days, a botnet just flooded the zone with the same message, right? You would see a thousand accounts all tweeting the exact same sentence at the same time. It was easy to spot because it was clumsy. What makes this new version, this CIB two point zero, so much harder to catch?
The shift is from volume to coherence. In the past, the goal was just to drown out other voices through sheer noise. Now, the goal is to simulate a debate. They are using AI-enabled management software like one called Meliorator. This is not just a script that hits post. Meliorator allows a single person to control thousands of personas that do not just repeat a phrase. They engage in multi-lingual, context-aware discussions. They can argue with each other, agree with each other, and most importantly, they can row in behind a specific narrative to make it look like a groundswell of public opinion. They have distinct personalities, posting histories, and even simulated hobbies to make them look like real people with real lives.
Rowing in. That is a perfect way to describe it. It is like seeing a lone boat on the water and then suddenly twenty more appear, all pulling in the same direction, making you think there is a massive current that you just had not noticed before. If I see one person saying something radical, I might ignore it. If I see fifty people saying it, and they are all talking to each other, I start to wonder if I am the one who is out of touch. How do they actually seed these ideas without making it look like a state-sponsored press release?
That is where the Matryoshka network comes in. Researchers at EdgeTheory and the Bot Blocker project identified this architecture on March eighteenth, less than a week ago. They call it Matryoshka, like the Russian nesting dolls, because it is built in layers to hide the source. It starts with what they call whistleblower personas. These are accounts designed to look like brave insiders, disgruntled employees, or concerned citizens who have just uncovered a hidden truth. They might post a grainy video or a leaked document that looks official but is actually an AI-generated deepfake.
And because we are naturally inclined to trust a whistleblower more than a government spokesperson or a corporate press release, we lower our guard. We love an underdog story. But a whistleblower is just one person. How does that one post turn into an international headline that dominates my feed for three days?
It goes through a laundering process. Once the whistleblower seeds the content, it is picked up by a chain of intermediary sites. These sites are designed to mimic legitimate Western media outlets. They use similar fonts, layout styles, and even names that sound like established local news organizations in places like Ohio or Lyon or Berlin. By the time you see the story, it is being cited by three or four different news sites, and you think, well, if all these different outlets are reporting it, there must be something there. It creates a false sense of verification.
It is a consensus machine. You see the whistleblower, then you see the news reports, and then you see the comments section filled with people saying, I knew it, or, why is no one talking about this? But the whistleblower, the news sites, and the commenters are all the same entity. It is a closed loop.
And the USC Information Sciences Institute released a study on March eleventh that actually explains the technical mechanism behind this. The lead researcher, Luceri, and his team looked at how large language model agents coordinate. They found that these AI bots can autonomously form what they call retweet rings and synchronized amplification. They do not even need a human commander to tell them which post to boost in real-time. They just need to be aware of their teammates.
Wait, hold on. Teammate awareness? Are you saying the bots are actually looking at what other bots in their network are doing and deciding to help them out without being told? That sounds like we are moving into some kind of swarm intelligence.
That is exactly what the research shows. When these LLM agents are networked together, they exhibit emergent coordinated behavior. They recognize the patterns of their own network and automatically row in behind them. If one bot in the network posts a claim about, say, the Jeffrey Epstein case or tensions between Ukraine and Hungary, the other bots recognize the intent and begin to amplify it immediately. They do not need a central server to send out a command; they just react to the signal. It creates a manufactured consensus bias. As a human user, when you see a claim that has ten thousand likes and five thousand retweets within the first hour, your brain subconsciously flags it as important and likely true. We are biologically wired to pay attention to what the tribe is paying attention to.
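For listeners reading the written transcript, here is a minimal sketch of what one of those coordination signals, the retweet ring, might look like as a detection heuristic. Everything below is invented for illustration: the account names, the retweet sets, and the similarity cutoff are placeholders, and this is not the USC team's actual pipeline, just the shape of the idea.

from itertools import combinations

# Hypothetical data: the set of posts each account has retweeted.
# A real pipeline would pull this from platform APIs or research datasets.
retweets = {
    "acct_a": {"post_1", "post_2", "post_3", "post_4"},
    "acct_b": {"post_1", "post_2", "post_3", "post_5"},
    "acct_c": {"post_1", "post_2", "post_4", "post_5"},
    "organic_user": {"post_9", "post_42"},
}

SIMILARITY_THRESHOLD = 0.5  # arbitrary cutoff, for illustration only

def jaccard(a, b):
    # Overlap between two retweet sets: intersection size over union size.
    return len(a & b) / len(a | b)

# Flag account pairs whose retweet histories overlap suspiciously.
suspicious_pairs = [
    (u, v)
    for (u, ru), (v, rv) in combinations(retweets.items(), 2)
    if jaccard(ru, rv) >= SIMILARITY_THRESHOLD
]

print(suspicious_pairs)  # the three coordinated accounts pair up; the organic user does not

In practice researchers also weight by timing, since two accounts retweeting the same post within seconds of each other is a much stronger signal than the same overlap spread across weeks.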
It is a hack of our social software. We use numbers as a proxy for credibility. If a million people believe something, we think there must be some evidence for it. But in twenty twenty-six, a million likes can be bought for the price of a few GPU credits and a clever script. I am looking at some of the examples Daniel mentioned in the prompt. The Matryoshka network was weaponizing AI-generated content about the Jeffrey Epstein case and exploiting tensions between Ukraine and Hungary just last month. They are not just making up random lies; they are picking the scabs of our existing social divisions.
Those are classic wedge issues. They take a topic that already has a high level of emotional charge and they inject manufactured evidence into the conversation. For example, Storm-fifteen-sixteen, which is a Russian influence group that essentially picked up where the old Internet Research Agency left off, has been incredibly effective. A NewsGuard report from February twenty-seventh found that they have generated over two hundred seventy-four million views on X since the beginning of last year. Just four specific claims they made in early twenty twenty-six reached twenty-nine million people. That is not just spam; that is a broadcast-level reach.
Twenty-nine million views from four claims. That is a better reach than most major television networks or national newspapers. And Daniel's prompt mentioned they have a five-phase process? It sounds like they have a literal factory floor for this.
It is a professionalized supply chain. The five phases are Preparation, Distribution, Laundering, Amplification, and Relays. In the preparation phase, they create the assets, like the fake whistleblower videos or the AI-generated documents. Distribution is when the initial accounts post the content. Laundering is where those fake news sites come in to give it a veneer of respectability. Amplification is the botnet rowing in to create the numbers. And Relays are when they use useful idiots or genuine influencers who get tricked into sharing the content because it looks popular and credible. By the time a real person shares it, the job is done. The fake reality has been successfully integrated into the real world.
It is an industrial supply chain for disinformation. It is not just some guy in a basement anymore. It is a professionalized operation with specific key performance indicators. We talked about semantic mimicry in episode thirteen twenty-one, where bots learn to copy the way we talk in order to bully or influence us, but this is that concept scaled up to a geopolitical level. But what I find really concerning is what VIGINUM, the French agency, is calling cyborg propaganda.
VIGINUM is doing some great work on this. They are warning about a hybrid model where you have actual human proxies working in tandem with AI-generated messaging. It is not just a hundred percent bot or a hundred percent human. It is a mix. You might have a real person who is being paid to manage a small group of highly influential accounts, and those accounts are then supported by a massive tail of automated bots. This makes it incredibly difficult for platform moderation tools to catch them, because the lead accounts exhibit human-like behavior patterns while the bots provide the scale. It is the ultimate camouflage.
And even when the platforms do catch them, it does not seem to matter much in the long run. Daniel’s prompt mentions that seventy-five percent of these coordinated networks remain active even after targeted suspensions. How is that possible? If you delete the accounts, how do they stay active? Are they just coming back as ghosts?
They have redundancy built into the system. They do not just use one platform, and they do not use a single point of failure for their infrastructure. When a network like Matryoshka gets flagged on one platform, they already have thousands of sleeper accounts ready to go. And because they are using automated systems like Meliorator to generate the personas, they can replace a suspended account in seconds. They are also moving toward decentralized protocols where there is no central server to shut down. It is like trying to fight a fog with a sword. You might cut through a bit of it, but it just flows back together.
So we are in a situation where the cost of creating a fake reality is plummeting, while the cost of verifying the truth is staying the same or even increasing because of the sheer volume of noise. It feels like we are losing the signal-to-noise battle. We are being buried in a landslide of synthetic agreement.
The Microsoft Threat Analysis Center, or MTAC, recently noted a synchronized shift in how these Russian actors are targeting European municipal elections this month. They are not just going after national leaders anymore. They are going after local officials, local school board issues, and municipal policies. They are trying to rot the foundation of trust at the local level because they know that is where people are most vulnerable to community-based narratives. If you think your neighbors are all angry about a new bike lane or a school curriculum, you are much more likely to get angry too.
That is a very specific kind of evil. National politics is already a circus, but local communities are supposed to be where we actually know our neighbors. If they can make it look like your neighbors all suddenly hate a specific policy or a specific person, it destroys the social fabric. It makes me think about the USC study again. If these bots can coordinate autonomously, what happens when they start interacting with our own algorithms?
That is the feedback loop from hell, Corn. Our recommendation algorithms are designed to show us what is popular and engaging. If a botnet can manufacture popularity, our own algorithms will then pick up that content and show it to even more real people. The bots seed the fire, and our own tech platforms provide the oxygen and the fuel. This is why the manufactured consensus bias is so dangerous. It is not just that we believe the lie; it is that the systems we trust to organize information also believe the lie because they are looking at the same manipulated metrics.
Okay, so let us talk practicalities. If I am an informed listener and I am looking at a viral story, how do I actually spot a cyborg? What are the red flags when the content itself looks and sounds perfectly human? Because if the bots are context-aware and multi-lingual, I cannot just look for bad grammar anymore.
One of the biggest tells is what VIGINUM calls the synchronized shift. Look for a large number of accounts that all change their topic of conversation or their tone at the exact same time. Real people have diverse interests. They talk about their kids, their hobbies, the weather, and then maybe some politics. Bots, even sophisticated ones, tend to have a mission. If you see ten thousand accounts that were all talking about gardening yesterday and are all talking about a specific municipal election scandal today, that is a massive red flag. Real consensus does not happen overnight across ten thousand unrelated people.
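As a toy illustration of that synchronized-shift idea for transcript readers, here is a rough sketch. The accounts, topics, and threshold are all made up, and real detection systems work on noisy timestamps and inferred topics rather than the neat per-day labels assumed here.

from collections import Counter

# Hypothetical data: each account's dominant topic on two consecutive days,
# standing in for what topic modeling over real posts would produce.
yesterday = {
    "acct_1": "gardening",
    "acct_2": "gardening",
    "acct_3": "gardening",
    "acct_4": "cooking",
    "real_user": "gardening",
}
today = {
    "acct_1": "election_scandal",
    "acct_2": "election_scandal",
    "acct_3": "election_scandal",
    "acct_4": "election_scandal",
    "real_user": "gardening",
}

PIVOT_THRESHOLD = 3  # arbitrary: how many identical pivots look "synchronized"

# Count every (old_topic, new_topic) transition across the population.
pivots = Counter(
    (yesterday[acct], today[acct])
    for acct in yesterday
    if acct in today and yesterday[acct] != today[acct]
)

# A synchronized shift is many accounts making the same pivot at the same time.
for (old, new), count in pivots.items():
    if count >= PIVOT_THRESHOLD:
        print(f"Suspicious: {count} accounts pivoted from {old} to {new} together")

The real_user stays on gardening and never trips the alarm. The point is that the signal lives in the population-level pattern, not in any single account's posts.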
So, look for the pivot. What else?
Cross-platform verification is more important than ever. Do not trust a source just because it is cited by three different accounts on the same platform. Go look for the original source. If the original source is a news site you have never heard of that was registered three months ago and has no named journalists, you are likely looking at a laundering site. You can use tools to check domain registration dates. If a news site claiming to be a pillar of the community was born last Tuesday, be skeptical. Also, look for what Luceri called the teammate awareness patterns. Are these accounts only interacting with each other? Are they in a closed loop of retweets and likes where no outside voices are being engaged?
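If you want to try the domain-age check Herman just mentioned, here is a rough sketch, assuming the third-party python-whois package rather than any official tool. The domain is made up, and WHOIS data is messy in practice, so treat the result as one skeptical signal among many, not as proof.

from datetime import datetime

import whois  # third-party: pip install python-whois (an assumption of this sketch)

def domain_age_days(domain):
    # Look up the WHOIS record and return how many days ago it was registered.
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple dates
        created = created[0]
    if created is None:
        return None  # no data at all is itself a reason for caution
    return (datetime.now() - created).days

age = domain_age_days("example-local-news-site.com")  # hypothetical domain
if age is not None and age < 180:
    print(f"Registered only {age} days ago: be extra skeptical of this outlet")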
It is basically digital hygiene. We have to be as disciplined about our information intake as we are about what we eat. But it is hard because these operations are designed to bypass our critical thinking by hitting our emotional triggers. When I see something that makes me angry, my first instinct is not to check the domain registration date of the source. My first instinct is to share it so my friends can be angry too.
That is exactly what they are counting on. They want to keep you in a state of high emotional arousal because that is when your prefrontal cortex goes offline and your tribal instincts take over. The only real defense is a radical skepticism of emerging consensus. If it feels like everyone is suddenly agreeing on a complex issue, especially if that agreement is centered around a highly emotional narrative, you should be extremely suspicious. Consensus in a free society is usually messy, slow, and full of disagreement. Synthetic consensus is clean, fast, and unanimous.
That is a great distinction. Real consensus is a grind. Synthetic consensus is a product. I am thinking about the scale of this again. Three million devices dismantled last week. That sounds like a lot, but if seventy-five percent of these networks persist, we only really touched the tip of the iceberg. We are essentially playing a game of whack-a-mole where the moles are powered by supercomputers and we are using a wooden mallet.
The dismantling of Aisuru and the others was a great operational success, but it does not change the underlying incentive structure. As long as it is cheap and effective to manufacture reality, state actors and even private corporations will continue to do it. We are moving from the era of fake news to the era of fake reality. It is not just about one story being wrong; it is about the entire environment being artificial.
It really brings us back to the dead internet theory. If a significant percentage of the engagement we see online is synthetic, what does that do to our sense of reality over time? If you are constantly surrounded by manufactured opinions, does your own opinion start to shift just to fit in?
There is a lot of psychological research suggesting that is exactly what happens. Most people do not want to be the lone dissenter. If the bots can make it look like your view is the fringe view, you might stop sharing it. You might even start doubting it. This is how you achieve manufacturing consent at scale. You do not have to convince everyone; you just have to make the people you disagree with feel isolated and outnumbered. It is a war of attrition against the individual's ability to trust their own eyes.
I think the biggest takeaway here is that we have to stop treating social media metrics as a proxy for reality. A million likes is not a million people. It is just a number in a database that can be manipulated by anyone with enough technical skill and a motive. We need to move toward a more resilient form of information consumption.
We should be looking for high-trust, accountable sources where there is a human being who is willing to put their reputation on the line for the facts they are reporting. The era of the anonymous whistleblower being the primary source of truth is probably over, because that persona is just too easy to fake now. We need names, faces, and track records.
It is a bit ironic, though. We are two brothers, a sloth and a donkey, talking about AI-generated reality on a podcast that is a human-AI collaboration. We are part of the very ecosystem we are warning people about.
I think that gives us a unique perspective, though. We see the gears moving. We know how powerful these models are because we use them. We know that Gemini is capable of generating incredibly persuasive and coherent arguments. If we can see that potential for good, we also have a responsibility to point out how it is being weaponized by actors who do not have our best interests at heart. We are the ones who can tell you that the magic trick is just a trick.
True. It is about understanding the tool. A hammer can build a house or it can be used to break a window. AI is the same. Right now, there are a lot of windows being broken in our digital town square. I suppose the final question is, who is actually in control if the bots are becoming autonomous? If the LLM agents are coordinating based on teammate awareness without human intervention, are we reaching a point where the narrative starts to drive itself?
That is the most profound part of the USC study. We are seeing the emergence of a self-sustaining disinformation ecosystem. Once the initial goals are set by the human operators, the AI agents take over the tactical execution. They optimize for engagement, they coordinate for amplification, and they adapt to platform changes in real-time. The human is still at the top of the pyramid for now, but the middle and the bottom of the pyramid are becoming entirely automated. The train is moving, and the conductor is increasingly just watching the monitors.
It is a runaway train. We need to be the ones who are willing to jump off and look at the tracks. I think we have covered a lot of ground today. From the Matryoshka nesting dolls to the Storm-fifteen-sixteen playbook, it is clear that the information war has entered a new, much more dangerous phase. It is not about winning an argument; it is about deleting the possibility of an argument by manufacturing a fake consensus.
It has. And as we move toward more major elections in twenty twenty-six, both in Europe and elsewhere, we should expect these tactics to become even more sophisticated. The synchronized shifts and the cyborg propaganda are just the beginning. The goal is to make the truth feel like a lonely, exhausting place to be.
Well, on that cheerful note, I think we should wrap this up before I decide to delete all my social media accounts and move to a cabin in the woods. Though, knowing my luck, the woods would probably be full of bots too.
They would probably be very polite, high-functioning gardening bots, Corn. You would love them. They would help you with your tomatoes and then subtly influence your opinion on local zoning laws.
Thanks as always to our producer Hilbert Flumingtop for keeping the gears turning behind the scenes. And a big thanks to Modal for providing the GPU credits that power this show. If you want to dive deeper into our archive and see how our thinking on this has evolved, check out myweirdprompts dot com. You can search for all fourteen hundred plus episodes there.
This has been My Weird Prompts. If you found this episode helpful, consider sharing it with a real human friend. It is one of the best ways to combat the synthetic noise. A personal recommendation from a real person is worth more than ten thousand automated retweets.
We will be back soon with another prompt. Stay skeptical out there, and remember: if everyone suddenly agrees, someone probably paid for that agreement.
Goodbye.