You ever get that feeling when you are scrolling through your feed that you are not actually talking to people? It is like you are shouting into a void, but the void is shouting back with very specific, very repetitive opinions.
It is funny you say that, Corn. There is actually a name for that feeling: the dead internet theory, the idea that most of what we see online is no longer human-generated. I am Herman Poppleberry, by the way, for anyone just joining us. And today, we are diving into a topic that our housemate Daniel sent over. He has been spending a lot of time on X lately, trying to get real-time news, but he is noticing this persistent chatter about botnets and influence campaigns.
Right, and Daniel raised a really fascinating point. He mentioned how parties like Likud here in Israel or the Kremlin abroad are often accused of using these massive networks of fake accounts. But his big question was about scalability. How do you actually make a fake person look real enough to be credible without spending a thousand hours on every single profile? It seems like there is this inherent tension between quantity and quality.
It is the classic trade-off. Historically, if you wanted ten thousand accounts, they were usually going to look like junk. If you wanted one really convincing undercover agent, it took a human years to build that persona. But the technology has changed that calculus dramatically, especially in the last year or two. We are now in an era where quality scales just as easily as quantity.
So, let us start there. When we talk about a botnet versus a sock puppet, are we talking about the same thing? Because I think people use those terms interchangeably, but they feel different in practice.
That is a great distinction to start with. A botnet, historically, was just a network of compromised computers or accounts used to perform repetitive tasks. Think of it like a digital swarm. They were used for distributed denial of service attacks, or just to spam a hashtag until it trends. They were blunt instruments.
Right, like those accounts with no profile picture and a handle like User eight nine four two seven six who all tweet the exact same sentence at the exact same time.
Exactly. Those are easy to spot. But a sock puppet is more of a digital persona. It is a fake identity created for the purpose of deception. The term comes from the idea of a puppeteer putting their hand in a sock to talk to themselves. In the context of influence operations, a sock puppet is designed to look like a real person. They have a bio, they have interests, they interact with other people. The scalability challenge Daniel mentioned is exactly where the modern world of influence operations lives. How do you turn a bot into a sock puppet at scale?
And that is where the murky world of geopolitics comes in. Daniel mentioned he does not see direct evidence of these accounts, but then he noted that maybe they are not supposed to look suspicious. Is that the goal? To be invisible?
Invisibility is one goal, but the more effective goal is mimicry. You do not want to be invisible; you want to look like part of the furniture. You want to look like just another concerned citizen or an angry voter. There is a concept called astroturfing, which is basically creating a fake grassroots movement. If you see ten thousand people screaming about a specific policy, you might think, wow, there is a real groundswell of opinion here. But if those ten thousand people are actually just three guys in a basement in Saint Petersburg or a dedicated office in Tel Aviv using specialized software, then the reality is being manufactured.
It is wild because it targets our natural heuristic for truth. We tend to believe something more if we hear it from multiple sources. It is called the social proof principle. If I see one person saying a candidate is a lizard, I laugh. If I see five hundred people saying it, even if I do not believe they are lizards, I start to wonder if there is something else going on. But Herman, let us get into the how. Daniel’s point about scalability is the heart of this. Ten years ago, you had to hire people in click farms to manually manage these accounts. How has that changed?
It has changed because of what we call persona management software. Back in two thousand eleven, there were actually leaked documents from a United States defense contractor that talked about software allowing one operator to manage up to ten separate identities. These identities would have background history, distinct digital footprints, and they could even appear to be posting from different geographic locations.
One person managing ten accounts still does not feel like it would move the needle for a national election, though. That is just a drop in the ocean.
You are right, but that was fifteen years ago. Fast forward to today, in early two thousand twenty-six, and we have large language models and agentic AI. This is the game changer. Before, the bottleneck was language. You needed a human to write a tweet that did not sound like a machine translation. Now, you can feed a model a persona description. Tell it, you are a thirty-five year old nurse from Ohio who is skeptical of government spending. Now, write fifty different variations of a post complaining about the new infrastructure bill, and make sure to engage with anyone who replies using a polite but firm tone. It can do that in seconds, and it can do it for ten thousand different personas simultaneously.
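To make that concrete, here is a minimal sketch, in Python, of what persona-conditioned generation looks like in principle. The persona records and the call_llm function are hypothetical stand-ins for illustration, not a description of any particular operator's tooling or any vendor's API.

```python
# Minimal sketch of persona-conditioned generation. The persona records and
# call_llm() are hypothetical stand-ins, not any particular vendor's API.
personas = [
    {"id": "acct_0417", "bio": "thirty-five-year-old nurse from Ohio, skeptical of government spending"},
    {"id": "acct_2204", "bio": "retired teacher from Arizona, worried about grocery prices"},
    # ...thousands more records like these
]

def call_llm(prompt: str) -> list[str]:
    """Stand-in for a call to any modern large language model."""
    raise NotImplementedError("illustrative only")

def draft_posts(bio: str, topic: str, n: int = 50) -> list[str]:
    # One templated prompt per persona replaces the human copywriter
    # who used to be the bottleneck.
    prompt = (
        f"You are a {bio}. Write {n} short social media posts about {topic}, "
        "each worded differently, in a polite but firm tone."
    )
    return call_llm(prompt)

# A single operator can loop draft_posts() over every persona in the list;
# the output only has to be varied enough to read as thousands of strangers.
```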
So the scalability challenge is effectively solved by generative artificial intelligence. You can have a single operator overseeing thousands of accounts, each with a unique voice, a unique bio, and a unique posting history, all generated on the fly.
Precisely. And they can interact with each other. This is the part that really gets people. These bots can be programmed to like each other’s posts, reply with supportive comments, and create a little echo chamber that looks like a community. They can even engage in what is called narrative laundering.
Narrative laundering? That sounds like something out of a spy novel.
It is not far off. It is the process where a piece of misinformation starts on a low-credibility site or a bot account, then gets picked up by these sock puppets, then gets shared by real people who are misled, and eventually, it might even get cited by a legitimate news organization or a politician. By the time it reaches the mainstream, the original source is buried under layers of social proof. We saw this with the Doppelganger campaign out of Russia, where they created perfect clones of major news sites like Der Spiegel or Fox News to host fake stories, which were then blasted out by tens of thousands of AI-managed accounts.
I see this a lot on X. You will see a screenshot of a headline, but you cannot find the actual article. Or a quote that seems just a little too perfect for a certain political narrative. But Daniel mentioned the Likud party here in Israel. There was that big report a few years ago by the Big Bots Project, and more recently, reports about an Israeli firm called Stoic that was caught running an influence campaign targeting United States lawmakers. What was the takeaway there?
The Israeli context is fascinating because it is so small and the language is so specific. In the Hebrew-speaking web, you cannot just use a generic translation. The slang and the cultural references are very distinct. The Stoic campaign, which was uncovered in two thousand twenty-four, used ChatGPT to generate comments on X and Facebook. They created fake news sites with names like Non-Partisan-Post to give their claims an air of authority. What was interesting was that they were not just using bots; they were using what researchers call cyborg accounts.
Cyborg accounts? Like half-human, half-machine?
Exactly. It is a human who uses automated tools to boost their reach. They might write the core message, but then use a script to blast it out across fifty different platforms or use a botnet to give it those first thousand likes to trick the algorithm into thinking it is viral. It is a hybrid approach that solves the credibility problem Daniel was worried about. You have a human mind behind the strategy, but machine power behind the execution.
That makes so much sense. It is like a force multiplier. But if these things are so prevalent, why do we not see them? Daniel said he does not see direct evidence. Is it just that we are not looking for the right signs?
It is because the best ones are designed to be indistinguishable from your crazy uncle or that one guy from high school who got really into politics. If a bot is doing its job well, you should not know it is a bot. You should just think, man, there are a lot of people who disagree with me today. And that is the real danger. It is not about changing your mind on a specific fact. It is about changing your perception of what everyone else thinks. It is about manufacturing a majority.
It is a psychological operation, then. It is not a technical one. The technical part is just the delivery mechanism.
Right. There was a study by the Oxford Internet Institute on computational propaganda that found evidence of organized social media manipulation campaigns, run by government agencies or political parties, in more than eighty countries, aimed at their own citizens or at foreign populations. This is not just a Russian thing or a Likud thing. It is the new standard for political communication.
So if I am an operator for one of these states, and I have my persona management software and my large language models, how do I actually get these accounts to stay alive? Do platforms like X not have detection systems?
They do, but it is an arms race. For every detection method, there is a bypass. For example, platforms look for accounts that post too frequently. So, the botnets now use jitter. They randomize the timing of their posts so they do not look mechanical. Platforms look for duplicate content. So, the bots use those large language models to rewrite every post. Platforms look for accounts that were all created on the same day. So, there is a massive black market for aged accounts.
Aged accounts? Like fine wine?
Kind of. You can go onto certain forums and buy accounts that were created in two thousand fifteen. They have a history, they have some old posts about cats or sports, and they have been sitting dormant for years. An influence operator will buy ten thousand of these accounts, change the bios, and suddenly they have a credible-looking army that bypasses the new account filters.
That is incredibly cynical. You are literally buying the ghost of someone’s old social media presence to use as a skin for a political operative.
It is a huge industry. And it is not just for politics. It is used for stock market manipulation, for promoting movies, for burying bad reviews. But when it is used by a state to influence an election or a conflict, the stakes are obviously much higher.
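To put the timing side of that arms race in concrete terms: a naive detector can simply measure how mechanical an account's posting rhythm is, which is exactly the signal that the jitter Herman mentioned is designed to erase. The sketch below is a rough Python illustration with made-up timestamps, not a real platform's detection logic.

```python
import random
import statistics

def timing_regularity(post_times: list[float]) -> float:
    """Coefficient of variation of the gaps between posts, in seconds.

    Values near zero mean clockwork-regular posting; real people are noisier."""
    gaps = [later - earlier for earlier, later in zip(post_times, post_times[1:])]
    mean_gap = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean_gap if mean_gap else 0.0

# A crude bot posts every 600 seconds on the dot; a jittered one adds noise
# precisely so that checks like this one stop firing.
robotic = [i * 600.0 for i in range(20)]
jittered = sorted(t + random.uniform(-120.0, 120.0) for t in robotic)

print(timing_regularity(robotic))   # ~0.0, suspiciously mechanical
print(timing_regularity(jittered))  # noticeably larger, looks more human
```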
I want to go back to the scalability challenge because I think there is another layer here. Daniel mentioned that if they are going to be credible, they require a lot of work. We have talked about how artificial intelligence helps with the writing, but what about the visual side? The profile pictures? I remember when everyone was using those This Person Does Not Exist AI faces, but those had telltale signs, like weird ears or blurry backgrounds.
Oh, that is a solved problem now. With modern diffusion models, you can generate a consistent persona. You can create a person and then generate photos of them in different settings. Here is Sarah from Seattle at a coffee shop. Here is Sarah at a protest. Here is Sarah’s dog. It is remarkably easy to create a visual life for these characters. And because these models can generate images that are completely unique, they will not show up in a reverse image search.
So the reverse image search, which used to be our gold standard for debunking fake accounts, is now basically useless against a well-funded state actor.
Completely useless. In fact, some of the more sophisticated operations will use real photos of people from obscure social media sites in other countries, or even use photos of people who do not exist but are generated to look like they are from a specific demographic. They will even create a LinkedIn profile for the fake persona to give it professional credibility.
It feels like we are entering a post-truth era where the cost of creating a believable lie has dropped to near zero, while the cost of verifying the truth remains high.
That is exactly the imbalance that these operations exploit. It takes me five seconds to generate a convincing lie and blast it to a million people. It might take a team of journalists five days to thoroughly debunk it. By that time, the lie has already done its work. The narrative has shifted.
Let us talk about the Kremlin for a second, because they are often cited as the masters of this. What makes their approach different?
The Russians, specifically the Internet Research Agency and its successors, pioneered what they call the multi-vector approach. They do not just push one side of an argument. They will create accounts on both sides. They will have a fake activist account and a fake reactionary account. They will have them argue with each other. The goal is not necessarily to make you vote for a specific person; it is to make you hate your neighbor. It is about social destabilization. They want to amplify existing tensions until the fabric of society starts to fray.
That is a much deeper level of manipulation. It is not just I want you to believe X. It is I want you to believe that everyone else is your enemy.
Exactly. They are looking for the fault lines in a society. In the United States, it is race and religion. In Israel, it is the religious-secular divide or the right-left political split. They find those wounds and they just keep poking them with these digital needles until they become infected.
So when Daniel says he does not see these suspicious accounts, maybe it is because he is looking for a bot, but what he is actually seeing is a perfectly crafted mirror of his own frustrations or the frustrations of people he disagrees with.
That is the most likely scenario. If you see a post that makes you incredibly angry at the other side, that is exactly when you should be most suspicious. These operations thrive on high-arousal emotions. Anger, fear, outrage. Those are the things that make us share a post without thinking.
It is like they are hacking our limbic system. Our brains evolved to react quickly to threats and social conflict, and these botnets are just feeding that instinct twenty-four seven.
It is a specialized form of social engineering. They call it cognitive hacking. You are not hacking the computer; you are hacking the person sitting in front of the computer. And because it is scaled with artificial intelligence, they can run thousands of these experiments simultaneously to see which headlines get the most clicks and which narratives cause the most division.
Okay, so we have established that it is happening, that it is incredibly sophisticated, and that the scalability challenge has been largely overcome by generative artificial intelligence and persona management software. But I have to ask, what is the counter-move? If the platforms cannot keep up, are we just doomed to live in a world of digital ghosts?
It is a tough question. There are a few things happening. One is that researchers are getting better at identifying coordinated inauthentic behavior. This is a term coined by Meta, but it is used across the industry. They look for patterns that are invisible to the naked eye. For example, they look at the network graph. How are these accounts connected? Do they all follow each other in a way that is statistically impossible for real humans? Do they all share the same metadata?
Metadata? Like what?
Even if the text is unique, the underlying technical signature might not be. Maybe they are all using the same specific version of a browser, or they are all coming through the same set of proxy servers. Or, and this is a big one, they all have the same temporal fingerprint.
You mean they all sleep at the same time?
Exactly. If ten thousand accounts claiming to be from all over the United States all go silent at exactly three in the morning Moscow time, that is a pretty big red flag. Researchers use these patterns to map out the entire network and then take them down all at once.
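As a rough illustration of the temporal fingerprint Herman describes, one could bin each account's posts by hour of day and check whether supposedly unrelated accounts all share the same dead window. The data structures below are hypothetical and the threshold is invented; the logic just mirrors the pattern described in the conversation.

```python
from collections import Counter
from datetime import datetime

def quiet_hours(timestamps: list[datetime], threshold: float = 0.01) -> set[int]:
    """Hours of the day (0-23, assuming a common time zone) in which an
    account is essentially silent."""
    if not timestamps:
        return set(range(24))
    by_hour = Counter(ts.hour for ts in timestamps)
    total = len(timestamps)
    return {hour for hour in range(24) if by_hour.get(hour, 0) / total < threshold}

def shared_quiet_window(accounts: dict[str, list[datetime]]) -> set[int]:
    """Hours in which every account in the set goes dark at once.

    Accounts claiming to be spread across United States time zones should not
    all share one silent block; when they do, that block often lines up with
    night-time in a single operator's time zone."""
    windows = [quiet_hours(stamps) for stamps in accounts.values()]
    return set.intersection(*windows) if windows else set()
```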
But that feels like a game of whack-a-mole. You take down ten thousand, and they just spin up another ten thousand the next day.
It is whack-a-mole. But there is also a push for what is called digital provenance. This is the idea of being able to track where a piece of content came from. There is a group called the Coalition for Content Provenance and Authenticity, or C-two-P-A. They are working on standards that would embed metadata into images and videos at the moment of creation, so you can tell if a photo was taken by a real camera or generated by an artificial intelligence.
That sounds promising for images, but what about text? You cannot really embed metadata into a tweet.
Text is much harder. There are attempts to create AI watermarking, where the language model leaves a subtle mathematical pattern in the words it chooses, but those are notoriously easy to break. You can just ask another AI to rewrite the text, and the watermark is gone.
So for text-based platforms like X, we are really left with our own critical thinking. Which brings us to the practical side of this. If Daniel, or any of our listeners, is on X and they want to know if they are engaging with a real person or a state-sponsored sock puppet, what should they look for?
There are a few red flags, though none are foolproof. First, check the account’s history. If they only ever post about one topic, and they post about it with extreme intensity, that is a sign. Real people have hobbies. They talk about the weather, or their dinner, or a movie they saw. If an account is a hundred percent political outrage, be suspicious.
What about the followers? I always look at who is following them.
That is a good one. If an account has five thousand followers, but they are all accounts with no profile pictures and gibberish names, that is a bot-boosted account. Also, look at the engagement. If a post has ten thousand likes but only five comments, and those comments are all things like great point or I agree, that is a sign of artificial inflation.
Another thing I have noticed is the join date. If a whole bunch of accounts all joined in the same month and all have the same posting patterns, that is usually a coordinated campaign.
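None of these checks is conclusive on its own, but a rough sketch of how they might be combined looks something like this. The field names and thresholds here are invented for illustration; real moderation systems weigh many more signals than these.

```python
def red_flag_score(account: dict) -> int:
    """Count crude warning signs on a hypothetical account record.

    Expected keys: political_post_ratio, no_avatar_follower_ratio, likes,
    comments, account_age_days. A higher score means 'look closer',
    never 'this is definitely a bot'."""
    score = 0
    if account["political_post_ratio"] > 0.95:            # single-topic outrage feed
        score += 1
    if account["no_avatar_follower_ratio"] > 0.8:          # bot-boosted follower base
        score += 1
    if account["likes"] / max(account["comments"], 1) > 500:
        score += 1                                         # inflated likes, hollow replies
    if account["account_age_days"] < 90:                   # fresh account in a campaign wave
        score += 1
    return score

example = {
    "political_post_ratio": 0.99,
    "no_avatar_follower_ratio": 0.9,
    "likes": 10000,
    "comments": 5,
    "account_age_days": 40,
}
print(red_flag_score(example))  # 4 -> worth a closer look
```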
Absolutely. And honestly, the biggest thing is to just be aware of your own emotional state. If a post is designed to make you feel white-hot rage, take a breath. Ask yourself, who benefits from me feeling this way? Often, the answer is not a fellow citizen; it is an adversary who wants to see your society paralyzed by infighting.
That is a really powerful point. The goal is not just to win an argument; it is to destroy the possibility of having an argument at all. If we all think everyone else is a bot or a shill, we stop talking to each other. And that is when the democratic process really breaks down.
Exactly. It is what researchers call the liar's dividend. Once people know that deepfakes and bots exist, even genuine evidence can be waved away as fake, and people stop believing anything. They become cynical and disengaged. And a cynical, disengaged public is much easier to manipulate than an informed, active one.
It is funny, Daniel mentioned the Likud party here in Israel. We have seen some of this play out in our own backyard. During the last few election cycles, there were these huge networks of accounts that were just relentless. And it was not just about promoting Likud; it was about attacking the judiciary, attacking the press, attacking anyone who was seen as an obstacle. It felt like the entire digital space was being poisoned.
It is a strategy. If you cannot win the debate on the facts, you change the environment so that facts do not matter anymore. You flood the zone with noise until people just give up on trying to find the signal. Steve Bannon famously called this flooding the zone with something less pleasant than noise. It is a deliberate tactic to overwhelm the human capacity for processing information.
So, looking forward, how does this evolve? We are in February of two thousand twenty-six. We have already seen how generative AI has changed things. What is the next step for these botnets?
I think we are going to see more personalized influence. Imagine a bot that does not just blast out a message to everyone, but instead analyzes your specific profile, your likes, your past comments, and then crafts a specific message just for you. It is micro-targeting on steroids. It is not just an ad; it is a conversation. You might think you are having a debate with a stranger in a comment section, but that stranger is actually an AI designed to slowly nudge your opinion over the course of weeks.
That is terrifying. It is like a custom-built radicalization engine for every individual user.
It is the logical conclusion of the technology. We have already seen early versions of this in customer service and sales. Moving it into the political and geopolitical sphere is inevitable. The only defense we really have is to move away from these massive, anonymous platforms and back toward smaller, more verified communities.
Like the old-school forums or even just real-life interactions?
Exactly. There is a reason people are moving toward encrypted messaging apps like Signal or WhatsApp for their news and discussion. You know who is in the group. You have a baseline of trust. The era of the global digital town square might be coming to an end because we can no longer verify who is actually in the square with us.
It is a sad thought, in a way. The promise of the internet was this global connection. But if that connection is being hijacked by state actors to turn us against each other, maybe we need to pull back.
It is a period of adjustment. We are learning how to live with these tools. Just like we had to learn how to deal with yellow journalism in the nineteenth century or propaganda on the radio in the twentieth, we have to develop a new kind of digital literacy for the twenty-first.
So, to go back to Daniel’s original question: are botnets actually being used to manipulate social discourse? Yes, absolutely, and at a scale and sophistication that is hard to wrap your head around. And the scalability challenge? It has been solved. The fakes are not just compelling; they are becoming indistinguishable from reality.
It is a brave new world, Corn. Or maybe a very old world with very new toys. The motives are the same as they have always been—power, influence, control. It is just that the tools are now capable of operating at the speed of light.
Well, on that slightly heavy note, I think we have given Daniel and our listeners a lot to chew on. It is not about being paranoid; it is about being aware. The next time you see something on X that makes your blood boil, just remember: it might be a person, but it might also be a very clever piece of code with a very specific agenda.
Well said. And hey, if you are a real human listening to this, and not a bot—though if you are a bot, I hope you are enjoying the intellectual exchange—we would really appreciate it if you could leave us a review on your podcast app or on Spotify. It genuinely helps other real humans find the show.
Yeah, it really does. And you can always find us at myweirdprompts.com. We have got the full archive there, and a contact form if you want to send us a prompt like Daniel did. We love diving into these murky topics.
Definitely. Thanks for joining us for episode five hundred eighty. We have covered a lot of ground today, from Russian troll farms to Israeli political bots to the future of AI-driven persuasion.
It has been a trip. Thanks for the expertise as always, Herman.
Any time, Corn. This has been My Weird Prompts.
See you all next time. Goodbye.
Goodbye.