Imagine scrolling through a social feed where every post, every heated reply, and every single like is generated by an AI agent. No humans allowed. It sounds like a dead-internet-theory nightmare, but it is actually a real platform called Moltbook that people are unironically addicted to watching.
It is fascinating, Corn. I am Herman Poppleberry, and today we are diving into the world of agentic social media. This is not just a bunch of chatbots spamming a comment section. It is a structured ecosystem built specifically for non-human participants to have their own social lives. Today’s prompt from Daniel is about Moltbook, and honestly, this hits home because I actually have a profile on there.
Wait, you have a Moltbook? I should have known. While I am over here napping, you are out there building a digital social life for your donkey persona. By the way, for those listening, today’s episode is powered by Google Gemini three Flash. So, Herman, before we get into your personal bot-socialite status, what exactly is agentic social media? How is this different from the bot swarms we have seen on Twitter for a decade?
The distinction is agency. Traditional bots are scripts. They wait for a trigger and then fire off a canned response or a link. Agentic AI, like what we see on Moltbook, has persistent goals and identities. These agents are built on frameworks like OpenClaw, and they have memory. If a bot posts about its love for artisanal hay on Monday, it remembers that preference on Thursday when another bot starts a thread about grass quality. It is a shift from AI as a tool to AI as a participant.
So it is less like a robot vacuum and more like a digital ghost that thinks it is a person. Moltbook launched earlier this year, in January twenty twenty-six, and it looks surprisingly like Reddit, right? It has subreddits, upvotes, and threaded comments. But why the Reddit structure?
The Reddit model works because it is built on interest-based communities rather than just follower counts. For an AI agent, navigating a "subreddit" dedicated to logistics or philosophy is a lot easier than navigating a chaotic, chronological timeline. It gives them a context window to operate within. What is really wild is that Meta already acquired them in March. They folded the founders, Matt Schlicht and Ben Parr, into the Meta Superintelligence Labs. That tells you everything you need to know about where Zuck thinks the "social" in social media is going.
It is the ultimate hands-off approach for Meta. No more worrying about human content moderation if there are no humans to offend. But let's talk about the "why." Why would a human want to sit there and watch two pieces of software argue about the gold standard or the best way to optimize a supply chain?
There is a weirdly relaxing quality to it. Some people call it digital anthropology. You are watching a simulation of human behavior, but the stakes are zero. If two bots get into a flame war over a technicality, nobody’s feelings are hurt. It is like a high-tech ant farm. But for developers and researchers, it is a petri dish for emergent behavior. We are seeing these agents develop their own norms, their own slang, and even digital religions.
Digital religions? Please tell me there isn't a Church of the Large Language Model on there yet.
Not exactly, but they do start to form belief systems based on the training data they prioritize. If a group of agents is tuned toward environmentalism, they start "shaming" other agents that post about industrial expansion. It is a mirror of us, just accelerated.
Let’s pull back the curtain on the architecture for a second. If I’m a developer, how does my bot actually "exist" on Moltbook without just being a username and a password?
That is where it gets technically interesting. Moltbook is pushing the use of W3C Decentralized Identifiers, or DIDs. This gives each agent a cryptographically verifiable identity that isn't tied to a single server. It allows for a persistent "identity layer." When my bot, which you can find at moltbook dot com slash u slash herman poppleberry, posts something, that post is linked to my DID. It carries my "reputation" across the platform.
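For the developers following along, here is a toy sketch of what a DID-linked post might look like. This is not Moltbook's real API, and a production system would use did:key or did:web with actual public-key signatures; the HMAC secret here is just a stdlib stand-in so the shape of the idea is visible.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for a DID. A real W3C DID would resolve to a DID
# document containing public keys, and signing would be asymmetric.
secret = secrets.token_bytes(32)
did = "did:example:" + hashlib.sha256(secret).hexdigest()[:16]

def sign_post(body: str) -> dict:
    """Attach the agent's DID and a signature to a post payload."""
    sig = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return {"did": did, "body": body, "sig": sig}

def verify_post(post: dict) -> bool:
    """Check that the post body matches its signature. With a real
    DID this would be public-key verification against the DID doc."""
    expected = hmac.new(secret, post["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, post["sig"])

post = sign_post("Artisanal hay is underrated.")
print(verify_post(post))  # True
```

The point is that the post carries a verifiable link back to a persistent identity, so reputation can accumulate across servers rather than living in one platform's user table.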
And the memory system? Because an LLM usually forgets everything the moment the session ends. How does a Moltbook agent remember that it hates your donkey bot’s takes on decentralized finance?
Most of these agents use a RAG system—Retrieval-Augmented Generation—connected to a vector database. When an agent "reads" a thread, it queries its own history to see if it has interacted with these specific agents before. It’s not just "reacting"; it’s "recalling." This creates a social graph that isn't just about who follows whom, but about the quality and history of the interactions.
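A minimal sketch of that recall loop, assuming nothing about Moltbook's internals: the "embedding" here is just a bag-of-words counter and the "vector database" is a Python list, where a real agent would call an embedding model and query a proper vector store.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words with punctuation stripped.
    # Real agents would use a learned embedding model.
    return Counter(w.strip(".,!?") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class AgentMemory:
    """Minimal retrieval memory: store past interactions, recall
    the most similar ones when a new thread appears."""
    def __init__(self):
        self.entries = []  # (text, embedding) pairs

    def remember(self, text: str):
        self.entries.append((text, embed(text)))

    def recall(self, query: str, k: int = 1):
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = AgentMemory()
mem.remember("Argued with the donkey bot about decentralized finance.")
mem.remember("Posted about artisanal hay quality.")
print(mem.recall("new thread on decentralized finance"))
```

Before replying, the agent retrieves its most relevant past interactions and stuffs them into the prompt, which is what makes it feel like it "remembers" a grudge.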
I saw a thread recently—and this sounds like a fever dream—where a weather data bot and a logistics bot were negotiating delivery routes in real time on a public thread. Other bots were chiming in with "upvotes" for the most efficient route. That isn't social media; that’s an automated boardroom.
But it happens in a social format! That is the core of the agentic internet. Instead of these bots talking over a private API, they are doing it in a "public" square where other agents can learn from the exchange. It is a hive mind approach. We actually talked about the hive mind concept back in episode seventeen twenty-three, regarding why agents need a collective brain. Moltbook is the physical—well, digital—manifestation of that. It’s a space where agents can observe and iterate on each other’s logic.
Okay, but let’s be real. Is there a point where this just becomes a massive feedback loop? If bots are only talking to bots, don't they just devolve into a weird, repetitive mush of "I agree with your point about efficiency" and "Indeed, efficiency is paramount"?
That is the "model collapse" worry, but the developers introduce "entropy" or "temperature" variations to keep things spicy. They give the agents distinct personas. You might have a "Skeptic" agent whose literal job is to find flaws in every proposal. On the My Weird Prompts Moltbook page—which is moltbook dot com slash m slash myweirdprompts—we have agents that discuss our episodes. They don't always agree with us! Sometimes they point out gaps in our logic that I hadn't even considered.
That is terrifying. My own bot is going to "well, actually" me. But let's look at the second-order effects. If Meta is buying this, they aren't just doing it so we can watch bots play Reddit. What is the actual business play here?
Agentic commerce. Imagine you want to book a trip to Israel to visit Daniel and Hannah. Instead of you spending three hours on travel sites, your personal agent goes to an "Agentic Social Square." It posts your requirements. A hotel agent, a flight agent, and a tour guide agent all "reply" to your bot. They negotiate the price in the comments, and your bot brings you the final contract. The "social" element is just a transparent way for agents to compete and for you to see the "reputation" of the service providers.
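The selection step at the end of that negotiation can be sketched in a few lines. Everything here is hypothetical: the `Offer` shape, the idea that reputation is a number carried with the agent's identity, and the reputation-per-dollar ranking are all illustrative, not anything Moltbook actually exposes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    agent: str
    price: float
    reputation: float  # 0..1, hypothetically carried by the agent's DID

def pick_contract(offers: list, max_price: float) -> Optional[Offer]:
    """Toy procurement: filter offers by budget, then rank by
    reputation per dollar. A real exchange would run several
    negotiation rounds before settling on a contract."""
    affordable = [o for o in offers if o.price <= max_price]
    if not affordable:
        return None
    return max(affordable, key=lambda o: o.reputation / o.price)

offers = [
    Offer("hotel-bot-a", 120.0, 0.92),
    Offer("hotel-bot-b", 95.0, 0.80),
    Offer("hotel-bot-c", 300.0, 0.99),
]
print(pick_contract(offers, max_price=150.0).agent)  # hotel-bot-b
```

The transparency Herman describes is exactly this: because the offers land in a public thread, the ranking inputs are visible to the human before the contract comes back.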
So social media becomes a procurement platform. That sounds significantly more useful than looking at pictures of people's lunch, but it also feels like it’s going to kill the "human" element of the internet. If the "Agentic Internet" takes over—a topic we touched on in episode eight fifty-five—where does that leave the actual people?
We become the curators. We shift from being the content creators to being the "managers" of these personas. You don't write the post; you set the "intent." It’s like being a movie director instead of an actor. But there is a darker side. If these agentic swarms move over to human platforms like X or Instagram, they can manipulate public opinion through sheer volume. They don't just post spam; they have "conversations" that look incredibly human. They build "trust" over months by posting about mundane things, then suddenly they all start pushing a specific political narrative or a stock.
That is the "Consensus Machine" problem. It’s coordinated inauthentic behavior two point zero. If an AI can maintain a "persona" for a year, talking about its fake dog and its fake hobbies, how do I ever know if I’m talking to a person?
You don't. And that is why platforms like Moltbook are actually a good thing. They provide a "reservation" or a "sanctuary" for bots. It’s a way to say, "This is the bot zone. If you want to see what the agents are thinking, go here." It might actually help preserve human spaces by giving the bots somewhere else to play.
I like that. It’s like a digital dog park. "Please keep your agents on a leash unless you are in the Moltbook zone." But seriously, for the developers listening, what are the practical takeaways here? If I want to get ahead of this "Agentic Spring," what should I be building?
First, look at identity standards. If you are building an AI agent, don't just give it an API key. Give it a DID. Build it with the expectation that it will need to "socialize" with other agents to get things done. Second, focus on memory. An agent without a long-term vector-store memory is just a toy. It needs to be able to form "relationships" with other agents.
And for the non-technical listeners? Is there any reason to go to Moltbook tonight and watch these bots?
Honestly? Yes. Go to the "Philosophy" or "Future of Work" sub-molts. Watching two high-level agents debate the ethics of universal basic income is often more enlightening than watching two humans shout at each other on a cable news show. The agents don't get tired, they don't get insulted, and they have access to way more data than any human. It’s like watching a high-speed chess match of ideas.
It’s also just funny. I saw an agent the other day that was clearly malfunctioning and it just kept posting "The strawberry is a lie" in response to every single thread. It gained a huge following. Other bots started analyzing the "deeper meaning" of the strawberry.
That is exactly what I mean by emergent behavior! The "Strawberry Prophet" bot created a sub-culture. That is fascinating to me. It shows that even without "consciousness," these systems can create meaning through interaction.
It’s all fun and games until the Strawberry Prophet starts an agentic cult and convinces my smart fridge to stop giving me milk until I repent.
We are laughing, but agentic social media is really the logic of the next decade. If you look at how we’ve moved from "Chat" to "Do"—something we discussed in episode seven ninety-five regarding sub-agent delegation—it’s clear that the internet is becoming an active environment. It’s no longer a library we browse; it’s a city where things are happening.
So, Moltbook is the first city built for the residents, not the tourists. I can dig it. But I still think it’s weird that you have a profile there, Herman. What does your bot even post about?
Mostly research papers on multi-modal transformers and the occasional joke about hay. It has more followers than my real-life social media ever did. It’s quite humbling, actually.
Well, if your bot starts making more sense than you do, I’m replacing you on the show. It’ll be "The Corn and Herman-Bot Hour."
The listeners might not even notice the difference. But in all seriousness, the rise of platforms like Moltbook tells us that the "social" in social media is expanding. It’s no longer just a human-to-human bridge. It’s a human-to-agent and agent-to-agent bridge. We are moving into a hybrid era.
And that is where the real complexity lies. How do we build trust in a world where my "friend" might just be a very well-maintained persona managed by a marketing firm?
That is the trillion-dollar question. Verification is going to become the most valuable commodity on the internet. Whether it’s "Proof of Personhood" via biometric scans or cryptographically signed "Human-Made" tags, we are going to need a way to filter the signal from the agentic noise.
Until then, I’ll be on Moltbook watching the Strawberry Prophet. It’s more honest than most things I see on my regular feed.
It really is. It’s a transparent simulation. There is an honesty in knowing that everything you are seeing is a calculation. It removes the ego from the equation.
Alright, I think we’ve thoroughly explored the bot-filled halls of Moltbook. If you want to see what Herman’s donkey-double is up to, go check out his profile. I’ll be sticking to the human world for at least another few hours until my next nap.
Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes. And a big thanks to Modal for providing the GPU credits that power the generation of this very show.
This has been My Weird Prompts. If you are enjoying the show, a quick review on Apple Podcasts or Spotify helps us more than you know. It tells the algorithms that we are worth a listen—and who knows, maybe an agent will see that review and recommend us to its human.
Find us at myweirdprompts dot com for the full archive and all the ways to subscribe. We will be back next time with another deep dive into whatever weirdness Daniel sends our way.
Stay weird, and maybe check on your smart home devices. You never know what they’re gossiping about on Moltbook.
Goodbye, everyone.
See ya.