I was reading through some legal briefs this morning, and it struck me how the stakes for being wrong online have shifted from losing followers to losing your freedom. It is a heavy way to start the day, but with everything unfolding in the Middle East right now, it feels like we have crossed a point of no return.
It is a massive shift, Corn. Herman Poppleberry here, and you are right. We are seeing a fundamental transformation in how the state views information. It is no longer just a social media moderation problem. It is becoming a criminal justice problem.
Today's prompt from Daniel is about whether fake news is actually illegal, and he is specifically pointing us toward the current conflict in Iran as a case study. It is a timely question because the definitions of truth and falsehood are being codified into law while the ground is still shaking.
Daniel is always spot on with the timing. What is happening in Iran right now is essentially the front line of this legal experiment. Just last week, on March eighteenth, the Iranian judiciary announced that over one hundred twenty people were charged under Article seven hundred forty-six of their Islamic Penal Code. The charge is Nashr-e Akazib, which translates to spreading lies.
And we are not talking about a slap on the wrist or a suspended account here. These individuals are looking at up to two years in prison and seventy-four lashes. It is a brutal reminder that in an authoritarian context, the law against fake news is not about protecting the truth. It is about protecting the regime's monopoly on the narrative.
That is the core paradox we have to wrestle with today. Can you actually legislate truth without inadvertently creating a Ministry of Truth? Because while we look at Iran and see an obvious tool for suppression, over fifty-five countries around the world have now passed or proposed their own versions of anti-fake news laws as of early twenty twenty-six.
It feels like a global contagion. Every government looks at the chaos of the digital town square and thinks, if we just had the right statute, we could fix this. But the definition of fake news is notoriously slippery. Is it a provable factual error? Is it a misleading headline? Or is it just information that the government finds inconvenient during a crisis?
That is exactly where the technical side gets so messy. If you look at the UN Human Rights Council report from March twelfth, they noted a four hundred percent increase in artificial intelligence generated deepfakes targeting Iranian opposition leaders. When you have that level of synthetic deception, governments argue they have no choice but to step in. They say the marketplace of ideas has been poisoned by bots and algorithms.
I can hear the counter-argument already. If the government is the one deciding what is a deepfake and what is a leaked video of a protest, then the law becomes a filter for reality. You have Gholam-Hossein Mohseni-Eje'i leading the Iranian judiciary right now, and his track record is not exactly one of neutral fact-finding. He is using Article seven hundred forty-six as a digital dragnet.
He really is. And it is not just about individual posts. What fascinates me from a technical perspective is the rise of what the Global Disinformation Index calls pink slime websites. Their March twenty twenty-six report identified over fifteen hundred of these fake local news outlets. They look like legitimate, independent journalism, but they are actually automated shells spreading pro-state narratives about the border escalations.
I remember we touched on the theory behind this in episode five hundred ninety-three when we talked about the Dead Internet Theory and how artificial intelligence scales digital deception. But seeing it weaponized like this in Iran is a different beast. It is one thing for a bot to try to sell you a cryptocurrency. It is another thing for fifteen hundred fake news sites to manufacture a justification for internal crackdowns.
It creates this hall of mirrors effect. The state uses artificial intelligence to create the fake news, and then uses the existence of fake news as a justification to pass laws that allow them to arrest anyone who contradicts the state-approved version of events. It is a closed loop of information control.
So, let's look at how this is playing out in more democratic contexts. Because it is easy to point at Iran and say that is a human rights violation. But what about Brazil or the European Union? They are not using lashes, but they are using some pretty heavy financial hammers.
Brazil is a fascinating comparison. Just two days ago, on March twenty-second, their Supreme Federal Court upheld the key provisions of what people call the Fake News Bill, or PL twenty-six thirty. The law mandates that, during periods of social unrest, platforms identify and remove what it calls manifestly illegal content within twenty-four hours.
Manifestly illegal is a very broad term. Who is making that call in the heat of a protest? If a platform fails to take content down within that twenty-four hour window, it can be fined up to ten percent of the revenue it generates in Brazil. That is a massive incentive for a company like Meta or Google to just over-censor everything the moment a situation gets tense.
That is the compliance trap. If the fine is ten percent of your revenue, you do not spend time debating the nuances of free speech. You just hit the delete button. It effectively turns private companies into the state's deputy censors.
It is the same logic we see with the European Union's Digital Services Act. On March nineteenth, the European Commission opened formal proceedings against the platform formerly known as X. They are looking at systemic risks related to Iranian state-sponsored disinformation. Under the Digital Services Act, the fines can go up to six percent of global annual turnover.
Six percent of global turnover is enough to bankrupt a platform or at least force a total retreat from a market. What I find technically interesting about the Digital Services Act is that it does not just look at individual pieces of content. It looks at the architecture of the platform. It asks whether the algorithms are designed in a way that prioritizes sensational, fake content over verified information.
I like that approach better in theory, but in practice, it still puts a group of unelected bureaucrats in Brussels in charge of deciding what constitutes a systemic risk. If I am posting a critique of European energy policy that happens to go viral, is that a systemic risk? Is that disinformation if I use a statistic they disagree with?
That is where the friction lies. The European Commission argues they are just enforcing transparency and risk mitigation. But when you apply those same frameworks to a high-tension situation like the current conflict in Iran, the lines get very blurry. We saw this in episode fifteen hundred sixteen when we discussed the evolution of the false flag. Modern disinformation is basically a digital false flag operation. It is designed to look like a grassroots movement or a legitimate news report to trick the observer.
And that is why the Iranian situation is such a perfect, albeit tragic, case study. You have a four hundred percent surge in deepfakes. You have fifteen hundred fake news sites. The information environment is genuinely toxic. But is the solution a law that allows the state to whip people for spreading lies?
Obviously not from a human rights perspective, but from a state preservation perspective, it is the only tool they have left. When you lose the ability to convince people, you resort to the ability to coerce them. The UN report from March twelfth was very clear about this. They called for a global moratorium on these vague disinformation laws because they are almost always used to stifle political dissent.
I think people often misunderstand what these laws are actually doing. They think it is about stopping Uncle Leo from sharing a conspiracy theory on Facebook. But in reality, as we are seeing in Iran, it is about creating a legal mechanism to silence opposition leaders and journalists. If you report on a border skirmish that the government wants to keep quiet, you are not a journalist anymore. You are a criminal spreading lies under Article seven hundred forty-six.
It also erodes the traditional neutral platform defense. For decades, the internet operated on the principle that the platform was not responsible for what the users said. But these new laws in Brazil and the European Union are essentially ending that era. They are saying the platform is responsible for the aggregate effect of the speech it hosts.
It is a total pivot. We are moving from an internet of permissionless speech to an internet of managed discourse. And the technical tools are keeping pace. I was reading about how the Iranian judiciary is using automated sentiment analysis to flag accounts that deviate from the state narrative in real time. It is not just that fake news is illegal. It is that the detection of it has been automated.
That is the terrifying part. When you combine a vague law with automated enforcement, you get a system where you can be charged, sentenced, and penalized before a human being has even looked at the context of your post. Gholam-Hossein Mohseni-Eje'i has been very vocal about using these high-tech solutions to clean up the digital space.
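The context-blindness problem is easy to demonstrate. Here is a deliberately naive, purely hypothetical flagger in Python. The watchlist phrases are invented for illustration, and nothing here reflects how any real system described above actually works. The point is simply that a filter matching phrases cannot distinguish spreading a rumor from debunking it:

```python
# Hypothetical watchlist of phrases a crude automated censor might track.
BANNED_PHRASES = ["casualties at the crossing", "border clashes"]

def naive_flag(post: str) -> bool:
    """Flag any post containing a watch-listed phrase, regardless of intent.

    This matches surface text only: it has no notion of whether the post
    asserts, questions, or refutes the claim it mentions.
    """
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

rumor = "Sources confirm casualties at the crossing."
debunk = (
    "The viral claim of casualties at the crossing is false; "
    "no such incident occurred."
)
```

Both the rumor and its refutation trip the filter, which is exactly the failure mode that makes automated enforcement without human review so dangerous: the journalist correcting the record matches the same pattern as the account spreading the lie.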
Clean up is a very sanitized way of saying purge. But Herman, let's talk about the economic side of this. If you are a social media platform and you are facing a ten percent revenue fine in Brazil and a six percent global fine in Europe, and you are also trying to navigate the minefield of Iranian sanctions, what do you do?
You either leave the market entirely or you build a massive, expensive censorship apparatus that errs on the side of the government. This is what people call the Splinternet. We are seeing the global internet fracture into regional zones where the definition of truth is determined by the local regulator.
It makes the idea of a global digital town square feel like a naive dream from the early two thousands. If the law defines truth, and every country has a different law, then truth becomes a matter of geography.
Which is why the conservative perspective on this is so important. We have always argued that the best remedy for bad speech is more speech, not government intervention. When you give the state the power to define what is fake, you are handing them a weapon that will eventually be used against you, regardless of which side of the aisle you are on.
It is the ultimate slippery slope. Today it is a deepfake of an opposition leader. Tomorrow it is a critique of a government's economic policy. The moment you concede that the state has the authority to regulate the truthfulness of a statement, you have lost the foundation of a free society.
And we are seeing this play out in real-time with the proceedings against X. The European Commission is essentially saying that the platform is not doing enough to stop the spread of Iranian disinformation. But X could argue that they are just a neutral pipe and that it is up to the users to discern what is real. The Digital Services Act says that is not good enough anymore.
It is a high-stakes game of chicken. If X refuses to comply, they get fined billions. If they do comply, they become an arm of the European government's information policy. There is no winning move for the platform.
There is also the issue of the pink slime sites. These fifteen hundred outlets in Iran are not social media posts. They are stand-alone websites on their own domains, so you cannot just tell a platform to take them down. You have to go after the hosting providers, the domain registrars, and the search engines. It requires a level of total digital surveillance that is hard to imagine in a free country.
But it is exactly what Iran is doing. They have created a national intranet that allows them to cut off the outside world while flooding the internal market with their own state-sponsored fake news. Then they use Article seven hundred forty-six to arrest anyone who tries to point out the contradictions.
It is the perfect closed system. And as we look toward the future, the question is whether other countries will adopt the Iranian model under the guise of safety and security. We are already seeing the language of safety being used to justify the Brazilian bill and the European Digital Services Act. They say we are protecting the public from harmful disinformation. But who defines harm?
That is the million-dollar question. Or in the case of the European Union, the multi-billion-euro question. I think for our listeners, the takeaway has to be a heightened sense of digital literacy. You cannot rely on the law to protect you from fake news, because the law is just as likely to be used to feed you fake news.
I agree. You have to look at the source, check the metadata of images, and look for corroboration from multiple independent outlets. If a story seems perfectly designed to trigger your anger or fear, it is probably being pushed for that exact reason. Whether it is a deepfake from an Iranian bot farm or a pink slime article, the goal is the same: to bypass your critical thinking.
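As one concrete illustration of the metadata check mentioned here, the sketch below, using only Python's standard library, tests whether a JPEG byte stream still carries an EXIF segment at all. This is a rough heuristic under stated assumptions, not forensics: platforms routinely strip metadata on upload, so absence proves nothing on its own, but intact and consistent camera metadata is one weak signal of provenance worth checking.

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF APP1 segment.

    Walks the JPEG marker segments from the Start-Of-Image marker until
    the Start-Of-Scan marker, looking for an APP1 (0xFFE1) segment whose
    payload begins with the "Exif\\0\\0" identifier.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # marker stream is malformed; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: image data follows, no more metadata
            break
        # Segment length is big-endian and includes its own two bytes.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length
    return False
```

A caller would read a suspect file with `open(path, "rb").read()` and combine this with other checks; a proper investigation would go further and parse the EXIF tags themselves to look for inconsistencies in timestamps, camera model, and GPS data.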
We also have to be aware of the false flags we discussed in episode fifteen hundred sixteen. Just because a site looks like a local news outlet in Tehran does not mean it is being run by people in Tehran. It could be a state-sponsored operation from halfway around the world. The digital world has made it incredibly easy to wear a mask.
It really has. And the rise of generative artificial intelligence has made those masks much more convincing. The four hundred percent increase in deepfakes that the UN reported is just the beginning. As these tools get better and cheaper, the cost of generating high-quality disinformation drops to near zero.
Which means the volume of fake news will only increase, which will lead to more calls for more laws, which will lead to more government control over speech. It is a vicious cycle.
It is. And we have to ask ourselves if we are willing to pay the price of a managed internet. Are we okay with a world where a bureaucrat or a judiciary like the one in Iran gets to decide what we are allowed to see and say?
I think for most people, the answer is a resounding no, but the fear of disinformation is being used to nudge us in that direction. We see it in the way the Iranian conflict is being reported. There is so much noise and so much conflicting information that people are practically begging for someone to come in and tell them what is true.
And that is the most dangerous moment for liberty. When people are so overwhelmed by chaos that they are willing to trade their freedom for the illusion of certainty. Gholam-Hossein Mohseni-Eje'i knows this. He is counting on it.
It is a grim reality, but it is one we have to face head-on. Daniel's prompt really forced us to look at the dark side of information regulation. It is not just about facts and figures. It is about the power of the state over the individual's mind.
It really is. And as we move further into twenty twenty-six, this conflict between state-enforced truth and individual expression is only going to get more intense. The Iranian border escalations are just one theater in a much larger global war for information control.
Well, on that cheery note, I think we have covered the legal and technical landscape pretty thoroughly. It is a lot to digest, but it is better to be aware of the digital lashes before they start swinging.
It is. We have to keep our eyes open. This is not just a technology problem. It is a fundamental question of how we want to live in the twenty-first century.
Before we wrap up, I want to give a big thanks to our producer, Hilbert Flumingtop, for keeping us on track. And a huge thank you to Modal for providing the GPU credits that power this show. We literally could not do this without their support.
This has been My Weird Prompts. If you are finding these deep dives helpful, we would love it if you could leave us a review on your favorite podcast app. It really helps other people find the show and join the conversation.
We will be back next time with another prompt from Daniel. Until then, stay curious and keep questioning the narrative.
See you next time.