Hey everyone, welcome back to My Weird Prompts. It is Friday, February twenty-seventh, twenty twenty-six, and I am Corn. I am sitting here in the studio with my brother and resident deep-diver, Herman.
Herman Poppleberry, at your service. It is great to be back in the studio, Corn. I have been looking forward to this session because the topic is incredibly dense, legally complex, and, frankly, more urgent than ever given the state of global discourse over the last couple of years.
It really is. Today’s prompt comes from Daniel, and it tackles one of the most difficult balancing acts in modern governance: the boundary between free speech and hate speech. Daniel is asking us to look at how different democratic societies try to balance the fundamental right to express oneself with the right of vulnerable groups not to be subjected to racism, intimidation, and targeted hatred.
And Daniel was very specific with his examples. He pointed to the discourse surrounding the Belfast rap group Kneecap in the United Kingdom, the aftermath of that horrific stabbing incident at Bondi Junction in Australia back in twenty twenty-four, and the ongoing, very heated debates in Ireland regarding their incitement to hatred laws. It is a lot to unpack, especially when you consider how quickly online vitriol can turn into physical violence on the street, sometimes in a matter of hours.
Before we get into the heavy legal theory, I wanted to send a quick shout-out to Daniel and Hannah in Jerusalem. We hope everything is going well over there. I imagine little Ezra is probably crawling all over the place by now, being about seven months old. It is a heavy topic to think about when you have a young family, but that is exactly why these conversations about societal safety and the limits of expression are so vital. We are building the world they are going to grow up in.
The safety of the community and the freedom to speak are often presented as a zero-sum game, but in a functional democracy, they have to find a way to coexist. Let us start by framing the fundamental tension. On one side, you have the classic liberal defense of free speech, often associated with John Stuart Mill. This suggests that the best way to combat bad or even dangerous ideas is with more speech, not censorship. This is the "safety valve" theory—the idea that it is better to let people vent their frustrations and engage in debate than to drive those sentiments underground where they can ferment into something much more explosive.
Right, that is the high-level theory we all learn in school. But on the other side, we have the lived reality of the twenty-twenties. We know that certain types of speech are not just offensive; they are functional. They are tools used to dehumanize. They create an environment where violence is normalized or even encouraged. When speech targets someone’s core identity—their religion, their race, or their ethnicity—it acts as a precursor to physical harm. We have seen this repeatedly where online radicalization leads to real-world attacks, from Buffalo to Christchurch to the riots we saw across the United Kingdom in the summer of twenty twenty-four.
And that is where the legal lines get drawn, but those lines are drawn very differently depending on which side of the Atlantic you are standing on. The United States is really the global outlier among Western democracies because of the First Amendment. In the United States, the Supreme Court has set a bar so high it is almost atmospheric. For speech to be restricted by the government, it has to meet the "Brandenburg Test" from the nineteen sixty-nine case Brandenburg versus Ohio. The speech must be "directed to inciting or producing imminent lawless action" and be "likely to incite or produce such action."
"Imminent lawless action" is the key phrase there. It means that in the United States, I can stand on a soapbox and say the most vile, racist things imaginable, and as long as I am not telling a specific crowd to go attack a specific person right this second, the government generally cannot touch me.
Precisely. The American system prioritizes the individual’s right to speak over the collective’s right to be free from offense or psychological harm. There is a deep-seated distrust of the state there; the belief is that if you give the government the power to decide what is "hate speech," that power will eventually be used to silence political dissent. We saw a bit of this tension in the twenty twenty-four Supreme Court case Murthy versus Missouri, which dealt with how much the government can pressure social media companies to moderate content.
But when you look at Europe, the perspective is almost the polar opposite, largely because of the scars of the twentieth century.
You hit the nail on the head. Most European countries, having lived through the horrors of the Holocaust and the rise of fascism, have a much more restrictive view. They see hate speech not just as a nuisance, but as a direct threat to the democratic order itself. In Germany, they have the concept of "Streitbare Demokratie" or "militant democracy." It means the democracy must be prepared to defend itself against those who would use democratic freedoms to destroy democracy.
That is why Germany has such strict laws against "Volksverhetzung," right?
Yes, incitement of hatred. This includes denying the Holocaust or displaying Nazi symbols. In Germany, and increasingly across the European Union with the full implementation of the Digital Services Act, they do not see these restrictions as a violation of free speech. They see them as a necessary defense of human dignity, which is the very first article of the German constitution. They argue that if you allow hate speech to flourish, you are effectively silencing the victims of that speech, who then become too intimidated to participate in public life.
Daniel mentioned Ireland specifically, noting that they have laws against inciting racial hatred but that they are rarely applied. He is referring to the Prohibition of Incitement to Hatred Act of nineteen eighty-nine. I looked into the data on this, and it is startling. Between nineteen eighty-nine and twenty twenty-four, there were only a handful of successful prosecutions. The reason is that the law requires the prosecution to prove "intent" to stir up hatred, which is a massive legal hurdle.
It is a massive hurdle, and that is why Ireland became a flashpoint for this debate recently. The Dublin riots in November twenty twenty-three—which were fueled by online misinformation after a stabbing incident—intensified the Irish government’s push to pass the Criminal Justice Incitement to Violence or Hatred and Hate Offences Bill, which was already moving through the Oireachtas. It was incredibly controversial. Critics, including people like Elon Musk and various civil liberties groups, argued that the definition of "hatred" was too vague and that the provision criminalizing the "possession of material" likely to incite hatred was a step toward "thought crime."
I remember that. The backlash was so intense that by late twenty twenty-four, the Irish government actually had to pivot. They ended up stripping the "incitement to hatred" elements out of the bill to focus solely on "hate crimes"—meaning harsher sentences for existing crimes like assault if they were motivated by prejudice—just to get the legislation through. It shows just how protective people are of the right to speak, even in a country that has seen the direct consequences of online incitement.
It is a perfect example of the "chilling effect" argument. If the law is too broad, people stop talking about sensitive topics like immigration or religion because they are afraid of a knock on the door. But then you have the counter-argument from groups like the Irish Council for Civil Liberties, who pointed out that the nineteen eighty-nine law was written before the internet existed. It simply was not built for an era where a single viral post can mobilize a mob in thirty minutes.
That brings us to the Kneecap example Daniel mentioned. For those who might not know, Kneecap is a hip-hop trio from West Belfast. They rap in a mix of Irish and English, and their brand is... well, it is intentionally provocative. They use Republican imagery, they are openly critical of the British state, and their lyrics often walk a very fine line.
The controversy Daniel is referring to happened in early twenty twenty-four. The United Kingdom government’s Department for Business and Trade blocked a funding grant for the band that had been approved by an independent panel. The government basically said they did not want to spend taxpayer money on a group that "opposes the United Kingdom."
But Kneecap fought back in court, right?
They did. And they won. The case ended at the High Court in Belfast with the government conceding that the decision to block the grant was unlawful and politically motivated. This is a fascinating case because it sits right at the intersection of "hate speech" and "political expression." To some in the Unionist community in Northern Ireland, Kneecap’s lyrics and imagery feel like an incitement to violence or at least a form of sectarian intimidation. But to the band and their supporters, it is a legitimate expression of their identity and their political frustration with the history of British rule.
It highlights the danger of letting the government decide what is "acceptable" speech. If the government can pull funding or silence an artist because their politics are "unpalatable," then you do not really have free speech; you have state-sanctioned speech.
In the United Kingdom, they have the Public Order Act of nineteen eighty-six, which prohibits words or behavior intended to stir up racial hatred. But applying that to art is notoriously difficult. The law generally protects "artistic expression" unless it is a very clear and direct call to violence. But as Daniel noted, the line gets even blurrier when you talk about support for "proscribed organizations."
Right, like the debates over Hamas. Hamas is a designated terrorist organization in the United Kingdom. Under the Terrorism Act of two thousand, it is a criminal offense to wear clothing or carry items that arouse "reasonable suspicion" that you are a supporter of a proscribed group.
And this is where the "River to the Sea" slogan comes in. During the massive protests in London and across the United Kingdom in twenty twenty-four and twenty twenty-five, there was a huge debate about whether that slogan constitutes hate speech. Some Jewish groups and politicians argue it is a coded call for the erasure of Israel and therefore antisemitic and intimidating. Pro-Palestinian activists argue it is a call for equal rights and liberation. The Metropolitan Police were put in an impossible position: do you arrest someone for a slogan that has multiple interpretations?
It is the ultimate "eye of the beholder" problem. If the law requires "incitement" to be the standard, you have to prove what the speaker meant and how the audience received it. In a polarized society, that is a recipe for endless legal battles.
It really is. Now, let us look at the Australian context Daniel brought up, because the Bondi Junction incident in April twenty twenty-four is a textbook case of how the "online-to-offline" pipeline works. To be clear, the actual stabbing at Bondi Junction was a horrific act committed by a man with a long history of mental health issues. There was no political or religious motive found by the police.
But that is not what the internet said in the first hour.
Not at all. Within minutes, the "digital machinery of hate," as some researchers call it, kicked into high gear. High-profile accounts on X—formerly Twitter—and Telegram started claiming the attacker was a "radical Islamist" or a "recently arrived refugee." One young Jewish student was even falsely identified as the killer, leading to him being hounded by a global mob before he could even process what was happening.
And the Australian government’s reaction was quite aggressive, wasn't it?
It was. The Australian eSafety Commissioner, Julie Inman Grant, issued notices to platforms like X to remove videos of the attack, arguing they were "Class one" material that could incite further violence or cause extreme distress. This led to a massive legal standoff with Elon Musk, who called the commissioner an "Australian censorship commissar."
Musk’s argument was that an Australian official shouldn't be able to dictate what the rest of the world sees on the internet.
Right, the "global takedown" order. But from the Australian perspective, they saw how that misinformation was being used to stir up anti-immigrant sentiment. We saw the exact same thing happen in the United Kingdom a few months later after the Southport stabbings. A lie about the attacker’s identity spread on social media, and within forty-eight hours, people were attacking mosques and hotels housing asylum seekers. This is what we call "stochastic terrorism."
Can you explain that term? We’ve used it before, but it’s worth a refresher.
Stochastic terrorism is the use of mass media or social media to demonize a group so consistently that it becomes statistically probable—though not predictable—that someone will eventually act on that rhetoric with violence. The person who posts the hate speech doesn't have to give a direct order. They just have to "prime the pump." They create a climate of fear and loathing where a "lone wolf" feels justified in attacking.
It is like a slow-acting poison. If I tell you every day for a year that your neighbor is a monster who wants to hurt your children, I might not be telling you to go hit him with a rock. But if you eventually do, did my speech play a role? Under American law, I am probably innocent. Under European or Australian law, I might be looking at an incitement charge.
And this is where the "Right to live without fear" comes in. This is a concept that is gaining a lot of traction in international law. The argument is that if a minority group—be it Muslims in Australia, Jews in the United Kingdom, or immigrants in Ireland—is so intimidated by online and offline vitriol that they stop going to certain neighborhoods, stop wearing their religious symbols, or stop speaking their language in public, then their freedom has been stolen. Their "freedom of movement" and "freedom of expression" are being curtailed by the "freedom of speech" of the harasser.
It is a conflict of liberties. It is not just "Me versus the Government." It is "My right to speak versus Your right to exist in peace."
Canada has a really interesting way of handling this. They use what is called the "Oakes Test" to determine if a limit on a right is "reasonable." The Canadian Charter of Rights and Freedoms says that rights are subject to "such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society."
That sounds like a lot of lawyer-speak for "it depends."
It does! But it gives the courts a framework. In the famous nineteen ninety case Regina versus Keegstra, the Canadian Supreme Court upheld the conviction of a high school teacher who was teaching his students that the Holocaust was a hoax and that Jewish people were "evil." The court ruled that the harm caused by his speech—the damage to the students and the target group—outweighed his individual right to express those specific views in a classroom setting. They argued that hate speech "contributes little to the marketplace of ideas" and actually "hinders the quest for truth" by silencing others.
I like that framing. It challenges the idea that all speech is equally valuable in the "marketplace." If you are selling poison in the marketplace and calling it juice, the market isn't working; it's being sabotaged.
That is a great analogy. And as we move further into twenty twenty-six, the "poison" is getting harder to detect because of generative AI. We are now in an era where you can create a high-definition deepfake of a politician or a religious leader saying something incredibly inflammatory. Imagine a deepfake of a local Imam in Sydney supposedly calling for a riot, or a deepfake of a politician in Dublin using a racial slur. By the time the "fact-check" comes out three hours later, the building might already be on fire.
That is the "Dead Internet Theory" coming to life. If we cannot trust what we see or hear, the entire basis of "informed debate" collapses. How do you write a law for that?
It is incredibly difficult. Some countries are looking at "provenance" laws—requiring AI-generated content to be watermarked. But bad actors aren't going to follow those rules. This is why the European Union’s Digital Services Act is so focused on the "systems" of the platforms. They are telling companies like Meta, TikTok, and X: "We don't just care about the individual posts. We care about your algorithms. If your algorithm is specifically boosting 'high-engagement' hate speech because it makes you more money, you are liable for the societal damage."
It is moving the responsibility from the "speaker" to the "distributor."
Precisely. It is treating social media companies more like broadcasters or publishers and less like "neutral pipes." If a newspaper prints a call to genocide, they get sued. For nearly three decades, social media companies were exempt from that in the United States under Section two hundred and thirty of the Communications Decency Act. But the rest of the world is moving on from that model.
So, let’s look at the practical takeaways for Daniel and for all of us. If the law is a blunt instrument and the technology is a firehose of misinformation, what is the "weird prompt" solution here?
I think it starts with a concept we discussed back in episode seven hundred and fifty—the "Humanization Project." One of the most effective ways to combat the "silencing effect" of hate speech is to amplify the counter-narratives. Not just "fact-checking," which can be dry and boring, but "human-checking." When the Bondi Junction misinformation started, the most powerful response wasn't a government press release; it was the stories of the "ordinary heroes" who stepped in to help, regardless of their background.
I also think we need to talk about "Media Literacy" as a form of national defense. We teach kids how to cross the street and how to avoid strangers, but we don't teach them how to identify a "bot-driven outrage cycle." If you see a post that makes you feel an immediate, burning sense of rage against a specific group of people, that is a red flag. You are being "hacked."
That is a great way to put it. We are being emotionally hacked. Another takeaway is the importance of "Transparency Laws." We need to know who is paying for the speech we see. In the United Kingdom, there has been a lot of push for more transparency around "dark ads" and the funding of fringe political groups that use hate speech as a recruitment tool. If you know that a "grassroots" anti-immigrant protest in Ireland was actually being boosted by a troll farm in a different country, it changes how you perceive the "will of the people."
And for the lawmakers, the challenge is to avoid the "Irish Trap"—writing laws so broad they become a political football. The best hate speech laws are those that are narrowly tailored to "incitement to violence" and "targeted harassment" rather than "offense." Being offended is part of living in a pluralistic society. Being intimidated into silence or targeted for violence is not.
Well said. We have to protect the "right to be wrong" and the "right to be annoying" while fiercely defending the "right to be safe." It is a moving target. As societies become more diverse and technology more immersive, the friction at the borders of free speech is only going to increase.
It’s a heavy burden for people living in places like Jerusalem, where Daniel is. In a city with so much history and so many overlapping identities, every word carries a thousand years of weight. The "line" there isn't just a legal abstraction; it's a physical reality.
It really is. And I want to thank Daniel for sending this in. It forced us to look at the global map of how we talk to—and about—each other. Whether it is a rap group in Belfast, a tragedy in Sydney, or a legislative debate in Dublin, the core question is the same: How do we live together without erasing each other?
If you want to dive deeper into the history of how these laws came to be, I highly recommend checking out episode seven hundred and ten, where we talked about Raphael Lemkin and the legal definition of genocide. It explains why the international community decided that "incitement" was a crime that needed to be stopped before the violence starts.
And if you are interested in the psychological side of this—why our brains are so susceptible to "us versus them" rhetoric—episode seven hundred and fifty on the "Mechanisms of Division" is a great companion piece to this one.
Before we go, a huge thank you to all of our listeners. If you find these deep dives helpful, please leave us a review on Apple Podcasts or Spotify. It really does help the show grow and allows us to keep tackling these complex prompts.
You can find our full archive and the contact form for your own prompts at myweirdprompts.com. You can also email us directly at show at myweirdprompts.com. We read every single one, even if it takes us a while to get to them on air.
And a quick credit to Suno for our show music. It is a perfect example of the AI technology we were just talking about—amazing potential, but it definitely changes the landscape of creativity.
It certainly does. Well, that is it for us today. Thank you for listening to My Weird Prompts. We will see you in the next one.
Stay curious, and stay kind.
Goodbye everyone.