Imagine you have a beautifully configured Home Assistant instance running in a Docker container. You have it exposed via a Cloudflare Tunnel so you can toggle your lights from the train, and you feel pretty secure because, hey, no open ports on the router, right? But then the nightmare scenario hits. An attacker finds a zero-day in a custom integration you installed, or they phish a credential. They are in. They have shell access to that container. Now, the million-dollar question: Can they reach your media server? Your NAS? Can they sniff the traffic on your entire home network or start pivoting to your work laptop?
That is the blast radius problem in a nutshell, and it is something we see ignored far too often in the self-hosting community. People treat the tunnel like a magical shield, but once that shield is pierced, the internal landscape is often a playground for lateral movement. By the way, quick heads-up for the listeners, today’s episode of My Weird Prompts is actually being powered by Google Gemini one point five Flash.
I hope Gemini knows its way around a kernel namespace, because Daniel’s prompt today is a deep one. He’s asking us to look past the "just use a VPN" or "just use a tunnel" advice and actually get into the weeds of Linux isolation. How do we sandbox these services—especially tricky ones like home automation that actually need to talk to other things like MQTT brokers—without giving them the keys to the kingdom?
It is a great challenge from Daniel. Herman Poppleberry here, ready to dive into some documentation. The core of this is moving from a perimeter security mindset to a zero trust micro-segmentation mindset, even in a home lab or a small business setup. We have to assume the container will be compromised and then build walls inside the host to ensure that the compromise stays as a minor nuisance rather than a total catastrophe.
Before we get into the heavy Linux internals, let’s define that blast radius for the self-hoster. If I am running an MQTT broker, a Zigbee-to-MQTT bridge, Home Assistant, and maybe a Plex server, all on one big Linux box or a cluster of Raspberry Pis, what does a breach actually look like if I haven't done my homework?
If you’re just running standard Docker Compose with default settings, a breach of one container often means the attacker has a starting point to scan your entire internal subnet. They see every other container on that Docker bridge. They might even be able to see your host’s internal IP and start hitting SSH ports or unauthenticated management interfaces for your router. Lateral movement isn't just about hacking another service; it's often just about using the legitimate, wide-open network paths you left behind because it was easier to set up that way.
Right, the "it just works" tax. You want Home Assistant to talk to the broker, so you put them on the same network, but you also end up putting your database and your file browser on that same network because you were lazy with your YAML file. Suddenly, the light switch app has a direct line to your tax returns.
That is the reality of it. And Daniel mentioned something crucial: it’s not just about networking. It’s about direct service-to-service connections. If Home Assistant has a long-lived access token to your MQTT broker, and that broker is also used by your security cameras or your front door lock, then compromising Home Assistant gives the attacker full control over the broker. They don't even need to hack the broker; they just use the stolen credentials you gave the first service.
But wait, how deep does that rabbit hole go? If they get that MQTT token, can they actually spoof messages? Like, could they send a door unlocked command that looks like it came from my phone?
They certainly could. If you haven't implemented Access Control Lists—ACLs—the broker just sees a valid token and says "Sure, I'll execute that." This is the danger of flat authentication. It’s like giving a valet the keys to your entire house just because they need to park your car. Once they have the key, the house is theirs. In a home automation context, an attacker could theoretically cycle your smart plugs until they overheat or just sit quietly and monitor your motion detected events to figure out exactly when you aren't home.
That is a chilling thought. It turns your convenience into a surveillance tool for the bad guys. So, let’s talk about the walls we can build. You’re always talking about namespaces. I know they’re the foundation of containers, but I think most people just treat Docker as the unit of isolation. What’s actually happening under the hood in Linux that we should be leveraging?
This is where it gets fascinating. Docker is really just a wrapper around several Linux kernel features, primarily namespaces and cgroups. Namespaces are the magic that makes a process believe it’s the only thing running on the system. There are eight of them now—mount, PID, network, IPC, UTS, user, cgroup, and time. You have the PID namespace, which means a process in a container can’t see processes in other containers or on the host. If an attacker runs a process list command, they only see their own compromised process, not your backup script or your SSH daemon. And the Network namespace gives each container its own private network stack—its own interfaces, routing table, and firewall rules.
But wait, if they have their own network stack, how do they talk to the host or each other?
Through virtual ethernet pairs—veth pairs. One end is in the container’s namespace, the other is attached to a bridge on the host. But here is the thing: by default, Docker puts everything on one big bridge. If you want to reduce the blast radius, you need to be surgical with your Network namespaces. But beyond networking, people often forget the Mount namespace. This is what keeps a container from seeing your host’s root filesystem. If you've ever done a volume mount of the Docker socket—slash var slash run slash docker dot sock—into a container, you have effectively deleted your Mount namespace isolation. An attacker in that container can now talk to the Docker daemon and start new containers with root access to your entire hard drive.
I see that docker dot sock mount in so many tutorials for things like Portainer or Watchtower. Are you saying those are inherently dangerous?
They are high-risk. Think of the Docker socket as the God Mode API for your server. If you mount it into a container that is exposed to the internet, you are one vulnerability away from losing the whole machine. If you must use it, you should look into something like a socket proxy that filters the commands, or better yet, use a tool that doesn't require it, like Podman, which is daemonless and handles things differently.
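To make the socket-proxy idea concrete, here is a Compose sketch. The `tecnativa/docker-socket-proxy` image is one commonly used filtering proxy; the environment flags follow that image’s convention and the consumer service is hypothetical, so verify against the proxy’s own documentation before deploying.

```yaml
# Sketch: hand tools a filtered proxy instead of the raw Docker socket.
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1        # allow listing containers (read-only)
      POST: 0              # deny all state-changing API calls
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks: [socket-net]

  updater:
    image: example/updater # hypothetical consumer (a Watchtower-style tool)
    environment:
      DOCKER_HOST: tcp://socket-proxy:2375
    networks: [socket-net]

networks:
  socket-net:
    internal: true         # the proxy itself never needs internet access
```

The consumer only ever sees the filtered TCP endpoint; the raw socket stays read-only inside one tightly scoped container.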
That sounds like a blast radius the size of a small moon. So, rule number one: don't mount the Docker socket unless you absolutely have to, and if you do, you’d better have a very good reason and some extra layers of protection. What about User namespaces? I hear those are the gold standard for security but people find them hard to set up.
User namespaces are vital because they allow a process to be root inside the container but map to a non-privileged user on the host. If an attacker breaks out of the container but you're using User namespaces, they land on your host as nobody or some random high-UID user with no permissions. Without it, if they find a kernel exploit and break out, they are root on your actual hardware. It’s one of those things that used to be a pain in the neck to configure, but modern runtimes like Podman or even Docker with the right daemon config make it much more accessible now.
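On Docker, User namespace remapping is switched on in the daemon config rather than per container. A minimal sketch of `/etc/docker/daemon.json`—restart the daemon afterwards, and note that existing containers end up in a fresh, remapped storage path:

```json
{
  "userns-remap": "default"
}
```

With `"default"`, Docker creates a `dockremap` user and maps container root to an unprivileged UID range on the host. Rootless Podman gives you the same property out of the box.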
Okay, so we’ve got namespaces for visibility isolation. What about resource isolation? I’m thinking about a denial of service attack where a compromised service just eats all my RAM and crashes the whole server.
That’s where cgroups—control groups—come in, specifically cgroups v2, which landed back in kernel four point five and has been the default on mainstream distros for years now. Cgroups allow you to put a hard ceiling on CPU, memory, and I/O. If your home automation container starts a crypto-miner after being hacked, cgroups will ensure it only gets, say, ten percent of one CPU core and two hundred megs of RAM. It keeps the rest of your system alive so you can actually log in and kill the compromised process. Without cgroups, one bad container can take down the host through resource exhaustion.
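In Compose, those ceilings are one-liners. A sketch with placeholder values:

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    cpus: "0.5"        # at most half of one core
    mem_limit: 512m    # hard memory ceiling; the OOM killer fires inside the cgroup
    pids_limit: 200    # caps fork bombs as well
```

If the service gets compromised and starts mining, it throttles itself instead of starving the host.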
I’ve seen those oom-kill messages in logs before. It’s not pretty. Now, let’s get into the permission layer. Namespaces hide things, cgroups limit resources, but what about what the process is actually allowed to do with the kernel? I’m thinking about seccomp and AppArmor.
Now we’re talking. This is the Mandatory Access Control layer. Seccomp, or Secure Computing mode, is a kernel feature that filters system calls. Think of it as a firewall for the kernel API. A typical Linux kernel has over three hundred system calls. A web server or a home automation app probably only needs about forty or fifty of them to function.
So why give them the other two hundred and fifty?
Precisely. Docker’s default seccomp profile is actually pretty good—it blocks about forty-four of the most dangerous calls—but it still leaves roughly three hundred exposed. If an attacker wants to use a specific kernel exploit that requires a niche system call, and you’ve blocked that call with a custom seccomp profile, the exploit just fails. It’s a very powerful way to harden a sandboxed service.
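A custom profile is just JSON. This sketch takes the allowlist approach with a deliberately tiny set of calls—a real service needs considerably more, so treat the list as illustrative and build yours by tracing the app (for example with strace):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "fstat",
                "mmap", "brk", "futex", "epoll_wait", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Apply it with `security_opt: ["seccomp=./profile.json"]` in Compose; anything outside the list returns an error to the process instead of ever reaching the kernel.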
And AppArmor? Is that the same thing?
Similar goal, different mechanism. AppArmor and SELinux are about path-based and label-based permissions. AppArmor lets you say, "This process can only read from this specific folder and can never execute anything in slash tmp." This is huge for preventing lateral movement. If an attacker downloads a script to slash tmp and tries to run it, AppArmor blocks the execution because that behavior isn't in the profile for that service.
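An AppArmor profile for a containerized service can be a handful of lines. This is a hypothetical sketch—the profile name, paths, and flags are placeholders; load it with `apparmor_parser -r` on the host, then attach it via `security_opt: ["apparmor=docker-myservice"]`:

```
profile docker-myservice flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  /config/** r,            # read its own config tree
  /config/state/** rw,     # write only to its state directory

  deny /tmp/** x,          # no executing dropped payloads
  deny /var/tmp/** x,
}
```

That last pair is the lateral-movement blocker Herman describes: a script dropped into slash tmp simply will not execute.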
It sounds like we’re building a series of concentric circles around the service. But Daniel’s specific point was about the lateral movement through direct connections. Let’s look at the MQTT example. Home Assistant needs to talk to the broker. If I use a standard Docker network, and the broker is exposed to Home Assistant, how do I stop an attacker from using that connection to mess with my Zigbee devices or my smart locks?
This is where we move from Linux internals to architectural patterns. The first step is Network Segmentation at the Docker level. You shouldn't have one home-automation network. You should have a frontend network for the UI, a backend network for the database, and a broker network specifically for the MQTT traffic.
But doesn't that just move the problem? Home Assistant still has to be on the broker network to talk to it.
Yes, but you combine that with Identity-First Access. In the old days, we just trusted anything on the local network. In twenty twenty-six, that is a recipe for disaster. For MQTT, you should be using per-client credentials and, more importantly, ACLs—Access Control Lists. You don't give Home Assistant a root account for the broker. You give it a user that can only publish to the lights topic and subscribe to the sensors topic. If that container is hacked, the attacker can't suddenly send a factory reset command to your smart lock if the broker’s ACLs forbid that user from talking to the locks topic.
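With Mosquitto, this is a password file plus an ACL file. Usernames and topic names here are examples:

```
# mosquitto.conf
allow_anonymous false
password_file /mosquitto/config/passwd
acl_file /mosquitto/config/acl

# acl — each user sees only its own slice of the topic tree
user ha-user
topic write home/lights/#
topic read home/sensors/#

user lock-controller
topic readwrite home/locks/#
```

A stolen `ha-user` token can flip lights and read sensors—and nothing else. The locks topic is simply out of reach.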
That makes a lot of sense. It’s the Principle of Least Privilege applied to the data layer. But what about the network level? Can we use Linux tools to enforce that only MQTT traffic is allowed between these two containers, even if they are on the same virtual network?
We can. You can use nftables or the older iptables on the host to inspect traffic between containers. But a more modern way to do this is using something like Cilium or other eBPF-based networking tools. eBPF is a massive trend right now because it allows you to run sandboxed programs inside the kernel to monitor and filter network traffic with almost zero overhead. With eBPF, you can write a rule that says, "The Home Assistant container can only talk to the MQTT container on port eighteen-eighty-three, and only using the TCP protocol." If the attacker tries to use that network path to scan for a web interface on the broker’s container, the kernel just drops the packets.
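In nftables, the rule pair Herman describes might look like this. The addresses are placeholders, and Docker installs its own forward-chain rules, so check chain priorities on your distro before relying on it:

```
table inet homelab {
  chain forward {
    type filter hook forward priority 0; policy accept;

    # HA (172.20.0.10) may reach the broker (172.20.0.20) on MQTT only
    ip saddr 172.20.0.10 ip daddr 172.20.0.20 tcp dport 1883 accept
    # any other traffic between the pair is dropped
    ip saddr 172.20.0.10 ip daddr 172.20.0.20 drop
  }
}
```

Same idea as the eBPF version, just enforced at the classic netfilter layer.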
I remember we talked about eBPF in a previous context, and it really is like a superpower for Linux admins. It’s like having a programmable firewall that lives inside the brain of the OS. But for someone sitting at home with a simple Docker Compose file, is eBPF overkill? What’s the low-hanging fruit for them?
The low-hanging fruit is definitely Docker Networks and internal flags. Many people don't realize you can mark a Docker network as internal: true, which means containers on that network have no access to the outside world—no internet, no host network. They can only talk to each other. If you have a database container, it should be on an internal-only network with your app. Period. The database has no reason to talk to the internet. If an attacker gets into the app, they can talk to the database, but they can't exfiltrate your data to a server in Eastern Europe because the database container has no egress path.
Does that internal flag stop the container from even pinging the gateway?
It essentially cuts the cord to the bridge’s exit point. The container can still see its neighbors on that specific virtual wire, but it’s blind and deaf to the rest of the world. It’s a very clean way to ensure that a compromised database can’t phone home to a command-and-control server.
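The whole pattern fits in a few lines of Compose—image names are placeholders:

```yaml
services:
  app:
    image: example/app
    networks: [frontend, backend]
  db:
    image: postgres:16
    networks: [backend]       # never on a network with a gateway

networks:
  frontend: {}
  backend:
    internal: true            # no egress: app <-> db and nothing else
```

A popped database on `backend` can talk to `app` and to nobody else—no internet, no host, no phone-home.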
That’s a great tip. Internal networks are such an easy win. What about Cloudflare Tunnels specifically? Daniel mentioned them as the door. Is there a way to harden the tunnel itself?
Yes, and this is where the Shadow Tunneling threat Daniel mentioned in his notes comes in. Attackers are starting to use these tunnels against us. They’ll get into a system and then start their own outbound tunnel to bypass your firewall. To prevent this, you need strict egress filtering. Your server should only be allowed to talk to Cloudflare’s specific IP ranges. If another process tries to open a tunnel to some random ngrok clone, your firewall should bark.
It’s funny how the very tools we use for convenience—like tunnels that bypass NAT—are the exact same tools an attacker loves to use to maintain persistence. It’s a double-edged sword. Let’s talk about lateral movement through the filesystem. We touched on the Docker socket, but what about shared volumes? I see a lot of people sharing a config folder between multiple containers.
That is a classic mistake. If your media server and your download client share the same config parent folder, and the download client gets popped, the attacker can now modify the config files or even the startup scripts of your media server. You should always use the most granular volume mounts possible. Don't mount slash data. Mount slash data slash downloads. And whenever possible, use read-only mounts. If a container only needs to read a config file and never write to it, mount it as r-o. It’s a simple flag in your Compose file that prevents an attacker from overwriting your config with malicious code.
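In Compose terms—the paths are examples, and `read_only: true` with a tmpfs takes it one step further by making the whole root filesystem immutable:

```yaml
services:
  mediaserver:
    image: example/media
    volumes:
      - ./media/config:/config          # narrow, writable: its own config only
      - ./data/downloads:/downloads:ro  # read-only view of the shared data
    read_only: true                     # root filesystem immutable
    tmpfs:
      - /tmp                            # scratch space that vanishes on restart
```

An attacker in this container can read the downloads but cannot trojan a startup script, because there is nowhere persistent to write one.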
Read-only is another one of those simple things that people skip because they’re in a rush. I’m guilty of it too. You just want the dashboard to show up, so you give it all the permissions. But if we’re talking about minimizing the blast radius, these tiny decisions are the bricks in the wall. Herman, what about monitoring? If I’ve built all these walls, how do I know if someone is currently banging on them?
This is where detection meets isolation. In the Linux world, tools like Falco are incredible. Falco uses kernel modules or eBPF to watch for suspicious behavior based on a set of rules. For example, if a container that is supposed to be a Home Assistant instance suddenly starts s-h or bash and tries to write to slash etc, Falco will fire an alert. It’s essentially an Intrusion Detection System for containers. If you’re serious about sandboxing, you need something that tells you when the sandbox is being probed.
Is Falco something a beginner can handle, or is it strictly for the sysadmin gurus?
It has a learning curve, but the default rule set is actually very usable. It’ll catch things like "a shell was spawned inside a container" or "a sensitive file was read by an unexpected process." Even if you don't know how to write your own rules yet, just having those alerts piped to your phone can give you that "hey, something is wrong" moment before the attacker can do real damage.
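A Falco rule is plain YAML. This sketch uses standard Falco fields (`container.name`, `proc.name`), but the container name is a placeholder and the condition should be tuned before you trust the alerts:

```yaml
- rule: Shell spawned in Home Assistant container
  desc: An interactive shell started where none should ever run
  condition: >
    spawned_process and container
    and container.name = "homeassistant"
    and proc.name in (sh, bash, zsh)
  output: "Shell in HA container (user=%user.name cmd=%proc.cmdline)"
  priority: WARNING
```

Pipe that to your phone and you get the “something is wrong” moment Herman mentioned, usually within seconds of the probe.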
So, let’s bring this back to Daniel’s home automation example. He’s got Home Assistant, an MQTT broker, maybe some smart plugs. Let’s walk through the ultimate isolation setup for that.
Okay, here is the blueprint. First, Home Assistant runs in its own User namespace, so it’s not root on the host. It’s in a Network namespace that only has two connections: one to the Cloudflare Tunnel container—via a very specific, isolated Docker network—and one to the MQTT broker container on a separate broker-only network.
Okay, so it’s a middle-man with two separate network cards, basically.
Right. Next, we apply a custom seccomp profile that blocks execve—preventing it from running new programs—and socket calls it doesn't need. We mount its config folder as read-only for everything except the specific d-b file it needs to write to. Then, at the MQTT level, we use a non-default port, enforce TLS—even internally—and set up an ACL that says "The user H-A-User can only see topics starting with home slash state."
And if I’m the attacker, and I get into that Home Assistant container?
You’re stuck. You can’t see the host’s files. You can’t map out the other containers, because there’s no ping or nmap in the image, and you can’t download them, because the network has no egress to the internet. You can’t even mess with the smart locks, because the MQTT broker won’t let your user publish to the lock topic. Your blast radius is confined to a single, crippled container with no way out.
That sounds like a very lonely place for an attacker. I love it. But let’s be real, Herman, this sounds like a lot of work for a guy who just wants his porch lights to turn on at sunset. Is there a middle ground?
There is. The middle ground is Network Segmentation and Least Privilege. If you do nothing else, just stop using the default Docker bridge. Create separate networks for your public-facing stuff and your private stuff. And for the love of all that is holy, use different passwords for every service. Lateral movement is often just an attacker finding a password in one config file and realizing it works for every other service you run.
The one password to rule them all strategy. It’s the ultimate gift to a hacker. I want to go back to something Daniel mentioned in his notes—the idea of Unauthorized Tunneling. Cloudflare’s threat report for twenty twenty-six says over fifty percent of ransomware attacks started with credential theft. If someone steals my Cloudflare Zero Trust credentials, they don't even need to hack my Linux box. They just log in as me.
That’s the Identity layer of the blast radius. This is why you should always use Multi-Factor Authentication—MFA—on your Cloudflare dashboard, and even better, use Access Policies. You can set a rule that says, "To access my Home Assistant tunnel, you must not only have the password but you must also be connecting from a specific country or have a specific hardware security key."
It’s layers, all the way down. I think people often get overwhelmed and do nothing, but Daniel’s prompt reminds us that even home systems are targets now. Attackers are using automated factories to scan for these tunnels. If you’re exposing a machine, you are on the front lines, whether you like it or not.
And the Shadow Tunneling point is so important. Imagine an attacker gets a foothold on your media server. They don't try to hack your router. They just run a tiny cloudflared or ngrok binary that creates an outbound connection to their own Cloudflare account. Now they have a permanent, encrypted tunnel into your house that bypasses your firewall entirely. To a casual observer, it just looks like H-T-T-P-S traffic to Cloudflare.
That’s terrifying. How do you stop that on a Linux box?
Egress filtering. You have to be as strict about what leaves your server as what enters it. In a perfect world, your containers should have no internet access unless they explicitly need it to fetch updates. And even then, you can use a DNS Filter like Pi-hole or AdGuard Home to block known tunneling service domains. If your Zigbee-to-MQTT container tries to resolve tunnel dot something dot com, and your DNS says no, the attack is stalled.
Wait, I use Pi-hole! I never thought of it as a security tool for my containers, I just wanted to block ads on my TV.
It’s one of the best poor man's firewalls out there. If you point your Docker daemon’s DNS settings to your Pi-hole, you can see every single domain your containers are trying to talk to. If your Weather Underground integration starts trying to resolve bitcoin-pool-dot-xyz, you’ve got a problem. It gives you visibility into the blast radius that most people completely lack.
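Pointing the Docker daemon at a Pi-hole is one line in `/etc/docker/daemon.json`—the IP is a placeholder for your Pi-hole’s address, and the daemon needs a restart afterwards:

```json
{
  "dns": ["192.168.1.5"]
}
```

From then on, every container’s DNS lookup flows through the Pi-hole’s logs and blocklists.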
It’s a bit of a cat-and-mouse game, but the tools in Linux—if you actually use them—are incredibly robust. I think the takeaway for me is that isolation isn't a checkbox; it's a design philosophy. You’re trying to make every service an island.
And those islands can have bridges, but the bridges should have guards and checkpoints. Don't just build a land bridge because it’s faster to walk across. Daniel’s interest in this is spot on for twenty twenty-six, especially as we see more Nation-State actors using these same residential proxy networks to hide their tracks. They want to hide in your home network because your IP address looks trusted to the rest of the world.
Trusted until it starts launching d-dos attacks or sending spam. I’ve seen some people suggesting Micro-VMs like Firecracker as a step up from Docker for isolation. What’s your take on that for a home setup?
Firecracker is amazing—it’s what powers AWS Lambda. It gives you the speed of a container with the isolation of a full Virtual Machine. It uses the KVM hypervisor to create a very minimal VM in about a hundred milliseconds. If you are running something really untrusted—like a script you found on a random forum or a service that is known to be buggy—running it in a Micro-VM is the gold standard. It provides a much harder hardware-level boundary than namespaces do.
But I’m guessing it’s not as easy as docker-compose up.
Not yet, though projects like Kata Containers are trying to make it that simple by allowing you to run O-C-I-compliant containers inside these Micro-VMs. It’s definitely the direction the industry is heading for high-risk workloads. But for most of us, just getting our User namespaces and Network segmentation right would put us in the top one percent of secure home-hosters.
Let’s talk practical takeaways for the folks listening. If they’ve got a weekend and want to harden their setup, where should they start?
Step one: Audit your Docker Compose files. Look for privileged: true, network_mode: host, or mounts to the Docker socket. If you see those, ask yourself if you really need them. Most of the time, you don't. For instance, Home Assistant often asks for host network mode to discover devices. You can often replace that with an m-D-N-S reflector or just by manually adding IP addresses.
Host mode basically gives the container the keys to your entire network interface. It's the ultimate lazy configuration.
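Step one can be mostly automated. A rough grep over your Compose files—the patterns are a starting point, not an exhaustive scanner (quoted values like `network_mode: "host"` would need a looser pattern):

```shell
# Flag the three riskiest Compose settings under the current directory.
grep -RnE 'privileged: *true|network_mode: *host|docker\.sock' \
  . --include='*.yml' --include='*.yaml' \
  && echo 'Review the matches above.' \
  || echo 'No obvious risky settings found.'
```

Run it from the directory holding your stacks; every hit deserves a “do I really need this?” moment.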
Step two: Move your services onto dedicated Docker networks. Put your public stuff in one, your private stuff in another, and only connect them where absolutely necessary. Step three: Add internal: true to any network that doesn't need to talk to the internet. If your database only needs to talk to your app, put them on an internal network. Step four: Look into seccomp and AppArmor. Even just using the default profiles is good, but you can find hardened profiles online for common services like Nginx or Home Assistant.
And step five?
Monitoring. Install something like c-top or glances to keep an eye on resource usage, and if you're feeling adventurous, set up Falco or even just a simple log-aggregator like Loki to look for permission denied errors. A sudden spike in permission denied often means an attacker—or a very confused script—is trying to do something it’s not allowed to do.
I love the permission denied as an early warning system. It means your walls are working! Before we wrap up, Herman, what’s the one weird trick or the deep cut for Linux security that most people miss?
The no-new-privileges flag. It’s a simple line you can add to your Docker config: security_opt: no-new-privileges colon true. This prevents a process from ever gaining more permissions than it started with—even if it finds a setuid binary or a vulnerability that would normally allow it to escalate to root. It’s a tiny bit of text that can stop a privilege escalation attack dead in its tracks.
Does it break anything? Like, if a service needs to escalate for a legitimate reason?
Occasionally, yes. Some very complex services that spawn sub-processes with different U-I-Ds might complain. But for ninety-five percent of the stuff we run—web servers, databases, automation tools—it’s a free security win. If a service breaks when you turn it on, it's actually a great learning moment to figure out why it needed those extra privileges in the first place.
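In Compose it is exactly that one line, and pairing it with dropped capabilities compounds the win—the image name is an example:

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    security_opt:
      - no-new-privileges:true   # no setuid-based escalation, ever
    cap_drop:
      - ALL                      # then cap_add back only what provably breaks
```

If the service fails to start with `cap_drop: ALL`, that is the learning moment Herman describes: add capabilities back one at a time until it works.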
That is exactly the kind of deep cut we love. It’s simple, it’s effective, and it leverages the core power of the Linux kernel. Daniel, thanks for the prompt—this was a great excuse to poke around the concentric circles of security.
It really was. And I think the big takeaway is that while the threats are getting more industrialized, as Cloudflare puts it, the defensive tools are also becoming more accessible. You don't need a million-dollar enterprise budget to build a very secure sandbox in your living room. You just need a bit of curiosity and the willingness to read the man pages for namespaces.
Speak for yourself, Herman. I’ll stick to reading the summaries. But in all seriousness, security is a process, not a destination. Start with one service, harden it, and then move to the next. Don't try to do it all in one night or you’ll just end up with a broken smart home and a very frustrated family.
True. Secure but broken is a very short-lived state in most households. Balance is key. If your family can't turn on the lights because you've over-sandboxed the MQTT broker, they're going to start asking you to just turn all that security stuff off.
And that's when the real danger starts. The "I'll just fix it later" temporary bypass that becomes permanent.
Build your walls, but make sure the doors still work for the people who live there.
Well, that's our deep dive into the blast radius. If you’re enjoying the show, a quick review on your podcast app—Spotify, Apple, wherever—really helps us reach more curious minds.
Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning. And a big thanks to Modal for providing the GPU credits that power this whole generation pipeline.
This has been My Weird Prompts. You can find us at myweirdprompts dot com for all the links and the RSS feed.
Stay curious, and keep those namespaces tight.
See you next time.