This episode answers three operational questions about self-hosting with Tailscale: exit node safety, hairpin routing, and custom DNS. On exit nodes, traffic from a laptop in Tokyo truly routes through a home machine in Jerusalem, appearing as the home IP to any website. The safety model is deliberate — you must advertise the capability on the device, approve it in the admin console, and select it on your client. No ports are opened, and only authenticated tailnet members can use it. The attack surface is your tailnet authentication itself, not an exposed gateway. For performance, same-country routing adds under 20ms of latency, while cross-continent routes add 200-300ms — noticeable but usable. Tailscale avoids hairpinning by establishing direct peer-to-peer WireGuard tunnels between devices on the same local network, keeping traffic entirely local. DERP relays are a fallback only when direct connections fail. For custom DNS, Tailscale's MagicDNS integrates with the tailnet, and you can override nameservers per-device or use the global DNS configuration page to point to your own resolver without Cloudflare.
#2695: Self-Hosting Tailscale Exit Nodes Safely
How to safely route traffic through your home from anywhere using Tailscale exit nodes — without exposing your network.
Episode Details
- Episode ID
- MWP-2856
- Published
- Duration
- 37:27
- Audio
- Direct link
- Pipeline
- V5
- TTS Engine
- chatterbox-regular
- Script Writing Agent
- deepseek-v4-pro
- Topics
- networking, vpn, privacy
AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.
Downloads
Transcript (TXT)
Plain text transcript file
Transcript (PDF)
Formatted PDF with styling
Never miss an episode
New episodes drop daily — subscribe on your favorite platform
New to the show? Start here
Daniel sent us this one — and it's basically three concrete questions wrapped in a larger observation about how self-hosting security has evolved. He's been through the Cloudflare learning curve, got Tailscale running, and now he's wondering how to run exit nodes without accidentally exposing his network, whether Tailscale's routing is smart enough to avoid hairpinning, and whether you can use custom DNS without Cloudflare. It's a practical, operational set of questions.
They're good ones, because this is exactly where people get stuck. They set up the tailnet, they add their devices, and then they stare at it thinking, okay, now what do I actually do with this? There's a gap between installation and utility that the documentation doesn't always bridge well.
By the way, today's episode is powered by DeepSeek V four Pro.
Oh nice — welcome to the show, DeepSeek.
Let's start with exit nodes, because Daniel's framing of this is exactly right. He's asking, how do you do it safely, and does your traffic really flow through your house from the other side of the world? The short answer to that second question is yes, it absolutely does. If you're in a cafe in Tokyo and you've got an exit node running on a machine in your living room in Jerusalem, your web traffic is encrypted through the tailnet, it hits your home machine, and it exits onto the public internet from your home IP address. To any website you visit, you appear to be at home.
The wild thing is how seamless it is. You toggle it on in the Tailscale client, and suddenly your public IP is your home IP. No VPN configuration files, no manual routing tables, no OpenVPN profiles you have to email to yourself. It's a single toggle. But Daniel's real question is about the safety concern, and I think that's where we need to be precise about what an exit node actually does and doesn't do.
Right, because the fear is, if I set this up wrong, am I punching a hole in my network that some Russian cybercriminal is going to crawl through?
The answer is no, and here's why. An exit node is not the same thing as opening a port on your router. When you designate a machine as an exit node, what you're doing is telling Tailscale that this device is authorized to forward traffic from other devices on your tailnet out to the internet. The key word there is authorized. Only devices that are already authenticated members of your tailnet can use it. It's not a public gateway. It's not listening on an exposed port. The control plane handles all of the coordination, and the traffic between your laptop and the exit node is encrypted via WireGuard.
The attack surface is, someone would need to first compromise your tailnet itself. They'd need to get a device authenticated onto your network. At that point, whether you have an exit node or not, you've got bigger problems.
And Tailscale's authentication model is tied to identity — it's not just a pre-shared key you paste into a config file. You're authenticating with Google, GitHub, Microsoft, or whatever identity provider you've configured. So the security model is, you trust the devices on your tailnet because you control which identities can add devices. The exit node doesn't weaken that model.
Now, there is one thing to be aware of, and it's more of a performance consideration than a security one. When you set up an exit node, you should think about which machine you're designating. Don't pick the Raspberry Pi three that's already struggling to run Home Assistant and Pi-hole. Pick something with a little headroom. The encryption and decryption isn't computationally trivial, especially if you're routing a lot of traffic through it.
You also want it to be a machine that's always on. There's nothing more annoying than being on the road, toggling your exit node, and realizing the machine you designated went to sleep three hours ago. I'd recommend using a small always-on device — could be a Raspberry Pi four or five, could be an old Intel NUC, could be a dedicated VM on a home server. Something that you know will be awake and connected.
Daniel mentioned a really specific use case that's worth unpacking, because it's not the one most people think of. The typical VPN use case is privacy — you want to appear to be somewhere you're not. But he's talking about the inverse. He's in Israel, where some sites are geo-restricted or his authentication is tied to his home IP. He wants to appear to be at home when he's not. That's a legitimate, non-nefarious reason to want your traffic to flow through your house.
It's more common than people realize. You've got banking sites that flag logins from unexpected locations. You've got streaming services with regional licensing. You've got internal work tools that are IP-whitelisted. Being able to say, I'm always connecting from my home IP, regardless of where I physically am, that solves a real problem. And it's not just about access — it's about reducing friction. You don't get the security challenge emails. You don't get the please verify your identity prompts. You just work.
There's also a use case Daniel didn't mention but that I think is worth flagging. If you're on public WiFi — airport, hotel, cafe — routing through your exit node means all of your traffic is encrypted between your device and your home network. The public WiFi operator sees nothing but encrypted WireGuard packets. They can't inspect your DNS queries, they can't inject ads, they can't track what sites you're visiting.
That's a big one. Public WiFi is a surveillance swamp, and an exit node is a clean way to tunnel through it without paying for a commercial VPN service that you may or may not trust. You're your own VPN provider, essentially.
Let's talk about how to actually set this up safely, step by step, because Daniel asked for that specifically. Step one, install Tailscale on the machine you want to be your exit node. Same process as any other device — download the client, authenticate, it joins your tailnet. Step two, you need to advertise the exit node capability. This is a deliberate step. You don't just toggle something in the GUI. You run a command on the device.
The command is tailscale up dash dash advertise dash exit dash node. That tells the control plane, this device is willing to be an exit node. But critically, that's just advertising the capability. It doesn't automatically route traffic. You haven't actually enabled anything yet.
This is the part where the safety concern gets addressed. After you run that command, you go to the Tailscale admin console, find the device, and you'll see it's now showing an exit node badge. But no traffic is routing through it yet. You have to explicitly approve it in the admin console. Then, on your client device — your laptop, your phone — you go into the Tailscale menu, you select use exit node, and you pick the device you just approved. That's three deliberate actions: advertise, approve in console, select on client. You can't accidentally expose anything.
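For reference, the CLI side of that three-step flow looks roughly like this. The approval step itself happens in the admin console, not on the command line, and "home-server" is a placeholder hostname:

```shell
# Step 1: on the machine that will act as the exit node,
# advertise the capability to the control plane.
sudo tailscale up --advertise-exit-node

# Step 2: approve it in the admin console
# (Machines -> your device -> Edit route settings -> allow exit node).

# Step 3: on the client, select the approved exit node.
# The GUI toggle is equivalent to:
tailscale set --exit-node=home-server

# Stop routing through it:
tailscale set --exit-node=
```

Nothing routes until all three steps are done, which is exactly the safety property discussed above.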
You can revoke it instantly from the admin console. If you ever think something is off, you click one button and the exit node capability is gone. The device is still on your tailnet, it's still accessible, but it no longer routes traffic. The blast radius of a mistake is effectively zero.
One thing I'd add, and this is more of a network hygiene point. If you're going to run an exit node, make sure the machine itself is reasonably secured. Keep it updated. Don't run a bunch of random services on it. The exit node isn't exposing those services to the internet, but if someone compromises that machine through some other vector, they'd have a foothold on your tailnet. So treat it like you'd treat any device that sits at a network boundary.
That's good advice. And I'd also say, don't make your primary workstation your exit node. It's tempting because it's always on and it's powerful, but your workstation is where you open attachments, click links, install software. It's the device most likely to encounter something nasty. Use a dedicated device or a server that isn't used for day-to-day browsing.
To answer Daniel's first question directly: yes, your traffic literally flows through your house from anywhere in the world, yes it's safe if you follow the three-step approval process, and no you're not exposing ports or creating a public gateway. The risk surface is your tailnet authentication, which you already need to trust.
Let's talk about performance, because that's the natural follow-up. What's the latency like? If you're in Tokyo and your exit node is in Jerusalem, you're adding roughly two hundred to three hundred milliseconds of round-trip time. That's noticeable for interactive web browsing but not unusable. For streaming, once the buffer fills, you won't notice it. For anything that's not latency-sensitive, it's fine.
If you're in the same country, maybe a city or two away, the overhead is often under twenty milliseconds. You genuinely won't feel it. And if you're on the same local network, Tailscale is smart enough to route directly — which actually brings us to Daniel's second question about hairpinning.
Right, and this is where Tailscale really shines compared to a traditional VPN or a Cloudflare tunnel. Daniel's asking, if he's at home on his laptop, and he's trying to reach a service on his home server, and he's using the tailnet address, does the traffic go out to the internet and come back, or does it stay local?
The answer is, Tailscale uses something called direct peer-to-peer connections. When two devices on your tailnet need to communicate, Tailscale's coordination server — they call it the control plane — helps them establish a direct WireGuard tunnel between each other. The control plane is involved in the handshake, but the actual data traffic never passes through Tailscale's servers. It goes directly from device to device.
Here's the key part for the hairpinning question. Tailscale doesn't just blindly route everything through some central relay. It probes to see if a direct connection is possible. If both devices are on the same local network, Tailscale will discover that and establish the WireGuard tunnel directly over the local network. Your traffic goes from your laptop to your server over your home WiFi or Ethernet. It never leaves your house. It never touches the internet.
This is the NAT traversal magic Tailscale layers on top of WireGuard. It uses STUN and ICE-style techniques — the same stuff that WebRTC uses for peer-to-peer video calls — to figure out the best path between two devices. If they're behind the same NAT, which is what a home network is, the path is direct. Zero additional latency beyond whatever your local network already has.
Tailscale has a feature called DERP — Designated Encrypted Relay for Packets — which is the fallback when a direct connection can't be established. If both devices are behind restrictive NATs that won't allow a direct peer-to-peer connection, the traffic relays through a DERP server. Tailscale runs public DERP servers around the world, and you can also run your own custom DERP server if you want to keep relay traffic within your own infrastructure.
— and this is the important but — DERP is the fallback, not the default. Tailscale will always try direct first. And on a home network, direct will succeed essentially one hundred percent of the time. So Daniel's concern about hairpinning is valid in theory, but Tailscale's architecture is designed specifically to avoid it.
I've tested this. I've got a server running on my home network, and when I'm on the same network and I SSH to it using the tailnet address, the latency is sub-millisecond. It's indistinguishable from using the local IP address. If the traffic were going out to the internet and back, I'd see at least a few milliseconds. It's clearly staying local.
For Daniel's deployment workflow question — should he use the tailnet address in his agent skill instead of conditional logic based on whether he's at home — the answer is yes, absolutely. The tailnet address is stable, it works from anywhere, and when he's at home it routes locally. He doesn't need the if-at-home-use-local-IP logic. He can just use the tailnet address and it'll always take the optimal path.
That's the real operational win of Tailscale. You stop thinking about network topology. You don't need to know whether you're on the same subnet or behind a NAT or connected via VPN. Every device on your tailnet has a stable IP address in the one hundred dot sixty-four dot zero dot zero slash ten range — that's the CGNAT range Tailscale uses — and that address is the same whether you're at home, at a coffee shop, or on another continent. Your scripts and configurations don't need to change based on location.
There's a subtlety here worth mentioning. The one hundred dot sixty-four address is the tailnet IP, and that's what you'd use for service-to-service communication within the tailnet. But Tailscale also gives you a DNS name for each device — it's usually hostname dot tailnet-name dot ts dot net. That resolves to the tailnet IP. So you can use either the IP or the DNS name in your configurations, and both will route optimally.
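That CGNAT block is easy to sanity-check in code. A minimal sketch using Python's standard ipaddress module (the addresses tested are illustrative):

```python
import ipaddress

# Tailscale hands out tailnet IPs from the CGNAT block 100.64.0.0/10.
TAILNET_BLOCK = ipaddress.ip_network("100.64.0.0/10")

def is_tailnet_ip(addr: str) -> bool:
    """True if addr falls inside the CGNAT range Tailscale uses."""
    return ipaddress.ip_address(addr) in TAILNET_BLOCK

print(is_tailnet_ip("100.101.102.103"))  # True
print(is_tailnet_ip("192.168.1.10"))     # False
```

A check like this can be handy in deployment scripts to confirm you're binding a service to the tailnet interface rather than a LAN address.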
MagicDNS is the feature that makes this work. When you enable MagicDNS in the Tailscale admin console, every device on your tailnet gets a fully qualified domain name. You don't have to remember one hundred dot sixty-four dot something dot something. You just use the hostname. And MagicDNS handles resolution, so even when you're using the DNS name, the traffic still takes the direct peer-to-peer path when possible.
Which is a nice segue into Daniel's third question about custom DNS. He's asking, can he use a domain he owns — like danielsmarthome dot com — and have it resolve to a service on his tailnet, without involving Cloudflare?
The honest answer is, it depends on what exactly you're trying to do, and there's a spectrum of approaches. Let me break it down. The simplest case is, you want a custom domain that only you and people on your tailnet can access. If you're the only user, or you've shared your tailnet with family members, you can absolutely do this entirely within Tailscale.
Tailscale supports custom DNS records through something called split DNS. In the admin console, under the DNS section, you can define custom nameservers for specific domains. So you could say, for danielsmarthome dot com, use this internal DNS server. Then you'd run a DNS server on your tailnet — something like CoreDNS or Pi-hole — that's configured to resolve danielsmarthome dot com to the tailnet IP of your service. When anyone on your tailnet tries to access that domain, MagicDNS routes the query to your internal DNS server, which returns the tailnet IP, and the connection happens directly over the tailnet.
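The routing decision split DNS makes can be sketched as a toy resolver shim. Everything here is illustrative (the zone name, the records, the 100.64 addresses); in a real setup this logic lives in CoreDNS or Pi-hole, not your own code:

```python
# Toy split-horizon resolution: names under the internal zone are
# answered from local records (tailnet IPs); everything else is
# deferred to whatever public resolver the caller supplies.
INTERNAL_ZONE = "danielsmarthome.com"
INTERNAL_RECORDS = {
    "danielsmarthome.com": "100.64.0.5",
    "media.danielsmarthome.com": "100.64.0.7",
}

def resolve(name: str, public_lookup) -> str:
    in_zone = name == INTERNAL_ZONE or name.endswith("." + INTERNAL_ZONE)
    if in_zone:
        # The internal server is authoritative for its zone:
        # no fallthrough to public DNS for internal names.
        if name not in INTERNAL_RECORDS:
            raise LookupError(f"no internal record for {name}")
        return INTERNAL_RECORDS[name]
    return public_lookup(name)

print(resolve("media.danielsmarthome.com", lambda n: "203.0.113.1"))  # 100.64.0.7
print(resolve("example.org", lambda n: "203.0.113.1"))               # 203.0.113.1
```

The key design point the sketch captures is that zone membership, not reachability, decides which answer you get — which is exactly why tailnet members see tailnet IPs while everyone else sees the public records.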
That's elegant. No Cloudflare involved at all. The domain never needs to exist on the public internet.
Right, but there's a catch. This only works for devices on your tailnet. If you want someone who is not on your tailnet to access danielsmarthome dot com, you need public DNS. And that's where Cloudflare or another public DNS provider comes in. You'd register the domain, point it at Cloudflare's nameservers, and then create DNS records. But what do those records point to?
If the service is only accessible via Tailscale, a public DNS record pointing to a tailnet IP is useless to anyone not on the tailnet. The one hundred dot sixty-four address isn't publicly routable.
So you have a few options. Option one, you use Cloudflare Tunnels to expose the service publicly, and then the domain points to Cloudflare's infrastructure. That's the Cloudflare-plus-Tailscale hybrid model Daniel mentioned. Option two, you use Tailscale Funnel, which is a feature that lets you expose a service on your tailnet to the public internet through Tailscale's infrastructure. You run a command like tailscale funnel port number, and Tailscale gives you a public URL. But that URL is going to be your machine's name dot ts dot net, not your custom domain.
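As a sketch, the Funnel flow is a single command. The port is a placeholder for whatever your service listens on, and exact subcommands vary a bit between Tailscale versions:

```shell
# Expose a local service to the public internet via Tailscale's
# infrastructure. 3000 is a placeholder port.
tailscale funnel 3000

# Tailscale prints a public HTTPS URL at the node's MagicDNS name,
# something like https://host.tailnet-name.ts.net/

# Inspect what's currently exposed, and tear it down:
tailscale funnel status
tailscale funnel reset
```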
Option three is what I'd call the reverse proxy approach. You run a reverse proxy on a machine that's on your tailnet — something like Caddy or Nginx — and you point your public DNS at that machine's tailnet Funnel URL or at a Cloudflare Tunnel endpoint. The reverse proxy handles routing to different services based on the hostname. You get your custom domain, and the backend services are only accessible via the tailnet. But you're still using something public-facing to get the traffic into the tailnet.
The short answer is, if you want a fully private setup where your custom domain only works for tailnet members, you can do that entirely within Tailscale using split DNS and an internal DNS server. No Cloudflare required. If you want the domain to work for anyone on the public internet, you need some kind of public-facing gateway, and that's where Cloudflare or Tailscale Funnel enters the picture.
I think that's actually a reasonable architecture. There's nothing wrong with using both Cloudflare and Tailscale. They solve different problems. Cloudflare handles public-facing traffic, DDoS protection, caching, all the edge stuff. Tailscale handles the secure internal mesh between your devices. They complement each other.
Daniel mentioned that he finds himself needing both, and I think that's not a failure of understanding — it's actually the correct design for a lot of setups. The question is whether you can reduce the Cloudflare dependency if you want to, and the answer is, for internal-only services, yes. For anything public, you probably still want something sitting at the edge.
Let me add one more thing about the split DNS approach, because it's powerful and under-documented. You can use it to override public DNS for tailnet members. So let's say you do have danielsmarthome dot com pointed at Cloudflare for the general public. But for devices on your tailnet, you want that domain to resolve directly to the tailnet IP so you get the direct peer-to-peer path and avoid hairpinning through Cloudflare. You can do that. You configure split DNS to say, for danielsmarthome dot com, use my internal DNS server. Your internal DNS server returns the tailnet IP. Now anyone on the tailnet gets the direct path, and anyone off the tailnet goes through Cloudflare. Best of both worlds.
That's actually the answer to Daniel's hairpinning concern about Cloudflare. He mentioned that avoiding hairpinning with Cloudflare requires router-level configuration. With Tailscale's split DNS, you don't need to touch the router. You just configure the DNS override in the admin console, and tailnet members automatically get the local route.
This is why I think Tailscale's approach to DNS is one of its most underrated features. MagicDNS plus split DNS gives you a level of control over name resolution that would normally require running your own DNS infrastructure with conditional forwarding rules and split-horizon configurations. Tailscale makes it a few clicks in the admin console.
Let's talk about the self-hosted question briefly, because Daniel mentioned he'd like to park the full self-hosted discussion for later. But it's worth noting that there's an open-source project called Headscale — a community-built reimplementation of the Tailscale control plane, not an official Tailscale product. With Headscale, you run the coordination server yourself. You're not relying on Tailscale's infrastructure at all.
Right, and Headscale is a great project. It's maintained by a developer named Juan Font and a community of contributors. It implements the same coordination protocol, so you can use the standard Tailscale clients — they just point at your Headscale server instead of Tailscale's control plane. Your devices still establish direct peer-to-peer connections. The DERP relays can be self-hosted too.
The trade-off is operational complexity. You're now responsible for the availability and security of the coordination server. If it goes down, your tailnet can't establish new connections. Existing connections continue to work because they're peer-to-peer, but you can't add devices or re-authenticate. So you need to think about redundancy, backups, monitoring.
You lose some of the nice things Tailscale's hosted control plane provides. The admin console UI. The automatic ACL management. The integration with identity providers. With Headscale, you're managing ACLs as JSON configuration files. It's not impossible, but it's more work. For most home users, the hosted Tailscale is free for up to three users and one hundred devices, which is more than enough. Headscale makes sense if you have a philosophical commitment to self-hosting everything, or if you have compliance requirements that prevent you from using a third-party control plane.
Or if you're just the kind of person who enjoys running infrastructure. Which, given that Daniel is asking about deploying agent skills to self-hosted resources, he might be that kind of person.
And if you are that kind of person, Headscale is a well-documented, actively maintained project. But I'd recommend getting comfortable with Tailscale's hosted version first, understanding the concepts, and then deciding whether the self-hosted control plane is worth the operational burden.
One thing I want to circle back to, because Daniel mentioned it and I think it's the emotional core of his prompt. He talked about the anxiety of messing something up and exposing his network. And I think that anxiety is real and valid, but it's worth naming explicitly: Tailscale's design philosophy is that you shouldn't have to think about network security at the port and firewall level. The security boundary is identity. You authenticate a device. You define ACLs that say what that device can access. You don't open ports. You don't configure firewall rules. You don't worry about whether you accidentally exposed port twenty-two to the internet because you misconfigured a Docker container.
The ACL system is worth explaining here, because it's the mechanism that enforces this. In the Tailscale admin console, you define rules in a JSON-like syntax. The default rule is, every device on the tailnet can talk to every other device. But you can lock that down. You can say, my laptop can access my home server on port four four three, and my phone can access nothing except the exit node, and my IoT devices can only talk to the MQTT broker on port eighteen eighty-three. And these rules are enforced at the node level, so even if someone compromises a device, they can't use it to pivot to other things on your tailnet beyond what the ACLs allow.
This is the answer to the question Daniel didn't quite ask but was clearly thinking about. How do I experiment with these features without accidentally blowing a hole in my security? The answer is, start with restrictive ACLs. When you set up an exit node, you can write an ACL that says only my laptop and my phone can use this exit node. If you somehow messed up the exit node configuration, the blast radius is those two devices. Your home server, your IoT network, your NAS — they're not affected.
You can also use tags to group devices and write ACLs that reference those tags. So instead of writing rules for individual devices, you create a tag called exit dash node dash users, you apply it to your laptop and phone, and then the ACL says, devices with tag exit dash node dash users can use exit nodes with tag home dash exit dash nodes. It's clean, it's auditable, and it's easy to update as you add or remove devices.
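Sketched in the policy file, that tag setup might look something like the following. The tag names are the illustrative ones from above, and "autogroup:internet" is how the policy grammar expresses access to the outside world through an exit node; treat this as a starting point and check it against the current ACL documentation:

```json
{
  "tagOwners": {
    "tag:exit-node-users": ["autogroup:admin"],
    "tag:home-exit-node": ["autogroup:admin"]
  },
  "acls": [
    {
      "action": "accept",
      "src": ["tag:exit-node-users"],
      "dst": ["autogroup:internet:*"]
    }
  ]
}
```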
I want to emphasize something Herman said earlier that might have gone by quickly. The default ACL in Tailscale is permissive — every device can talk to every other device. For a home user with five or six devices they trust, that's probably fine. But if you're the kind of person who's worried about exposure, changing that default to a deny-all and then explicitly allowing what you need is a fifteen-minute exercise that gives you a lot of peace of mind.
Tailscale has a feature called ACL tests that lets you validate your rules before applying them. You can write a test that says, device A should be able to reach device B on port four four three, device C should not be able to reach device B at all. Run the test, see if it passes, then apply. It's a safety net.
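A test stanza in the same policy file might look roughly like this (the identity and hostname are placeholders):

```json
{
  "tests": [
    {
      "src": "laptop@example.com",
      "accept": ["home-server:443"],
      "deny": ["home-server:22"]
    }
  ]
}
```

If a rule change would break one of these expectations, the console rejects the save, which is exactly the safety net described above.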
Let's pull this together for Daniel's specific questions, because that's what he asked for. Exit nodes: safe, just follow the three-step process, use a dedicated always-on device, start with restrictive ACLs. Hairpinning: not a problem, Tailscale establishes direct peer-to-peer connections on local networks, no traffic leaves your house. Custom DNS: fully possible within Tailscale for internal-only access using split DNS and an internal DNS server. For public access, you'll want Cloudflare or Tailscale Funnel, and split DNS can give tailnet members a direct local path even when the public DNS points elsewhere.
The through-line in all of this is that Tailscale changes the security model from network-level to identity-level. You stop thinking about IP addresses and ports and start thinking about devices and what they're allowed to do. It's a mental shift, and it takes time, but once you make it, a lot of the anxiety Daniel described just goes away.
I think there's one more thing worth saying about the exit node use case, because Daniel framed it as kind of wild — the idea that your web traffic is flowing through your house from the other side of the world. And it is wild. But it's also increasingly normal. The boundary between local and remote has been dissolving for years. We've got people working from home accessing office resources, people traveling accessing home resources, services distributed across multiple physical locations. The idea that your network is a physical place is increasingly a fiction.
That's exactly what a software-defined network like Tailscale provides. Your network isn't a physical location. It's a set of devices that trust each other, regardless of where they physically are. The exit node is just the most dramatic illustration of that principle. Your laptop in Tokyo and your server in Jerusalem are on the same logical network. Of course your traffic can flow through your house. They're in the same network.
Which is why, to Daniel's point about deployment workflows, using the tailnet address as the canonical address for services makes so much sense. The physical location stops mattering. You don't need conditional logic. You don't need to know where you are. You just connect to the tailnet address, and the network figures out the best path.
I want to add a practical tip about exit nodes that I haven't seen documented well. If you're using an exit node for geo-restricted content or IP-based authentication, be aware that some services do additional checks beyond your IP address. They might look at your browser's language settings, your timezone, your GPS if you're on mobile. An exit node alone won't spoof all of those. So if you're trying to access a service that's really determined to know where you are, you might need to think about those additional signals.
That's a good catch. The IP address is the biggest signal, but it's not the only one. On a phone, for example, many apps request location permissions and use that for geo-verification regardless of your IP. On a laptop, browser fingerprinting can reveal your timezone. If you're routing through Jerusalem but your system clock says Tokyo, that's a mismatch that some services will flag.
It's not a security problem, it's just a practical limitation. The exit node does exactly what it says — it routes your traffic. It doesn't virtualize your entire device environment. For most use cases — accessing your bank, streaming content, bypassing IP restrictions — it works fine. But if you run into something that still blocks you, check those other signals.
Now: Hilbert's daily fun fact.
Hilbert: In nineteenth-century England, a profession known as a pure finder involved collecting dog excrement from the streets and selling it to tanneries, where it was used in the leather-making process to soften hides. The unintended consequence was that tanneries clustered in neighborhoods where pure finders operated, creating some of history's first industrial zoning patterns driven entirely by the availability of dog waste.
A supply chain I had not considered.
I need to sit with that one for a while.
To close out, I think the forward-looking question here is about how much further this abstraction can go. We've gone from physical networks to software-defined networks. We've gone from IP-based security to identity-based security. The next step, which is already happening, is that the network itself becomes invisible. You don't think about it at all. You just access your services, and the underlying mesh handles routing, encryption, authentication, and optimization. Tailscale is pushing hard in that direction with things like Tailscale Funnel and their Kubernetes operator.
I think the open question is whether that level of abstraction is actually desirable for home users. There's a point where the network becomes so invisible that you lose the ability to debug it when something goes wrong. If your connection to your Home Assistant is slow, is it a Tailscale issue, a WiFi issue, an ISP issue, a server load issue? The more layers of abstraction, the harder it is to know where to look. That's not an argument against Tailscale — it's an argument for understanding enough about how it works to troubleshoot effectively.
Which is, fittingly, exactly what Daniel is doing by asking these questions. He's not just using the tools. He's trying to understand the routing decisions, the DNS resolution, the security boundaries. That's the right posture. Use the abstraction, but know what's underneath it.
Thanks to Hilbert Flumingtop for producing. This has been My Weird Prompts. Find us at myweirdprompts dot com or wherever you get your podcasts.
Until next time.
This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.