Imagine you are finally on that vacation you have been planning for six months. You are sitting on a beach, the sun is hitting just right, you have a cold drink in your hand, and suddenly your phone buzzes. It is a notification from your server. Or worse, you try to check your home security cameras and the app just spins. You try to log into your fancy monitoring dashboard, the one with the thirty different dials and the glowing neon green graphs, but it won't load. Why? Because the dashboard runs on the very server that just went dark. Now you are thousands of miles away, staring at a digital brick, and you have no idea if it is a blown power supply, a kernel panic, or if your cat just tripped over the ethernet cable.
That is the ultimate home lab nightmare, Corn. It is the "black box" failure. And honestly, it is a scenario that more people are facing now in twenty twenty-six than ever before. We have seen this massive explosion in the self-hosting community. People are buying these incredibly powerful little mini PCs, they are setting up massive NAS builds with affordable high-capacity drives, and they are trying to de-cloud their lives. But the one thing they often forget is that when you become your own IT department, you also become the guy who has to fix the server at three in the morning from a hotel room in another country. By the way, listeners, today's episode is powered by Google Gemini three Flash, which is helping us navigate this world of "good enough" resilience. I'm Herman Poppleberry, and we are diving deep into the art of not over-engineering your peace of mind.
Today's prompt from Daniel is about exactly that. He's asking about resource monitoring for the "over-teched" home-labber or the small business owner who doesn't want to spend their entire weekend babysitting a Prometheus stack. Daniel wants to know: what are the "good enough" ways to know things are running, and more importantly, how do you get back in when the front door is locked and the windows are barred?
I love this framing because it touches on the psychological side of tech. We often build these systems because we want control, but then the system itself becomes a source of anxiety. If your monitoring system is so complex that it requires its own monitoring system, you've moved away from "control" and into "unpaid internship for your own hardware."
Right, I don't want a NASA mission control center in my closet if I'm just trying to host some family photos and a VPN. I want to sleep soundly. So, let's start with the "Good Enough" monitoring stack. Most people hear "monitoring" and they think Grafana. They think beautiful, complex dashboards that look like something out of a sci-fi movie. But if the goal is just knowing "is it alive?", isn't that a bit much?
It's totally a bit much for most people. Look, Grafana and Prometheus are incredible. If you are running a cluster of a hundred nodes for a mid-sized startup, you need that level of granularity. You need to know the exact millisecond of latency on your database queries. But for a home lab or a small office? You really only need two things: a way to know if it's dead, and a way to know why it died. The most common mistake is hosting your monitoring on the same hardware you're monitoring. It's like putting the fire alarm inside the safe. If the safe catches fire, you'll never hear the alarm.
That’s a perfect analogy. It’s the "circular dependency" problem. If the server goes down, the thing that tells me the server is down is also down. So I’m just sitting there in blissful ignorance while my data is cooking. But wait—if I’m not doing the whole enterprise stack, how am I getting that "is it alive" signal without spending four hours a week configuring YAML files?
You start with the Heartbeat Check. This is the simplest, most robust mechanism there is. In the industry, we sometimes call it the "Dead Man's Snitch." Instead of a monitor outside trying to poke your server to see if it's awake—which can be blocked by firewalls or ISP issues—your server sends a "ping" out to a third-party service every few minutes. It basically says, "I'm still here, everything is fine." If that service doesn't hear from your server within the expected window, it sends you an alert.
That's clever. It inverts the logic: if the power goes out, the server stops talking, and the external service realizes the silence is the problem. What are people using for this in twenty twenty-six?
Healthchecks dot io is the gold standard for this. It is incredibly simple. You get a unique URL, you set up a tiny cron job or a systemd timer on your server that just runs a "curl" command to that URL every five minutes. If Healthchecks doesn't get that "curl" hit, it pings your phone. There's also Uptime Kuma, which has become a massive favorite in the self-hosting world. You can host Uptime Kuma yourself, but the pro move is to host it on a completely different piece of infrastructure. Maybe a five-dollar-a-month VPS or even a Raspberry Pi at a friend's house.
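For the show notes: the whole heartbeat setup Herman describes really is a single cron line. The hc-ping.com host is Healthchecks' actual ping endpoint, but the UUID here is a placeholder you would swap for your own check's URL:

```
# /etc/cron.d/heartbeat -- tell Healthchecks "I'm still alive" every five minutes
*/5 * * * * root curl -fsS -m 10 --retry 3 https://hc-ping.com/your-uuid-here > /dev/null
```

The `-m 10` caps the request at ten seconds and `--retry 3` smooths over transient network blips, so a momentary hiccup doesn't trigger a false "server down" alert.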
But wait, if I put Uptime Kuma on a Raspberry Pi at my friend's house, aren't I just creating another thing I have to manage? Now I'm worried about my friend's internet connection. If his cat pulls the plug, I get a notification saying my server is down, even if it’s fine. How do we avoid that kind of false positive?
That’s a valid concern. That’s why the VPS—the Virtual Private Server—is usually the "adult" choice. You spend sixty dollars a year for a tiny slice of a server in a professional data center with redundant power and fiber. It’s significantly more reliable than your home or your friend's house. If the VPS says your house is down, your house is almost certainly down. It becomes your "Source of Truth" because the data center backs its uptime with a ninety-nine point nine percent guarantee.
I like the VPS idea because it's truly external. If my home internet goes down, the VPS is still up and can tell me, "Hey, I can't see your house anymore." But what about the "Can I Reach My Service" check? Because sometimes the server is "up" in the sense that it has power, but the actual website or service I need is broken. For example, what if the Docker daemon crashes but the Linux kernel is still humming along?
That is Mechanism Number Two. This is where you monitor external accessibility. You want a script or a service on that external VPS that doesn't just check for a heartbeat, but actually tries to load your specific service. It tries to fetch your Home Assistant login page or your Nextcloud dashboard. This catches the sneaky failures—like when your reverse proxy crashes, or your Dynamic DNS didn't update, or your ISP changed your IP and your port forwarding broke.
I've had that happen. The server is humming away, totally happy, but the "bridge" to the outside world is gone. It's like the house is fine, but the road leading to it washed away. How do you differentiate between a local network failure and a service failure in that scenario?
You look at the "stack" of alerts. If your Heartbeat check is still hitting Healthchecks dot io, but your Uptime Kuma check on the VPS says the website is down, you know the server has internet access but the web service is borked. If both fail, you know the whole house is likely offline. It’s a process of elimination that takes ten seconds instead of two hours of troubleshooting.
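For listeners who like to see it written down, that ten-second process of elimination can be captured in a tiny script. A sketch, where the two check states arrive as plain string arguments rather than from any real monitoring API:

```shell
#!/bin/sh
# Sketch of the "stack of alerts" elimination logic: combine the two
# independent signals (heartbeat + external HTTP check) into a diagnosis.
diagnose() {
    heartbeat="$1"   # up|down: did Healthchecks hear from the server recently?
    http="$2"        # up|down: could the VPS load the actual service page?
    if [ "$heartbeat" = "up" ] && [ "$http" = "up" ]; then
        echo "all good"
    elif [ "$heartbeat" = "up" ] && [ "$http" = "down" ]; then
        echo "server online, web service down"
    elif [ "$heartbeat" = "down" ] && [ "$http" = "down" ]; then
        echo "whole site offline"
    else
        echo "service up but heartbeat missing, check the cron job"
    fi
}

diagnose up down     # prints: server online, web service down
```

The point is that each branch tells you where to start looking, instead of leaving you to guess.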
That makes total sense. And that leads to Mechanism Number Three, which is the leading indicator of failure: the "Is My Disk Full" check. This is the silent killer of home labs. You are running a media server, or you're doing automated backups, and suddenly your root partition hits one hundred percent. Everything starts crashing. Databases get corrupted. Logs can't be written. It's a mess. A simple script that checks disk usage and sends a webhook notification to Discord or Slack when a drive hits eighty-five percent is worth its weight in gold.
I can't tell you how many times a simple log file has ballooned and killed a production database because someone forgot to rotate the logs. In a home lab, it’s usually a download folder or a backup script that went haywire. If you get a notification at eighty percent, you have days to fix it. If you wait until one hundred percent, you're looking at a Sunday afternoon of command-line surgery.
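A minimal sketch of the disk check the hosts are describing, assuming a Discord-style webhook whose URL you supply yourself via an environment variable (the URL and the eighty-five percent threshold are the only knobs):

```shell
#!/bin/sh
# Hypothetical disk-usage alert: walks every mounted filesystem and posts to
# a webhook once usage crosses THRESHOLD percent. With WEBHOOK_URL unset it
# just prints locally, so you can dry-run it safely.
THRESHOLD=85
WEBHOOK_URL="${WEBHOOK_URL:-}"

over_threshold() {
    # $1 = usage percent, $2 = threshold; prints ALERT or OK
    if [ "$1" -ge "$2" ]; then echo "ALERT"; else echo "OK"; fi
}

df -P | tail -n +2 | while read -r _ _ _ _ pct mount; do
    pct="${pct%%%}"                                 # "87%" -> "87"
    case "$pct" in *[!0-9]*|"") continue ;; esac    # skip odd pseudo-filesystems
    [ "$(over_threshold "$pct" "$THRESHOLD")" = "ALERT" ] || continue
    msg="Disk $mount on $(uname -n) is at ${pct}% (threshold ${THRESHOLD}%)"
    if [ -n "$WEBHOOK_URL" ]; then
        curl -fsS -H 'Content-Type: application/json' \
             -d "{\"content\":\"$msg\"}" "$WEBHOOK_URL"
    else
        echo "$msg"
    fi
done
```

Run it from a daily cron job or systemd timer and the eighty-five percent default leaves you that breathing room to act before anything corrupts.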
But how do we actually get those notifications out? If the disk is full, often the system can't even spawn a new process to send the alert. It’s a bit of a catch-22, isn't it?
That’s why you set the threshold at eighty or eighty-five percent. You want the alert to fire while there is still "breathing room" for the OS to function. If you wait until ninety-nine percent, the notification might never leave the box. And here is a pro-tip: use an external service like Gotify or Pushover. These are lightweight notification servers. You send a simple HTTP request, and it pops up on your phone. You don't want to rely on your server's ability to send an email, because setting up a mail transfer agent is a nightmare that belongs in the nineteen-nineties.
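To make the Pushover point concrete: the entire "integration" is one HTTP call against Pushover's documented messages endpoint. The token and user key below are placeholders:

```
# One POST and the alert is on your phone. APP_TOKEN / USER_KEY are placeholders.
curl -s --form-string "token=APP_TOKEN" \
     --form-string "user=USER_KEY" \
     --form-string "message=Root partition nearly full on homelab" \
     https://api.pushover.net/1/messages.json
```

Gotify is the same idea, just pointed at a notification server you host yourself instead of a hosted one.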
It sounds like the "good enough" approach is really about identifying the three or four things that actually cause ninety percent of the downtime. It's Pareto's principle for nerds. But let's look at a case study. Say I have a Proxmox host. It's running my whole life. Suddenly, the kernel hangs. The network interface is technically "active" at the hardware level, but the OS is frozen. What does my monitoring show me?
If you are only doing a "ping" check to the IP address, your monitor might stay green because the network card is still responding to those low-level ICMP packets. But your HTTP check to your dashboard will fail immediately. That is why you need that "service level" check. It tells you that the "lights are on but nobody is home." And once you get that alert, that is when the real stress starts. Because now you know it's broken, but you are in a hotel room in Tokyo and your server is in a closet in Chicago.
This is the part of the prompt that really interests me—the Resilient Re-Entry Problem. You know it's broken. You try to SSH in, and... nothing. Connection refused. Or worse, connection timed out. Now what? You can't exactly fly home just to push a power button. And you can’t ask your non-technical roommate to start typing commands into a black screen.
This is where we talk about the "Oh Crap" button. In the enterprise world, we have IPMI or iDRAC. It's a dedicated little computer inside your server that has its own network port and its own power supply. It lets you see the screen, move the mouse, and even toggle the power remotely, even if the main OS is completely dead. But most home lab gear—like those cheap N100 mini PCs or old desktops—doesn't have that.
So what is the "good enough" version for a guy with a three-hundred-dollar mini PC?
The NanoKVM. This has been a game-changer over the last year or two. These are tiny devices, some based on Raspberry Pi tech like the PiKVM and others on RISC-V like the NanoKVM itself, that cost anywhere from thirty to a hundred dollars. You plug it into the HDMI port and the USB port of your server. It basically pretends to be a monitor and a keyboard. It has its own little web server. So you log into the NanoKVM's web interface, and you are literally looking at the BIOS of your server. You can see the kernel panic. You can reboot it. You can even "insert" a virtual USB drive to reinstall the operating system from three thousand miles away.
That is incredible. I remember when getting that kind of "out-of-band" management meant buying a two-thousand-dollar enterprise server that sounded like a jet engine. Now you're telling me I can get that for the price of a nice dinner? How does the KVM itself stay online if the server it's attached to is drawing too much power or causing a local short?
That’s a key detail. You power the NanoKVM from a separate power brick, ideally on a different circuit or at least a different surge protector than the server. If the server's power supply explodes, the KVM stays up. It’s the "black box" on an airplane. It survives the crash so it can tell you what happened. You can even use it to enter the BIOS and change boot orders. It’s like having your hands and eyes physically in the room.
But wait, how do you actually see the KVM interface if your main router is having a seizure? If I'm away from home, I usually connect through a VPN on my router. If the router is down, the KVM is a brick too, right?
This is the "Inception" of networking. You need a way to reach the KVM that doesn't depend on the primary network path. Some people use a secondary, cheap travel router that's bridged to a neighbor's Wi-Fi—with permission, of course—or they use a cellular failover. But for most of us, the "good enough" solution is to make sure the KVM is running its own Tailscale client.
Okay, so the NanoKVM is the "physical" lifeline, with its own Tailscale client keeping it reachable. This brings us to the "Path Back In." Daniel mentioned SSH bastions and tools like Tailscale and Cloudflare Access. How do these fit together? Because if my router is the thing that's failing, neither a KVM nor a heartbeat is going to save my ability to connect.
Think of it in layers. Your first layer of defense is usually an overlay network like Tailscale. I cannot rave about Tailscale enough. It uses WireGuard to create a secure, flat mesh network between all your devices. It doesn't matter if you are behind a restrictive firewall or a double-NAT from a cellular provider. Tailscale just finds a way. If you have Tailscale installed on your phone and on your server, they can talk to each other as if they were on the same Wi-Fi. It is "magic" in the best way possible.
I've used Tailscale, and it really does feel like cheating. No port forwarding, no messing with router settings. But what if the Tailscale daemon on my server crashes? Or what if my main internet line is literally cut by a construction crew down the street? Does Tailscale have a "backup" mode?
Tailscale itself doesn't fix a cut fiber line, but it can run on multiple "exit nodes." That is where the Redundant Connection comes in. This is the "crazy person" level of resilience that Daniel mentioned, but it's becoming very practical. A lot of modern prosumer routers now support a USB LTE or 5G modem. You can get a cheap prepaid SIM card, plug it into your router, and set it to "failover" mode. If your fiber or cable line goes dark, the router automatically switches to the cellular network. It might be slow, but it's enough to let you SSH in or access your NanoKVM to see what's going on.
And if you combine that with a Bastion Host?
Right. An SSH Bastion, or "jump box," is a hardened, minimal server that sits on the edge of your network. Ideally, this is something like a tiny VPS in the cloud. You SSH into the VPS, and from there, you have a secure tunnel back into your home network. It creates a single, highly-audited entry point. Instead of having five different ports open on your home router for five different services—which is a huge security risk—you have zero ports open, and you use the VPS as a relay.
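On the client side, a jump box is just a few lines of ssh_config. A sketch with placeholder host names and addresses:

```
# ~/.ssh/config -- reach the home server by hopping through the VPS
Host bastion
    HostName vps.example.com        # the cheap cloud VPS
    User jump
Host homelab
    HostName 192.168.1.10           # internal address, never exposed publicly
    User admin
    ProxyJump bastion               # tunnel through the bastion automatically
```

After that, `ssh homelab` performs the hop for you; ProxyJump has been built into OpenSSH since version 7.3, so there's no extra tooling to install.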
We actually talked about this concept in a very specific context way back in episode sixteen fifty-six, looking at how to use a VPS relay for GPS trackers. It's the same principle here. You're using an external "fixed point" in the digital universe to bridge the gap to your home lab, which might be moving around or behind all sorts of networking barriers.
It’s a very robust way to live. And if you want to get really fancy, you use Cloudflare Access. This is part of their Zero Trust suite. Imagine you want to access your Home Assistant dashboard. Normally, you'd open a port and hope your password is strong. With Cloudflare Access, you put a "tunnel" between your server and Cloudflare. When you try to go to your URL, Cloudflare intercepts the request and asks you to log in with your Google or GitHub account first. Your server isn't even "visible" to the public internet until you've passed that first layer of authentication.
It's like having a bouncer at the end of the driveway before anyone can even see the front door. I love that because it removes the "anxiety of the open port." I think that's a huge part of why people are hesitant to self-host. They're afraid of being hacked. But what happens if Cloudflare itself has an outage? We’ve seen that happen. Does the "good enough" stack fall apart?
That is why you have Tailscale as the "side door." Cloudflare is the "front door" for your daily convenience. Tailscale is the "secret tunnel" for when the front door is stuck. If Cloudflare goes down, you toggle Tailscale on your phone, and you go straight to the internal IP address. You are never relying on just one path. That is the core of the "good enough" philosophy—redundancy without complexity.
I have to ask, though—what about the hardware side of the "side door"? If my router hardware itself dies, even Tailscale can't save me because there's no gateway to the internet. Do you recommend having a second router just sitting there?
That’s where the "Cold Spare" strategy comes in. You don't necessarily need a second router plugged in and burning electricity. But having a pre-configured, identical router in a box on the shelf? That’s a pro move. If you are away, you can't swap it, but if you're home, it turns a three-day outage waiting for a replacement into a five-minute swap. For remote scenarios, the only real answer is that cellular failover we talked about, but connected to a secondary, smaller router that only handles the management traffic.
And when you combine these "good enough" tools—Uptime Kuma for monitoring, a NanoKVM for emergency hardware access, Tailscale for daily use, and a 5G failover for when the backhoe hits the fiber—you aren't just a guy with a hobby in a closet. You're running a more resilient infrastructure than many small businesses were ten years ago.
It's true. Ten years ago, if you wanted 5G failover, you were buying a thousand-dollar Cradlepoint router and a corporate data plan. Now, you can do it with a thirty-dollar GL-iNet travel router and a "pay as you go" SIM. The democratization of this high-availability hardware is what makes the twenty twenty-six home lab so much more powerful than the home labs of the twenty-tens.
So, let's break this down into some actual, actionable steps for the listeners. Because we've covered a lot of ground. If I'm starting from scratch, or if I have a "messy" home lab and I want to get to that "sleeping soundly on vacation" state, what is Step One?
Step One is "Separate your monitoring." Do not host your uptime checks on the machine you are checking. Go get a free account at Healthchecks dot io or set up a tiny VPS. Get that "Heartbeat" established. If you do nothing else, just having a notification that says "Your server stopped talking to the internet" is eighty percent of the battle. It eliminates the "uncertainty." You won't find out your server is down three days later when you finally try to use it.
Step Two?
Step Two is the "Insurance Policy." If you have a primary server that is critical to your life or business, buy a NanoKVM or a PiKVM. Yes, it feels expensive to spend fifty or eighty dollars on a little box that just sits there doing nothing. But the first time your server hangs while you are away, and you can just pull up a browser and hit "Reset" or see the error message on the screen, it will pay for itself ten times over in saved stress.
I'd add a Step Three here: "Simplify your entry path." If you are still using port forwarding and Dynamic DNS, stop. It's twenty twenty-six. Install Tailscale or set up a Cloudflare Tunnel. It's more secure, it's more reliable, and it works through almost any network environment. It takes ten minutes to set up and it removes an entire category of "why can't I connect" problems.
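For reference, the ten-minute Tailscale setup really is about this short. The install script URL is Tailscale's official one; the subnet route is an example value:

```
# Install Tailscale and join your tailnet (official convenience script)
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --ssh    # --ssh also lets you SSH in over the tailnet

# Optional: advertise the home LAN so this node acts as a subnet router
# (requires IP forwarding on the host and route approval in the admin console)
sudo tailscale up --ssh --advertise-routes=192.168.1.0/24
```

The subnet-router step is what lets you reach devices that can't run Tailscale themselves, like a NanoKVM or a printer, through the one node that does.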
I love that. And Step Four is the "Emergency Exit." If you really are a "crazy person" or you're running a small business, look into a cheap cellular failover. A lot of people don't realize their phone's hotspot can actually be bridged to their home network in an emergency. There are these great little GL dot iNet travel routers that can act as a repeater off a phone and provide internet to your whole rack if your main ISP dies.
It’s that "layered" approach. You don't need a single perfect solution; you need three "good enough" solutions that fail in different ways. If the software fails, use the KVM. If the internet fails, use the 5G. If the hardware fails, well, at least you have a notification telling you it's dead so you can stop wondering why your apps aren't working.
And that is the "Resilience Mindset." It's moving away from "Maximum Data"—tracking every single CPU temperature and fan speed—toward "Maximum Availability." I don't care if my CPU is at forty degrees or fifty degrees while I'm on a beach. I only care if the heartbeat stops.
It's focusing on the outcomes rather than the metrics. I think we often get lost in the metrics because they're fun to look at. We like the glowing graphs. But the graphs are a distraction if they don't help you actually solve the problem of "is my data safe and can I get to it?" I’ve seen people spend weeks setting up dashboards for their hard drive temperatures, but they don't have a single off-site backup. That’s just misplaced energy.
Right. We've talked about hardware failure before, specifically in episode four ninety-three where we looked at predicting hardware failure. But that was about avoiding the crash. Today's discussion is about what happens after the crash. Because no matter how much you predict, things will eventually break. A power surge, a bug in an update, a literal bug crawling into a power supply—it happens.
I'm curious about your take on the "small business" side of this, Herman. Because a lot of what we're talking about is very "home lab" coded. But if I'm a small law firm or a dentist's office, can I really rely on a NanoKVM and Tailscale? Or is that "unprofessional"?
Honestly, for many small businesses, this is better than what they currently have. Most small businesses have a server in a back room and they rely on a local IT guy who comes out every two weeks. If that server goes down on a Friday night, they are dead until Monday. If they had a NanoKVM, their IT guy could fix it from his house in ten minutes. And Tailscale is actually incredibly "enterprise-ready." They have amazing access control lists and security features. It's far more secure than the "old way" of using a clunky, unpatched VPN on a consumer router.
But what about compliance? If I'm a dentist, I have HIPAA to worry about. Does using a "good enough" DIY stack like Tailscale and a NanoKVM open me up to legal trouble? Or is it actually more compliant because it's encrypted and audited?
It’s actually much easier to secure. Legacy VPNs often have massive vulnerabilities because they aren't updated. Tailscale is based on WireGuard, which is the modern standard for security. As long as you have logging enabled and you use strong multi-factor authentication on your Tailscale account, you are in a much better position than someone using an old Cisco box from 2014. The "professional" move isn't about how much the hardware costs; it's about how well the security model is implemented.
So it's not just "good enough" for hobbyists; it's actually "better than the legacy way" for everyone. Is there any specific hardware you’d recommend for a small business that wants to bridge that gap?
I’d look at the "TinyMiniMicro" nodes—the refurbished Lenovo ThinkCentres or HP EliteDesks. They are built like tanks, they are cheap, and they have very reliable power supplies. Pair one of those with a NanoKVM and a Tailscale subnet router, and you have a setup that is more reliable than most "server" hardware from five years ago.
I think there's also something to be said for the "peace of mind" of knowing exactly what your recovery path is. I've been in that situation where something breaks, and my brain starts racing: "Okay, did I enable SSH? Is the firewall blocking it? Is the router down? Did the whole house lose power?" If you have these "good enough" tools, you just go down the checklist. Check Healthchecks—is the heartbeat there? No. Check the 5G backup—can I reach the router? Yes. Okay, it's a fiber outage. Or, check the NanoKVM—can I see the screen? Yes, it's a kernel panic. It turns a "crisis" into a "task."
It's about reducing the "cognitive load" of a failure. When things are breaking, you are already stressed. You don't want to be playing detective. You want a dashboard—a simple one—that tells you exactly where the chain is broken.
Let’s talk about a tangent for a second. Have you ever seen those "watchdog" timers? I’ve seen some people use them as the ultimate "good enough" reset button. It’s a piece of hardware that just waits for the OS to say "hello," and if it doesn't hear it for sixty seconds, it physically cuts the power and restarts it. Is that too aggressive?
It’s a bit of a blunt instrument. The problem with a hardware watchdog is that it can cause file system corruption if it reboots while a database is mid-write. I prefer the "manual" remote reset through a KVM or a smart plug, because it gives you a chance to see the error message first. If you just auto-reboot, you might be masking a failing drive or an overheating CPU until it finally dies for good.
If the server reboots and comes back up, you might think "oh, it was just a glitch," and then you go back to your vacation. But if it was actually a warning sign of a dying power supply, you're just kicking the can down the road. You want to see the "why" before you hit the "reset."
And that's why the NanoKVM is superior to a simple smart plug. A smart plug is a blind reset. A KVM is an informed intervention. You can see if it’s a "kernel panic - not syncing" or if it’s just a hung service. You can even drop into a recovery shell and fix a broken config file without ever needing to cycle the power.
So, looking forward, where do you see this going? As we move further into twenty twenty-six and beyond, do you think these "out-of-band" tools will just become standard? Will my next mini PC come with a NanoKVM built into the motherboard?
We're already seeing hints of that. Some of the higher-end mini PC manufacturers are starting to include basic remote management features. But honestly, the "external" box will always have a place because it is truly independent. If the motherboard fails, an "on-board" management chip might fail too. An external KVM that draws its own power? That is the ultimate survivor.
It’s the "external observer" problem. You can't truly observe a system from within the system. You need that outside perspective.
And I think we'll see more "intelligence" in the heartbeat checks. Not just "am I alive," but "am I behaving correctly?" We're starting to see tools that can detect "silent" failures—like a database that is responding but returning empty results, or a drive that is technically "up" but showing massive latency. But again, for the average person, "is it up and can I get in" is ninety-nine percent of the value.
Well, I feel a lot better about my closet full of blinking lights now. It's not about having the most complex setup; it's about having the most resilient one. And sometimes, the most resilient thing is a simple cron job and a thirty-dollar RISC-V box.
It really is. It’s about being "plain crazy" in a smart way. If you are going to be "over-teched," at least be over-teched in the direction of reliability.
I love that. "Over-teched for reliability." That should be the motto of the home lab community.
It's better than my current motto, which is "Let's see what happens if I run this command as sudo."
Yeah, that usually leads to needing the NanoKVM.
Every single time. But hey, that's how we learn, right? You haven't really lived until you've accidentally deleted your own network configuration while connected via SSH.
I think every person listening to this has had that "sinking feeling" where you press Enter and the cursor just... hangs. And you realize you just deleted the firewall rule that allows your current connection.
Oh, the "Self-Lockout." It’s a rite of passage. I once did that to a server in a different state. I had to call a guy in the data center and describe over the phone which cable to pull. It was humiliating. If I’d had a NanoKVM back then, I could have fixed my mistake in thirty seconds and nobody would have known I was an idiot.
And then you're sitting there, staring at a terminal that isn't responding, knowing that you just cut the branch you were sitting on. It’s a very specific kind of silence. The silence of a broken connection.
That "hollow" feeling in your stomach when the cursor stops blinking. That is the moment the NanoKVM was built for. It's the "undo" button for hardware.
Well, I think we've given people a lot to chew on. This has been a really practical dive into something that could have been very dry, but actually touches on the core of why we do this. We want to own our tech, but we don't want our tech to own our time.
Or, as I should say... precisely. Wait, no, I'm not allowed to say that. Let's just say, you hit the nail on the head, Corn. It's about shifting the balance of power back to the human.
Before we wrap up, I want to leave the listeners with a question: What is the most creative "good enough" monitoring or access solution you have seen in the wild? I've heard of people using smart plugs to hard-reboot their routers, or even using a physical "robot finger" to push a power button. There are some wild DIY solutions out there.
Oh, the smart plug trick is a classic. Just make sure the smart plug isn't controlled by the same Wi-Fi that the router provides, or you'll have a very bad time. You'll turn it off and then realize you can't turn it back on. I actually saw a guy use an old Android phone with a cracked screen as a dedicated "failover" modem. He just left it plugged into the router via USB tethering 24/7. It was ugly, it was janky, but it worked perfectly when his fiber went down.
I've done that. It's a very quiet moment of self-reflection. You’re looking at this mess of wires and thinking, "I am a genius," but also, "I hope no one ever sees this."
"I have outsmarted myself."
Truly. One last thing—do you think there's a limit to "good enough"? Like, at what point does a home lab actually need the enterprise stuff? Is it when you start charging people for access?
That’s exactly the line. The moment someone else is paying you for uptime, "good enough" isn't good enough anymore. If you're running a business where every minute of downtime costs you a hundred dollars in lost sales, then you pay for the redundant power supplies, the enterprise support contracts, and the geographically redundant data centers. But for ninety-five percent of the people listening to this, a VPS and a KVM are a more than adequate shield against the chaos of the universe.
Well, this has been a great one. Thanks to Daniel for the prompt—he always knows how to get us into the weeds of practical tech.
And thank you to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes. Big thanks also to Modal for providing the GPU credits that power this show. It’s amazing what we can do with these tools in twenty twenty-six.
This has been My Weird Prompts. If you enjoyed the show, we are on Spotify, so hit that follow button if you haven't already. It really helps us out.
You can also find us at myweirdprompts dot com for our full archive and RSS feed. We'll see you in the next one.
Stay resilient, everyone.
And keep those heartbeats coming. Goodbye.