Daniel sent us this one — and I have to say, this is a question that's been hiding in plain sight for a lot of developers. He's been building on Vercel and serverless, came from WordPress, went through the whole hair-pulling deployment phase that everyone goes through, and then he noticed something strange. He pushed a change, reloaded his production page, and it just appeared. From a database. On what he thought was a static serverless site. His question is basically, what just happened? And more specifically, what are the two fundamentally different methods for getting data from a backend to the frontend in a serverless environment? Because clearly there's more than one way this works, and he's actually come to prefer the slower one.
I love that he noticed this organically, because most people don't. They just accept that their site works and move on. But the moment you see content appear instantly on production and think, wait, I thought this was all pre-built static files — that's when you've actually understood the architecture enough to be confused by it. Which is the best kind of confusion.
It's the confusion that means you're paying attention. And by the way, today's episode is powered by DeepSeek V four Pro writing our script. So if anything sounds unusually coherent, that's why.
Alright, so let's name the two methods Daniel is dancing around, because they have actual names and they represent fundamentally different philosophies about how websites should work. The first one — the "slow" one he says he's come to prefer — is called static site generation, or S S G. The second one, the one that surprised him when content appeared instantly, is server-side rendering, or S S R. And there's a third hybrid that Vercel popularized called incremental static regeneration, I S R, which is probably what he actually experienced.
Three methods, not two. Daniel's going to be annoyed.
I S R is the bridge between the two, and it's what makes the whole thing interesting. But let's start with the pure versions. Static site generation is exactly what it sounds like. You have content — blog posts, product pages, whatever — and at build time, your framework generates every possible H T M L page as a flat file. Those files sit on a C D N somewhere, and when a user requests a page, the server just hands them the file. No database query, no server-side processing. It's the fastest possible way to serve a webpage.
This is what he meant by "a static slab of files onto the server." You run a build command, your framework pulls all the content from whatever headless C M S or database you're using, renders every page, and deploys the whole thing as finished H T M L. The server is basically a very fast file cabinet. And the security model Daniel mentioned — that's a real thing. When you serve static files, there's no server-side code executing on each request. No P H P interpreter, no database connection string exposed, no S Q L injection surface. The attack surface shrinks to basically the web server itself and whatever third-party JavaScript you've loaded on the client.
The trade-off is obvious. If you want to update a blog post or fix a typo, you have to rebuild the entire site. For a site with ten pages, that takes seconds. For a site with ten thousand pages, that's a problem. And if you're running an e-commerce store where inventory changes by the minute, static generation is basically useless. That's where server-side rendering comes in. With S S R, when a user requests a page, the server runs your application code in real time, queries the database, renders the H T M L, and sends it back. Every request triggers a fresh render. So if you update your inventory count, the next person who loads the product page sees the new number immediately. No rebuild required.
This is the WordPress model, essentially. Traditional WordPress is server-side rendered P H P. You hit publish, the database is updated, and every subsequent request pulls the latest content. That's why Daniel had that moment of recognition — he saw something that felt like WordPress but was running on serverless infrastructure, and his brain correctly flagged it as interesting.
Here's where the terminology gets sloppy. When people say "serverless," they usually mean "I'm not managing a server." But the computation still happens somewhere. Vercel and Netlify run serverless functions — little bits of code that execute on demand, do some work, and then shut down. Those functions can do server-side rendering. So you can have a site that feels static but is actually rendering pages on the fly through serverless functions hitting a database. That's the magic Daniel stumbled into.
Let's make this concrete. Daniel's using Astro, or possibly Next, on Vercel. He pushes a change. The build runs, but not everything is pre-rendered. Some pages are marked as server-rendered. When he reloads his browser, the request hits Vercel's edge network, which runs a serverless function, which queries his database — probably something like Neon, which is serverless Postgres — and renders the page with fresh data. The whole thing takes a few hundred milliseconds, and to Daniel it just looks like the content appeared.
This is where Neon is genuinely interesting. Traditional Postgres expects persistent connections. Serverless functions are ephemeral — they spin up, run for maybe a second, and disappear. You can't have a thousand serverless functions each holding open a database connection. Neon solves this by putting a connection pooler in front of Postgres designed specifically for serverless workloads. It's not just a convenience — it's an architectural necessity if you want server-side rendering with a relational database in a serverless environment.
The stack Daniel's working with is probably something like: Astro or Next as the framework, Vercel as the deployment platform, Neon as the database, and some headless C M S — maybe Strapi or Sanity or Contentful — as the content layer. The framework decides at the page level whether to pre-render at build time or render on demand.
That decision is the whole game. Let's talk about the trade-offs, because Daniel said he's come to prefer the slower method, and I want to unpack why that's not actually counter-intuitive.
Before we do, let's name the third method properly. Incremental static regeneration — I S R — is the compromise. You pre-render your pages at build time, just like S S G. But you also set a revalidation interval — say, sixty seconds or ten minutes. When a request comes in after that interval has passed, the server serves the stale cached page immediately, but in the background it triggers a rebuild of that specific page. The next request gets the fresh version. Only the pages that actually get traffic are re-rendered.
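If you sketched that logic yourself, it would look something like this. This is a toy stale-while-revalidate cache in plain JavaScript, not Vercel's actual implementation, and all the names are made up:

```javascript
// Toy stale-while-revalidate cache illustrating the I S R pattern.
// renderPage and the cache shape are illustrative, not a framework API.
function createIsrCache(renderPage, revalidateSeconds, now = () => Date.now()) {
  const cache = new Map(); // path -> { html, builtAt }

  return async function get(path) {
    const entry = cache.get(path);

    if (!entry) {
      // First request for this path: render and cache (blocks once).
      const html = await renderPage(path);
      cache.set(path, { html, builtAt: now() });
      return { html, stale: false };
    }

    const fresh = (now() - entry.builtAt) / 1000 < revalidateSeconds;
    if (!fresh) {
      // Past the revalidation window: serve the stale page immediately,
      // and rebuild this one page in the background.
      renderPage(path).then(html => cache.set(path, { html, builtAt: now() }));
      return { html: entry.html, stale: true };
    }
    return { html: entry.html, stale: false };
  };
}
```

The stale request gets the old page instantly; only a later request sees the rebuilt one. That's exactly the trade I S R makes.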
This is almost certainly what Daniel experienced. He made a change, the I S R revalidation kicked in, and the page refreshed with new content. It felt like S S R, but underneath it was still serving static files most of the time. Vercel has been pushing this model hard because it gives you the speed of static with the freshness of server-rendered, and you only pay for compute when pages actually need rebuilding.
Let's get into the trade-offs. Daniel said he prefers the slower method — meaning static generation with a deliberate build step. Why would anyone prefer waiting?
I think there are at least three good reasons, and they're not obvious until you've been burned by the alternatives. First: predictability. When you do a full static build, you know exactly what's going to be served. You can run your test suite against the built output. If something breaks, it broke at build time, and you can fix it before anything hits production. With S S R or I S R, you're essentially running code in production on every request, and errors can be intermittent, data-dependent, and much harder to catch.
The staging argument. Daniel mentioned he likes being able to stage his changes, see that they work, and then deploy. That's the static model. You build, you preview, you verify, you promote to production. The promotion itself is just pointing a domain at a different set of files. If something goes wrong, you roll back by pointing at the previous set of files. It's clean and reversible in a way that server-rendered rollbacks are not.
Second reason: cost predictability. Static files served from a C D N are basically free at any scale. Cloudflare's free tier will serve unlimited static assets. But serverless functions cost per invocation and per gigabyte-second of compute time. If your site goes viral, your static hosting bill doesn't change. Your serverless function bill could spike dramatically. For a personal blog, the difference might be negligible. For anything with real traffic, it adds up.
There was a story a couple years ago about a developer who accidentally racked up a thirty thousand dollar bill on Vercel because of a misconfigured serverless function that was getting invoked in a loop. Static sites don't have that failure mode. You can't accidentally infinite-loop your way into bankruptcy with flat files.
Third reason, and this is the one I think matters most: cognitive simplicity. A static site is just files. You can host them anywhere. You can move from Vercel to Netlify to Cloudflare Pages to a plain Nginx box in about five minutes. There's no vendor lock-in, no platform-specific function signatures, no middleware that only works on one provider. The portability is genuine in a way that server-rendered serverless is not.
That's the WordPress lesson in reverse, actually. One of the reasons people flee WordPress is that they feel locked into the P H P and MySQL stack. But serverless S S R can create its own form of lock-in. Next middleware that works beautifully on Vercel might not work at all on Netlify, or it might work differently, or it might cost more. The surface area for platform-specific behavior is much larger when you're executing code at request time.
I should say — I'm not anti S S R. There are use cases where it's clearly the right call. Anything that needs real-time personalization — a dashboard showing live data, a social media feed, an e-commerce site where inventory actually matters — those can't reasonably be static. But for the kind of content site Daniel seems to be building, the static-plus-I S R model gives you almost all the benefits of server rendering with very few of the downsides.
Let's talk about the data flow, because that was the core of Daniel's question. How does data actually get from the database to the frontend in these different models?
In the static model, the data flow happens entirely at build time. Your build script connects to the C M S or database — usually through an API — pulls all the content, and generates H T M L. The database credentials never leave your build environment. The frontend never talks to the database directly. The only thing that reaches the user is finished H T M L, C S S, and JavaScript. It's a one-way pipe that gets sealed shut after the build completes.
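As a sketch, that build-time pipe is just a loop. Here fetchPosts stands in for whatever headless C M S client you're using, and writeFile for the filesystem; both names are made up:

```javascript
// Build-time data flow: pull content once, emit finished HTML.
// fetchPosts stands in for a headless CMS client; its credentials
// live in the build environment and never ship to the browser.
async function buildSite(fetchPosts, writeFile) {
  const posts = await fetchPosts(); // runs once, at build time
  for (const post of posts) {
    const html =
      `<!doctype html><html><head><title>${post.title}</title></head>` +
      `<body><article><h1>${post.title}</h1>${post.body}</article></body></html>`;
    await writeFile(`/dist/${post.slug}.html`, html);
  }
  return posts.length; // number of pages emitted
}
```

Once this finishes, the output is inert files. There's nothing left to query.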
That's the security model Daniel was talking about. Your database can be completely firewalled off from the public internet. Only the build server needs access. If someone compromises your frontend, they get static files. They can't pivot to your database because there's no connection string to steal, no API endpoint that proxies queries. The attack surface is the build pipeline itself, which is a much smaller target.
In the server-rendered model, the data flow is live. Every request triggers a connection — or at least a query — to your database. Your serverless function holds credentials. It constructs queries based on the incoming request. The frontend never sees the database directly — there's still a server in between — but the server is running on every request, and that means your database is effectively exposed to the internet through that serverless function.
This is where the security gets nuanced. A well-configured serverless function with parameterized queries, proper authentication, and rate limiting is perfectly secure. But you've now got more moving parts that need to be secured. The function's environment variables, the function's I A M role if you're on A W S, the database's connection pooler, the network path between the function and the database. Each of those is a potential misconfiguration. Static sites eliminate entire categories of those misconfigurations.
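To make "well-configured" concrete, here's roughly what that request-time flow looks like as a handler. The db object stands in for a pooled client like Neon's serverless driver, and the key detail is the dollar-one placeholder, which keeps request input out of the S Q L string:

```javascript
// Request-time data flow: a serverless handler with a parameterized query.
// `db` stands in for a pooled database client; the handler shape is
// illustrative, not any platform's real signature.
async function productHandler(req, db) {
  const slug = req.params.slug;
  // Never interpolate slug into the SQL string; pass it as a parameter.
  const rows = await db.query(
    'SELECT name, stock FROM products WHERE slug = $1',
    [slug]
  );
  if (rows.length === 0) return { status: 404, body: 'Not found' };
  const p = rows[0];
  return { status: 200, body: `<h1>${p.name}</h1><p>${p.stock} in stock</p>` };
}
```

The parameterized query is one of the things you have to get right on every request; in the static model, there is no equivalent to get wrong.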
There's another data flow pattern that's become popular: client-side fetching. Instead of rendering on the server at all, you serve a mostly-empty H T M L shell and then have JavaScript in the browser fetch data from an API and render the page. The build step produces static files, but those files are just application shells. The actual content is loaded dynamically by the browser.
This is the single-page application model. And it's worth mentioning because it's the worst of both worlds for a lot of content sites. You get none of the S E O benefits of pre-rendered H T M L, you're dependent on client-side JavaScript for content to appear, and you're still running a live API that needs to be secured and scaled. But for authenticated dashboards or interactive tools, it makes perfect sense.
The S E O point is not trivial. Google does render JavaScript now, but it's slower, it consumes crawl budget, and other search engines and social media link previews often don't execute JavaScript at all. If you care about search traffic or social sharing, you want your content in the initial H T M L payload. That means either S S G, S S R, or I S R. Client-side fetching should be reserved for content that needs to be dynamic.
Let's circle back to Daniel's specific experience. He's using a coding agent — he mentioned Claude Code — and he pushed a change that involved a database update. The page reloaded and the content was there. He asked the agent what happened, and it said something about "atomic building." What's actually going on under the hood?
"Atomic building" is probably a reference to how Vercel handles deployments. Every deployment gets a unique U R L, and the domain is pointed at the deployment atomically. There's no partial deployment state where half your files are new and half are old. But that doesn't explain the live data. The live data part is almost certainly I S R or on-demand revalidation.
On-demand revalidation is worth explaining. Normally, I S R revalidates on a timer — every sixty seconds, every ten minutes, whatever you set. But most frameworks now support an on-demand revalidation API. When your C M S updates content, it can send a webhook to your deployment platform saying "revalidate this specific page." The page gets rebuilt immediately, and the next request sees the new content. No timer involved.
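Stripped of framework specifics, the webhook side is small. In this sketch the revalidate callback stands in for the platform call (Next's revalidatePath, for example), and the header name and secret are made up:

```javascript
// On-demand revalidation webhook, framework-agnostic sketch.
// The CMS calls this with a shared secret and the changed slug;
// `revalidate` stands in for the platform's cache-invalidation call.
async function handleCmsWebhook(req, secret, revalidate) {
  if (req.headers['x-webhook-secret'] !== secret) {
    return { status: 401, body: 'bad secret' }; // reject unauthenticated pings
  }
  const slug = req.body && req.body.slug;
  if (!slug) return { status: 400, body: 'missing slug' };
  await revalidate(`/blog/${slug}`); // rebuild just this one page
  return { status: 200, body: `revalidated /blog/${slug}` };
}
```

The shared secret matters: without it, anyone could hammer your rebuild pipeline.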
That's the workflow Daniel described. He updates something in his database or C M S. A webhook fires. Vercel receives it, rebuilds the affected page, and invalidates the cache for that page. When he reloads his browser, he gets the freshly built version. It feels instantaneous, but it's still serving static files — just freshly regenerated static files. The build is targeted to a single page, not the entire site.
This is why I S R is such a good fit for content sites. You get the speed and security of static files for the vast majority of requests, but you also get near-instant updates when content changes. And you're not paying for server-side rendering on every request — you're only paying for the occasional rebuild of pages that actually changed.
Let me put some numbers on this. A static site on Vercel's Pro plan — twenty dollars a month — includes one terabyte of bandwidth and six thousand build minutes. For a content site doing fifty thousand page views a month, you might use a couple hundred build minutes and a few gigabytes of bandwidth. You're well within the limits. The same site doing server-side rendering on every request would need serverless function invocations for every page view. Fifty thousand page views at, say, three function invocations per page — that's a hundred fifty thousand invocations, still within most free tiers. But scale that to five million page views and you're looking at real money.
That's assuming your functions are efficient. If each page render takes five hundred milliseconds of compute time, you're also paying for gigabyte-seconds. The static site's cost curve is basically flat. The server-rendered site's cost curve slopes up with traffic. For most projects, the difference is theoretical. But if you're building something that might get a Hacker News front page, the static model is insurance against a surprise bill.
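To make that curve tangible, here's a back-of-envelope model. The rates are illustrative assumptions, not any provider's actual pricing:

```javascript
// Toy serverless cost model: invocations plus gigabyte-seconds of compute.
// All default rates are illustrative assumptions, not real pricing.
function renderCost(pageViews, {
  invocationsPerView = 3,
  secondsPerRender = 0.5,
  gbMemory = 1,
  pricePerMillionInvocations = 0.60,
  pricePerGbSecond = 0.0000166
} = {}) {
  const invocations = pageViews * invocationsPerView;
  const gbSeconds = invocations * secondsPerRender * gbMemory;
  return (invocations / 1e6) * pricePerMillionInvocations
       + gbSeconds * pricePerGbSecond;
}
```

At fifty thousand page views the number is pocket change; at five million it's triple digits. The static site's line stays at roughly zero the whole way.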
There's a broader point here about the pendulum swinging back toward static. For about five years, the industry narrative was that everything should be server-rendered, everything should be dynamic, JAMstack was dead, and S S R was the future. And then people started noticing the complexity and the costs and the debugging nightmares, and the pendulum started swinging back. What's actually happening is a more nuanced understanding that different pages on the same site should use different rendering strategies.
This is the real insight. The question isn't "should my site be static or server-rendered?" The question is "which pages should be static and which should be server-rendered?" Your about page and your blog posts can be statically generated. Your search results page probably needs to be server-rendered. Your product pages might use I S R with a short revalidation interval. Your shopping cart is client-side. The frameworks now support all of these on a per-page or per-route basis.
That's the thing Daniel's coding agent probably didn't explain well. When you're using Astro or Next or Nuxt, you're not picking one rendering strategy for the whole site. You're annotating individual pages or components with their rendering mode. In Next's app router, you export a constant called "revalidate" to set the I S R interval, or mark a route as "force-dynamic" for S S R; in the older pages router, "getServerSideProps" is what opts a page into S S R. You don't export anything for pure static. The framework handles the rest.
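In Next's app router, for example, those annotations come down to roughly one export per route file. The paths here are illustrative, and the exact options vary by framework version:

```javascript
// app/blog/[slug]/page.js: pre-rendered, rebuilt at most every sixty seconds (I S R)
export const revalidate = 60;

// app/search/page.js: rendered fresh on every request (S S R)
export const dynamic = 'force-dynamic';

// app/about/page.js: export nothing and the route is static by default
```

Astro does the same job with a per-page prerender export when the site runs in server mode. Either way, the strategy lives next to the page it governs, not in a site-wide switch.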
Let's talk about the deployment pain Daniel mentioned, because that's the part of the experience that scares people away. He said it took him a very long time to come around to serverless, and I think his description of the debugging process is painfully accurate. You build locally, it works, you push to GitHub, the deployment fails because the Node version doesn't match, or a dependency is missing, or an environment variable isn't set, or the build script is different on the C I server than on your machine.
This is the "it works on my machine" problem, and serverless deployments make it worse because the build environment is a black box. On your local machine, you have control. You can see the file system. You can add console logs. On Vercel or Netlify, you push and you stare at a build log and hope it turns green. When it turns red, you get an error message that may or may not be helpful, and you start the cycle of commit, push, wait, fail, fix, commit, push, wait.
Daniel's point about coding agents being supreme for this is actually interesting. A coding agent that can see the build logs and iterate on fixes without you manually doing the commit-push-wait cycle — that changes the experience. The agent can catch the Node version mismatch, the missing environment variable, the undeclared dependency, and fix them in sequence. What used to be hours of frustrating debugging becomes a few minutes of the agent churning through errors.
The agent is only as good as its understanding of the platform. And this brings me to something I think is underappreciated: the build configuration is part of your application logic now. It's not just a convenience layer. The way you configure your framework's rendering strategy, your environment variables, your C D N headers, your redirects — those are all application-level decisions that affect security, performance, and user experience. They deserve the same care as your application code.
The "it's just a config file" mindset leads to sloppiness. I've seen projects where the entire security posture depends on a single line in a vercel.json or netlify.toml that nobody on the team fully understands. When something breaks, they don't even know where to look.
Let me give a concrete example. C O R S — cross-origin resource sharing. If you're serving your frontend from a static domain and your API from a serverless function domain, you need to configure C O R S correctly. Get it wrong, and your API either doesn't work or it works for everyone including malicious sites. The build configuration is where that lives.
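To Daniel's point about single load-bearing lines, a C O R S rule in a vercel.json looks roughly like this. The domain is illustrative; the load-bearing part is that Allow-Origin names your own frontend instead of a wildcard:

```json
{
  "headers": [
    {
      "source": "/api/(.*)",
      "headers": [
        { "key": "Access-Control-Allow-Origin", "value": "https://www.example.com" },
        { "key": "Access-Control-Allow-Methods", "value": "GET, POST, OPTIONS" }
      ]
    }
  ]
}
```

Swap that value for an asterisk and every site on the internet can call your API from a visitor's browser. One line, whole security posture.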
Let's bring this back to Daniel's question about the two methods and why he prefers the slower one. I think what he's really saying is that he prefers a workflow where the build step is explicit, the output is inspectable, and the deployment is a deliberate act rather than a continuous trickle of changes. There's a philosophical preference for knowing exactly what's on production at any given moment.
That's not just a preference — it's a quality control strategy. When you have continuous deployment set up so that every push to main goes straight to production, you're betting that your test suite catches everything. Most test suites don't. The static build model gives you a natural gating point. You build, you look at the output, you verify, and then you deploy. The build step is your quality gate.
The counter-argument, of course, is speed. If you're a news site covering a breaking story, you can't wait for a full static rebuild. You need content on the page in seconds, not minutes. But most projects aren't news sites. Most projects can tolerate a two-minute build if it means catching a broken layout before it hits production.
This is where I S R with on-demand revalidation is such a good compromise. You get the quality gate of the initial build — you can preview and verify everything — and then subsequent content updates happen through targeted revalidation. You're not rebuilding the whole site when you fix a typo. You're rebuilding one page, and it happens in seconds.
Let's talk about the headless C M S piece, because Daniel mentioned it and it's central to how data flows in these architectures. A headless C M S is just a content database with an API. You write your posts in a web interface, it stores them, and it exposes them through a R E S T or GraphQL endpoint. Your static site generator pulls from that endpoint at build time. Your server-rendered pages query it at request time. The C M S doesn't care how you consume the content.
That decoupling is what Daniel was getting at when he talked about content living in different forms on different devices. Your blog post exists as structured data in the C M S. You can render it as H T M L for the web, as plain text for an email newsletter, as audio for a podcast, as a card in a mobile app. The content is separate from the presentation, and that's powerful. It's not just a different way to build websites — it's a different way to think about content.
The trade-off is that you now have two systems to manage instead of one. WordPress gives you content management and presentation in a single package. A headless C M S plus a static site generator gives you two systems with an API boundary between them. That boundary is powerful — it's what enables all the multi-platform flexibility — but it's also a source of complexity. When something breaks, you have to figure out whether the problem is in the C M S, the API, the build process, or the frontend.
That's the "technical mess" Daniel said he felt like he'd landed in. He's not wrong. It is more complex. But the complexity is manageable once you understand the pieces, and the benefits are real. The security model is better. The performance is better. The flexibility is better. The cost at scale is better. It's a higher upfront investment for a better long-term outcome.
Let's get specific about the security model, because I think it's the most underappreciated aspect of static architectures. A traditional WordPress site has a massive attack surface. The P H P runtime, the MySQL database, the WordPress core, the plugins, the themes — every one of those is a potential entry point. WordPress itself is reasonably secure, but the plugin ecosystem is a disaster. Every plugin you install is code running on your server with access to your database. A vulnerability in any plugin can compromise your entire site.
WordPress sites get hacked constantly. Not because WordPress is inherently insecure, but because the attack surface is enormous and the incentive to find vulnerabilities is high — it powers something like forty percent of the web. A static site eliminates almost all of that. There's no runtime to exploit. No database to S Q L inject. No admin panel to brute force. The worst an attacker can do is deface your static files, and if you've got proper access controls on your deployment pipeline, even that is extremely difficult.
The admin panel point is worth dwelling on. Every WordPress site has a login page at slash W P dash admin. It's the most attacked U R L on the internet. Bots constantly probe it with default credentials and known exploits. With a static site, your C M S admin panel can be on a completely different domain, behind a V P N, with no public access at all. The content editors log in to the C M S, make changes, and the changes flow to the public site through the build pipeline. The public site has no login page, no admin interface, no authentication system to attack.
This is the model that makes sense to Daniel. He said "something about that fundamentally secure security model makes sense to me." He's right. It's not just a feeling — it's a genuine reduction in attack surface that has real security implications.
Now let's talk about the other side of this, because I don't want to paint server-side rendering as insecure. It's not. A well-built server-rendered application with proper input validation, parameterized queries, C S R F protection, and a good web application firewall is perfectly secure. But you have to get all of those things right, and you have to keep getting them right as the application evolves. Static sites eliminate the need to get most of those things right in the first place.
The principle is "secure by default" versus "secure by configuration." Static is secure by default. Server-rendered is secure if you configure it correctly. Most security breaches happen because of misconfiguration, not because of fundamental flaws in the technology. Reducing the number of things that can be misconfigured is a genuine security win.
Let's address the question head-on. Daniel, you asked about the two different methods for getting data from a backend to a frontend in a serverless environment, and why you can see database changes appear in real time on what you thought was a static site. The two methods are static generation, where data is pulled at build time and baked into H T M L files, and server-side rendering, where data is pulled at request time and the page is rendered fresh for each visitor. What you actually experienced was almost certainly incremental static regeneration, which is a hybrid — pages are pre-built but can be individually re-rendered when content changes, either on a timer or through a webhook.
The reason you came to prefer the slower method — the explicit build step — is that it gives you a quality gate, cost predictability, platform portability, and a reduced attack surface. Those are real benefits. The trade-off is that content updates aren't instantaneous unless you add I S R on top. But for most content sites, a sixty-second revalidation window is perfectly acceptable, and the benefits of static generation far outweigh the minor delay.
One thing we haven't mentioned is that the frameworks have gotten much better at making this whole thing feel seamless. When Daniel first started, he was probably dealing with manual configuration and error messages that didn't make sense. Now, Vercel's integration with Next and Astro is polished. You connect your GitHub repo, you set a few environment variables, and the platform figures out the rest. The build logs are more readable. The error messages are more actionable. The whole experience has improved dramatically.
The database integrations are a big part of that. Vercel's partnership with Neon, which Daniel mentioned, means you can provision a serverless Postgres database from within the Vercel dashboard, get connection strings automatically injected as environment variables, and have your serverless functions query it without worrying about connection pooling. It's not just that the technology works — it's that the integration has been thought through.
Here's the thing that still trips people up: serverless doesn't mean stateless. You can have state — it just lives in external services like databases and object stores. Your serverless function is stateless, but it can talk to stateful things. That's the mental model shift. In the old world, your server had state — it held database connections, session data, file caches. In the serverless world, the function is pure compute, and everything stateful is externalized.
That externalization is what makes the architecture work at scale. If your function is stateless, you can run a thousand copies of it in parallel without them stepping on each other. If your state is in a database, you scale the database independently from the compute. The separation of concerns is cleaner, but it requires you to think differently about where your data lives and how it flows.
Daniel's question about data flowing securely to the frontend — this is the answer. In a well-architected serverless setup, data flows from the database to the serverless function to the rendering engine to the H T M L response. The frontend never talks to the database directly. The database credentials never leave the server-side environment. The only thing the user's browser receives is the rendered page. That's the security boundary, and it's the same boundary that exists in traditional server-rendered applications. The difference is that in the static model, even that data flow only happens at build time, not on every request.
We should talk about edge rendering, because it's the next evolution of this. Vercel and Cloudflare and Netlify are all pushing rendering at the edge — running your server-side rendering code in data centers close to your users, not in a single origin region. The promise is that you get the freshness of S S R with the speed of a global C D N. But edge rendering means your database needs to be globally distributed too, or you're adding latency between the edge function and a distant database. That's a hard problem, and it's one of the reasons static generation still wins on raw performance.
The physics of it are inescapable. If your database is in us-east-one and your user is in Sydney, a server-side render is going to take at least the round-trip time between Sydney and Virginia plus the query time plus the render time. A static file served from a Sydney C D N node takes single-digit milliseconds. No amount of edge function optimization can beat the speed of light.
That's why the hybrid approach is so compelling. You serve static files from the C D N for the pages that don't change often. You use I S R for pages that change occasionally. You reserve true S S R for pages that need to be dynamic. You're not picking one strategy — you're picking the right strategy for each page based on its freshness requirements and traffic patterns.
To wrap up the core discussion: the two methods are static generation and server-side rendering, with incremental static regeneration as the bridge between them. Daniel's preference for the slower method is well-founded — it's more secure, more predictable, more portable, and cheaper at scale. The live-updating he experienced was I S R with on-demand revalidation, which gives you near-instant updates while still serving static files for most requests. And the whole ecosystem has matured to the point where you don't have to choose one strategy for your entire site — you can mix and match per page.
The deployment pain he described — that's real, but it's also a one-time cost. Once you understand the build pipeline and the platform's expectations, subsequent deployments are smooth. The coding agents help with the learning curve, but the underlying knowledge of what's happening during the build and how data flows is still worth having.
One last thought before we move on. Daniel said something interesting about coding agents being "supreme" for this kind of debugging. I think there's a deeper point here about the future of deployment. We might be heading toward a world where the build configuration is generated by the agent rather than written by hand. You describe what you want — "I have an Astro site with blog posts from Contentful, I want static generation for the blog with I S R for the homepage" — and the agent generates the framework config, the environment variables, the deployment settings.
That's already happening to some extent. Vercel's templates and Netlify's starters get you most of the way there. But the agent-generated config is only as good as the agent's understanding of the platform, and platform-specific quirks change faster than training data. I think we're still in the phase where you need to understand what the agent is doing, even if you're not writing every line yourself.
That's probably a good place to leave the technical discussion. The tools are getting better, but the fundamentals — static versus dynamic, build-time versus request-time, security boundaries and data flow — those haven't changed and probably won't. Understanding them is still the difference between a site that works and a site that works well.
Now: Hilbert's daily fun fact.
Hilbert: The average cumulus cloud weighs approximately one point one million pounds — roughly the same as one hundred elephants — and stays aloft because the weight is distributed across millions of tiny water droplets spread over a vast volume of rising warm air.
...right.
That's our episode. If you're building on serverless and trying to decide between static and server-rendered, the answer is probably both — static where you can, I S R where you need freshness, S S R where you need per-request dynamism. The frameworks support it, the platforms support it, and the security and cost benefits of leaning static are real.
If you're still in the hair-pulling deployment phase Daniel described — it gets better. The tooling improves every month, and the coding agents are helpful for the debugging cycle. Stick with it.
Thanks to our producer Hilbert Flumingtop, and thanks to DeepSeek V four Pro for the script. This has been My Weird Prompts. Find us at myweirdprompts.com or wherever you get your podcasts.
We'll be back soon with whatever Daniel sends us next.