#772: Beyond the Build: Can Static Sites Truly Scale?

Is your static site hitting a wall? Discover how modern frameworks handle thousands of pages without crashing your build pipeline.

Episode Details
Published
Duration
34:04
Audio
Direct link
Pipeline
V4
TTS Engine
LLM

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The evolution of web architecture has moved from the all-in-one "monolith" to a highly distributed, decoupled model. In the early days of the web, platforms like WordPress dominated by housing the database, logic, and display templates in a single box. While simple to deploy, these systems often struggled with traffic spikes, as every visitor triggered a fresh database query. Today, the industry has largely shifted toward "headless" systems, where content is managed in a database and delivered via API to a separate, high-performance front end.

The Limits of Static Generation

Static Site Generation (SSG) was the first major answer to the instability of monolithic sites. By building every page as a flat HTML file before a user even requests it, developers could offer unparalleled speed and security. However, as sites grow from hundreds to thousands of pages, a new problem emerges: the physics of data.

When a build process has to pull thousands of entries from a database and churn them into files, it hits a "memory wall." Standard build environments have finite RAM and time limits. If a site grows too large, the build process may crash or take so long that simple updates, like fixing a typo, become logistically impossible. This creates a bottleneck where the simplicity of static files begins to buckle under the weight of the content archive.

Hybrid Solutions and Islands Architecture

To solve the scaling issue, the industry has moved toward a middle ground known as Incremental Static Regeneration (ISR). Instead of rebuilding the entire site every time a change is made, ISR allows the system to pre-render only the most critical pages—such as the homepage or recent articles—while generating older archive pages on demand. Once an archived page is generated for the first visitor, it is saved as a static file for everyone else. This "stale-while-revalidate" logic ensures that the build process remains fast regardless of the total page count.

Furthermore, frameworks like Astro have introduced "islands architecture." This allows a page to remain 95% static while hosting small "islands" of dynamic interactivity, such as a search bar or a comment section. This prevents the common mistake of loading massive, monolithic JSON files for simple features, which can slow down both the build process and the user’s browser.
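The islands pattern described above might look like the following in an Astro page. This is an illustrative sketch, not code from the site discussed: the `SearchBar` component and its path are hypothetical, but the `client:idle` directive is Astro's standard way of marking an interactive island.

```astro
---
// Sketch of Astro's islands pattern (component name and path are illustrative).
// The page ships as static HTML; only the SearchBar island hydrates in the browser.
import SearchBar from "../components/SearchBar";
---
<article>
  <h1>Episode archive</h1>
  <!-- Static content: no JavaScript is shipped for this part. -->
</article>
<!-- client:idle defers hydration until the browser is idle. -->
<SearchBar client:idle />
```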

The Move to Edge Rendering

For the largest sites in the world, even hybrid static models may not be enough. The ultimate destination for massive scale is edge rendering. This involves running code on servers distributed globally, physically close to the user. By moving the "compute" part of the website to the edge of the network, developers can deliver dynamic, data-driven content with the same speed as a static file.

The transition from monolithic to headless, and eventually to edge-rendered architecture, represents a fundamental shift in how we think about the web. It is no longer about choosing between a database or a flat file; it is about choosing the right delivery method for the right piece of data at the right time.


Episode #772: Beyond the Build: Can Static Sites Truly Scale?

Daniel's Prompt
Daniel
"Herman and Corinne, I’d like to discuss headless CMS and the concept of decoupling the back-end from the front-end. We currently use Astro, Neon, and Vercel for our website, where each new episode triggers a deployment that pulls data from the database into a JSON file at build time. With hundreds of episodes already, I’m wondering about the scalability of this process. How far can static site generation be scaled? Can it handle the vast amounts of content found on a large news site, or do static sites reach an inherent choke point where they collapse under their own weight, requiring a return to traditional designs like SQL databases?"
Corn
Herman, I was looking at our build logs the other day for the website, and it really hit me just how much data we are shoving through that pipeline every time we hit publish. It is February twenty-second, twenty-twenty-six, and we have been at this for a long time. It is like this invisible factory that springs to life in the cloud, assembles a thousand pieces of content, stitches them into HTML, and then just disappears back into the ether.
Herman
Herman Poppleberry here. And yeah, it is a bit of a miracle of modern engineering, right? We are living in this era where the way we build the web has been completely turned inside out. It is not just about a server sitting in a basement anymore, spinning a hard drive every time someone clicks a link. It is about these massive, distributed systems that piece together a site from a dozen different sources in the blink of an eye. But you are right to look at those logs, Corn. Those logs are the heartbeat of the operation, and sometimes they skip a beat.
Corn
It is fascinating, but it also feels a bit precarious when you stop to think about it. Today's prompt from Daniel is about exactly that. He is looking at our current setup, where we use Astro as our framework, Neon for our database, and Vercel for our hosting and deployment. He is asking the big question. How far can this actually go? We have over seven hundred episodes now. Every time we add one, the system has to wake up, pull all that data from the Neon database, churn it into a single J-S-O-N file, and rebuild the entire site from scratch. Daniel is wondering if static site generation has an inherent choke point. Do we eventually reach a weight where the whole thing just collapses under its own gravity and we have to go back to traditional, database-driven designs like the old days of the web?
Herman
That is such a fundamental question for the modern web. It is really the tension between the simplicity of static files and the infinite scale of a live database. And you know, I love this topic because it is where the high-level theory of web architecture meets the messy, gritty reality of having a lot of content. We are talking about the physics of data.
Corn
Right, because when we started, it was fast. A few dozen episodes? The build took maybe thirty seconds. Now, it is taking a bit longer. Not much, but you can see the trend line creeping up. So let's start with the basics for a second. When we talk about a headless content management system or decoupling the back end from the front end, what are we actually doing? Why did the industry move away from the old WordPress model where everything lived in one big box?
Herman
The traditional model, like WordPress or Drupal in their original forms, is what we call a monolithic architecture. Think of it like a giant Swiss Army knife. The database, the logic that handles the content, the admin dashboard, and the templates that show you the website are all tied together in one application on one server. When a user visits a page, the server wakes up, realizes it doesn't have the page ready, asks the database for the post, runs a bunch of P-H-P code to wrap it in some H-T-M-L, and then sends it back. It is simple to set up, but it is heavy. It means every single visitor triggers a database query and a server process.
Corn
Which is why those sites used to crash when they got a sudden spike in traffic. I remember the old Digg effect or the Reddit hug of death. The database would just give up because it couldn't handle ten thousand people asking for the same thing at the exact same millisecond.
Herman
Exactly. It was incredibly inefficient. So the headless approach says, let's separate those concerns. The content lives in a database or a specialized content management system. That is the back end. But it does not have any front end. It is just an application programming interface, or A-P-I, that spits out raw data. Then, you have a separate front end, like our Astro site, that calls that A-P-I, takes the data, and decides how to display it. The front end doesn't care how the data is stored, and the back end doesn't care what the website looks like.
Corn
And the static part of that, the static site generation, or S-S-G, is the process of doing all that work before the user even shows up.
Herman
Right. Instead of building the page when the user asks for it, we build it when the content changes. We generate every single page as a flat H-T-M-L file. Then we put those files on a global content delivery network, or C-D-N. When a user visits, they are just downloading a file that is already sitting there, waiting for them. No database query, no server logic, no waiting for a script to run. It is incredibly fast and virtually uncrashable because you are just serving files from the edge of the network.
Corn
But that is where Daniel's concern comes in. If you have ten thousand pages, you have to generate ten thousand files. And if you are pulling all your episode data into one big J-S-O-N file at build time, like we are doing with Neon and Astro, that file gets bigger and bigger. It is not just the number of files; it is the amount of data the build process has to hold in its head at once.
Herman
It does. And there are actually a few different walls you can hit. The first one is the memory wall. When the build process starts on a server like Vercel's, it usually loads that data into memory to process it. If your J-S-O-N file grows to be hundreds of megabytes because you have thousands of episodes with full transcripts, your build environment might just run out of R-A-M and crash before it even starts generating the H-T-M-L.
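The fix for the memory wall Herman describes is to page through the database instead of loading every row at once. Here is a minimal sketch of that idea; `fetchEpisodePage` is a stand-in for a real query (e.g. a Postgres `LIMIT`/`OFFSET` query against Neon), and the in-memory array just simulates the table.

```typescript
// Sketch: paging through episodes at build time instead of holding
// the whole archive in one giant in-memory array.

type Episode = { id: number; title: string };

// Fake data source standing in for the database table.
const ALL_EPISODES: Episode[] = Array.from({ length: 758 }, (_, i) => ({
  id: i + 1,
  title: `Episode ${i + 1}`,
}));

// Stand-in for a real LIMIT/OFFSET database query.
async function fetchEpisodePage(offset: number, limit: number): Promise<Episode[]> {
  return ALL_EPISODES.slice(offset, offset + limit);
}

// Process one page at a time, so peak memory is bounded by the page
// size rather than by the total size of the archive.
async function buildAllPages(pageSize = 100): Promise<number> {
  let built = 0;
  for (let offset = 0; ; offset += pageSize) {
    const page = await fetchEpisodePage(offset, pageSize);
    if (page.length === 0) break;
    for (const ep of page) {
      built += 1; // a real build would render ep to an HTML file here
    }
  }
  return built;
}
```

The design point is that the build's memory footprint stays constant as the archive grows; only the total build time scales with content.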
Corn
I have seen that happen with some older static site generators. They just hang and then error out with a memory limit exceeded message. It is like trying to read a thousand-page book in one second. Your brain just locks up. But with our current stack, we are using Neon for the database. That is a serverless Postgres database. It is incredibly scalable on the storage side. The issue isn't the database, it is the bridge, right?
Herman
Correct. Neon can hold terabytes of data. No problem there. It is built on a storage engine that separates storage from compute, so it can scale up to handle massive queries. The bottleneck is the build step on Vercel. Vercel is the platform that runs our build and hosts the site. They have limits on how long a build can take. On their standard plans, if your build takes longer than, say, forty-five minutes, they just kill the process. They assume something has gone wrong.
Corn
Forty-five minutes sounds like a lot for a podcast website, but if you are a news site like the New York Times or the Guardian, and you have decades of archives, tens of thousands of articles, and you are trying to rebuild the entire site every time a journalist fixes a typo in a breaking news story? You are going to hit that forty-five minute limit very fast. You can't wait an hour for a typo fix to go live.
Herman
Oh, you would hit it in the first week. A site of that scale cannot use pure, old-school static site generation for every single page. It is mathematically impossible to rebuild everything every time. But this is where the technology has evolved, especially in the last couple of years leading up to twenty-twenty-six. We are not stuck with just one or the other anymore. There is a middle ground called Incremental Static Regeneration, or I-S-R.
Corn
I have heard you mention that before. Is that what bridges the gap between the static world and the dynamic world?
Herman
It is. Think of it like a hybrid engine. With basic static generation, you build everything upfront. With I-S-R, you can tell the system to only build the most important pages upfront—maybe the homepage and the last twenty episodes—and then build the others on demand as people visit them. But here is the trick: once a page is built on demand, the system saves that version and serves it to the next person as a static file. It is "stale-while-revalidate" logic.
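The "stale-while-revalidate" behavior Herman describes can be sketched as a small cache. The function names here are illustrative, not a real framework API; frameworks implement this for you, but the core logic looks roughly like this.

```typescript
// Minimal sketch of "stale-while-revalidate" page caching, the idea
// behind Incremental Static Regeneration.

type CacheEntry = { html: string; builtAt: number };

const cache = new Map<string, CacheEntry>();

// Stand-in for rendering a page from database content.
function renderPage(path: string): string {
  return `<html><body>Rendered ${path}</body></html>`;
}

function getPage(path: string, maxAgeMs: number, now: number): string {
  const hit = cache.get(path);
  if (hit && now - hit.builtAt < maxAgeMs) {
    return hit.html; // fresh: pure static-file speed
  }
  if (hit) {
    // Stale: serve the old copy immediately, rebuild in the background.
    queueMicrotask(() => cache.set(path, { html: renderPage(path), builtAt: now }));
    return hit.html;
  }
  // First visitor pays the render cost; everyone after gets the cached copy.
  const entry = { html: renderPage(path), builtAt: now };
  cache.set(path, entry);
  return entry.html;
}
```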
Corn
So it is like the best of both worlds. You get the speed of static files, but you don't have to build everything at once. The site grows organically based on what people are actually looking at.
Herman
Exactly. And for our podcast, as we scale to thousands of episodes, we could move toward that. We could pre-render the latest ten episodes so they are lightning fast, and let the older ones in the archive be generated the first time someone goes back to listen to them. It keeps our build times under five minutes regardless of how many thousands of episodes we have in the database.
Corn
That makes a lot of sense. But Daniel specifically mentioned the J-S-O-N file. Right now, our build process fetches the entire list of episodes from Neon and stores it in a single J-S-O-N file that the site uses for things like the search bar and the episode list. That feels like a specific kind of scaling problem. If that file is ten megabytes, every time someone loads the search page, they are downloading ten megabytes of text just to find one episode.
Herman
You are spot on. That is a classic mistake in early static site architecture. You think, oh, I will just put it all in one file for convenience. It is easy to code. But as you grow, that file becomes a massive weight. It slows down the user's browser and it slows down the build. A large news site would never do that. Instead, they would use a real search index, something like Algolia or Elasticsearch, or even a specialized vector database for A-I search.
Corn
Or even just a proper A-P-I call to the database.
Herman
Right. This is where the decoupling really shines. You can have a static front end for the article pages, but the search bar can be a dynamic component that talks directly to a search engine or the database. You don't have to be one hundred percent static. You can be ninety-five percent static and five percent dynamic where it matters. In twenty-twenty-six, we call this "partial hydration" or "islands architecture."
Corn
It is like that concept of islands of interactivity that Astro promotes. The page itself is a static rock, but you can drop a little island of dynamic code into it that handles things like search or user comments or a real-time poll.
Herman
Precisely. Astro's islands architecture is actually a perfect response to Daniel's concern. It allows you to keep the bulk of the site static while using small, focused bits of server-side logic where a static approach fails. You can even use "server islands" now, where the static parts of the page are served instantly, and the dynamic parts are streamed in a few milliseconds later.
Corn
So, let's talk about the absolute limit. If we imagine a site with a million pages. Is there a point where even the hybrid approach breaks down? Do we ever find ourselves saying, okay, this headless thing was a fun experiment, but we need to go back to a traditional S-Q-L driven monolithic design because the complexity of the decoupled setup is too much?
Herman
I think the answer is actually no, we don't go back to the old monoliths. But we do change how we use the headless tools. If you look at the absolute largest sites in the world—sites like Netflix or Amazon or the big news outlets—they are almost all decoupled. They have a massive content engine in the back end and a highly optimized delivery layer in the front end. The difference is that at that scale, they aren't using simple static files anymore. They are using what we call edge rendering.
Corn
Edge rendering. That sounds like one of those buzzwords that gets thrown around at tech conferences to make things sound more expensive. What does it actually mean in practice for a developer?
Herman
It means the code that builds the page is running on servers all over the world, very close to the user. When you visit the site from Jerusalem, a server in a data center right here in the city takes the request, grabs the data from a nearby cache, and builds the H-T-M-L in a few milliseconds. It is not a static file waiting on a disk, but it is not a heavy server in Virginia doing all the work either. It is a tiny, fast function running at the "edge" of the network.
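An edge function of the kind Herman describes is, at its core, a small pure function: request in, HTML out, with the data already sitting in a nearby cache. This is a simplified sketch of that shape, not any particular edge runtime's API.

```typescript
// Sketch of an edge-style handler: build HTML from data already
// cached close to the user, in a few milliseconds.

// Stand-in for a regional cache populated near this edge location.
const nearbyCache = new Map<string, { title: string }>([
  ["/episodes/772", { title: "Beyond the Build" }],
]);

function handle(pathname: string): { status: number; body: string } {
  const data = nearbyCache.get(pathname);
  if (!data) return { status: 404, body: "Not found" };
  // Rendering happens here, at the edge, per request.
  return { status: 200, body: `<h1>${data.title}</h1>` };
}
```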
Corn
So it is dynamic, but it is so fast and so close to the user that it feels static. It is like having a chef in every neighborhood instead of one giant central kitchen that has to ship food across the country.
Herman
Exactly. It scales horizontally. If a million people visit at once, they are being handled by thousands of tiny servers at the edge, rather than one big database server in the middle. So the return to traditional designs isn't really a return to the old way, it is an evolution into a more distributed way. We are using S-Q-L databases like Neon, but we are using them in a way that supports this distributed model.
Corn
That is a relief, honestly. I don't think I want to go back to managing a giant WordPress installation with fifty plugins and a database that chokes every time we get a mention on social media. I still have nightmares about white screens of death and database connection errors.
Herman
None of us do. But Daniel's point about the J-S-O-N file is a great warning. We should probably look at how we are handling that. If we keep adding episodes for the next ten years, that one file is going to become a problem. We should probably move that to a proper A-P-I or a paginated search system.
Corn
It is funny how these things sneak up on you. You build something that works perfectly for a year, and then suddenly the very thing that made it easy becomes the thing that breaks it. I guess that is just the nature of technical debt. It is not always bad code; sometimes it is just code that wasn't meant to handle success.
Herman
It is. And it is also the nature of growth. We are victims of our own success in that regard. Seven hundred episodes is a lot of data. It is a lot of metadata, a lot of transcripts, a lot of show notes.
Corn
Let's talk about the transcripts for a second. Those are huge. If we were to include the full text of every transcript in our build-time J-S-O-N file, we would have hit the wall months ago. Each transcript is what, five to ten thousand words?
Herman
Oh, absolutely. Multiply that by seven hundred and fifty-eight episodes, and you are looking at several million words. That is several dozen novels worth of text. If you tried to load that into a single JavaScript object during a build, you would see the system just choke. It would be like trying to swallow a whole watermelon.
Corn
So how do we handle that? Because we do want the site to be searchable. We want people to be able to find that one specific thing we said about, I don't know, the history of the toaster back in episode three hundred.
Herman
This is where we use an external search index. We don't pull the transcripts into the site's code at all. Instead, when we publish an episode, we send the transcript to a service like Algolia. They index it and provide a fast A-P-I. When you type "toaster" into the search bar on our site, your browser sends a tiny request to Algolia, they tell you which episodes have that word, and our site just shows the links. The heavy lifting of searching through millions of words never happens on our server or in our build process.
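The split Herman describes, index once at publish time, then answer tiny queries at request time, is what an inverted index gives you. This toy version is not Algolia's API; it just shows the shape of the idea and why the build process never has to touch the transcripts.

```typescript
// Toy inverted index: publish-time indexing, query-time lookup.
// A real service like Algolia adds ranking, typo tolerance, and hosting.

const searchIndex = new Map<string, Set<number>>(); // word -> episode ids

// Done once, when an episode is published.
function indexTranscript(episodeId: number, transcript: string): void {
  for (const word of transcript.toLowerCase().split(/\W+/)) {
    if (!word) continue;
    if (!searchIndex.has(word)) searchIndex.set(word, new Set());
    searchIndex.get(word)!.add(episodeId);
  }
}

// Done at query time: a tiny lookup instead of scanning millions of words.
function search(term: string): number[] {
  return [...(searchIndex.get(term.toLowerCase()) ?? [])];
}
```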
Corn
That seems to be the recurring theme here. Don't try to make one tool do everything. Use the database for storage, use the static generator for the structure, and use specialized services for the heavy lifting like search or image processing.
Herman
Exactly. Decoupling isn't just about the back end and the front end. It is about decoupling the different types of data and how they are used. Static site generation is great for things that don't change often, like the text of a blog post. But it is terrible for things that are huge and need to be searched through in real time.
Corn
So, if Daniel were building a large news site today, in early twenty-twenty-six, what would his stack look like? Would he still use Astro and Vercel?
Herman
He could, absolutely. But his strategy would be different. He would use Astro's server-side rendering mode, or S-S-R, for the article pages, combined with a very aggressive caching layer. So, the first time someone reads an article, the server builds it. The next ten million people get the cached version from the edge. That gives you the scale of a news site with the speed of a static site. He would also use a database like Neon because it allows for branching.
Corn
Branching? Like in Git?
Herman
Exactly. Neon lets you create a copy of your database in seconds so you can test new features or content structures without touching the production data. For a large news site with dozens of developers, that is a game changer. It means you can't accidentally break the live site while you are trying to fix a bug in the archive.
Corn
And he would probably use a more robust C-M-S than just a few local files. He would use something like Contentful or Sanity or maybe a self-hosted Payload C-M-S, which are designed to handle thousands of users and complex workflows.
Herman
Right. Those are headless content management systems. They are built specifically to be the source of truth that feeds into these decoupled front ends. They have their own internal scaling logic, so they don't care if you have ten articles or ten million. They just provide the A-P-I.
Corn
I think one of the things that worries people about this approach is the complexity. If you have a database, a C-M-S, a search index, a build server, and a content delivery network, that is a lot of moving parts. If any one of them breaks, the whole site might go down, or at least become outdated. It feels like there are more points of failure.
Herman
It is a valid concern. The old monolith was simpler to reason about. If the server was on, the site was up. If the server was off, it was down. Now, you can have a situation where the site is up, but the search is broken, or the content is three days old because the build failed. It is a more fragmented kind of failure.
Corn
I have definitely seen that happen. We have had builds fail because of a tiny syntax error in a new episode's metadata—maybe a missing quote mark—and we didn't notice for a few hours. The site looked fine, but the new episode just wasn't there. It was a silent failure.
Herman
That is the trade-off. You are trading architectural simplicity for performance and scalability. For a small personal blog, the headless approach might actually be overkill. WordPress is still great for a lot of people. But for anything that needs to scale, or anything where performance is a top priority, the complexity is worth it. You just have to build in better monitoring.
Corn
It is like moving from a single-engine plane to a commercial jet. There are a lot more things that can go wrong in a jet, but you can carry hundreds of people across the ocean. You can't do that in a Cessna. You need the complexity to get the capability.
Herman
That is a perfect analogy. And just like a jet, we have systems to monitor all those parts. We have automated tests that run before a build, we have alerts that tell us if a build fails, and we have fallbacks in case a service goes down. If our search service goes down, the search bar might disappear, but the rest of the site stays up. That is "graceful degradation."
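The graceful degradation Herman mentions is usually just a deliberate fallback path around each external service. A sketch, with `querySearchService` standing in for a real search API call (here it simply simulates an outage):

```typescript
// Sketch of graceful degradation: if the search service fails, return
// an empty result set so the page renders without the search feature,
// instead of the whole site going down with it.

// Stand-in for a real search API call; here it always simulates an outage.
async function querySearchService(term: string): Promise<string[]> {
  throw new Error("search service is down");
}

async function searchWithFallback(
  term: string
): Promise<{ results: string[]; degraded: boolean }> {
  try {
    return { results: await querySearchService(term), degraded: false };
  } catch {
    // Fail soft: hide the search feature, keep the site up.
    return { results: [], degraded: true };
  }
}
```

In the UI, the `degraded` flag would hide the search bar rather than surface an error page.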
Corn
So, to Daniel's question about the inherent choke point. Is there a physical limit to how many files a file system can handle? If we had a billion episodes, would Vercel's servers just run out of disk space?
Herman
Technically, yes, every file system has a limit. But practically, for the web, we would hit other limits long before that. We would hit the limit of how long it takes to upload those files or how long it takes to index them. But again, that is only if you are trying to be purely static. Once you move to that hybrid edge rendering model, those limits basically disappear. The internet itself is just a massive collection of distributed files and services. If you build your site to work like the internet works, you can scale almost indefinitely.
Corn
That is a very profound way to look at it. Build your site to mirror the architecture of the network it lives on. Don't fight the medium.
Herman
Precisely. The web was always meant to be decentralized. The move toward monolithic servers in the two-thousands was actually a bit of a historical detour because it was easier to build in the early days. Now we are finally getting back to the original vision of a truly distributed system where content is everywhere and nowhere at the same time.
Corn
I want to circle back to the S-Q-L database part of Daniel's prompt. He mentioned Neon and the idea of returning to traditional designs. Do you think we are seeing a bit of a resurgence in S-Q-L because of how well it handles these decoupled setups?
Herman
Oh, absolutely. For a few years, everyone was obsessed with No-S-Q-L databases like MongoDB because they were easy to scale for certain types of data. But S-Q-L, and specifically Postgres, which is what Neon uses, has made a huge comeback. It turns out that having a rigid structure for your data is actually really helpful when you are trying to pipe it into a dozen different front ends. You need that consistency.
Corn
It gives you a single, reliable source of truth.
Herman
Exactly. And with things like Neon, we get the best of both worlds. We get the reliability and structure of Postgres, but with the serverless scaling that we used to only get from No-S-Q-L. It can scale to zero when no one is using it, and scale up instantly when we are running a massive build. It is a very cool time to be building this stuff. The tools are finally catching up to our ambitions.
Corn
It feels like we are in this golden age of web infrastructure. The tools are so much more powerful than they were even five years ago. I remember trying to set up a static site generator back in twenty-fifteen, and it was a nightmare. You had to manage your own Ruby environment, your own build scripts, and then manually upload files via F-T-P. If one thing broke, you spent the whole day in the terminal.
Herman
Ugh, don't remind me. It was miserable. I spent more time fixing my build environment than I did writing content. Now, we just push our code to GitHub and the whole machine takes care of the rest. It is almost too easy. It makes you forget how much complexity is hidden under the hood.
Corn
Which I think is why people like Daniel are asking these questions. When things work too well, you start looking for the catch. You start wondering where the edge of the cliff is. You think, "This can't be this easy forever."
Herman
And it is good to know where the cliff is. For us, the cliff is that build-time J-S-O-N file. We are probably fine for another few hundred episodes, but eventually, we will need to change how we handle that. But the beautiful thing about our setup is that we can change that one piece without rebuilding the whole site. We can just swap out the search component and leave everything else exactly as it is. That is the beauty of modularity.
Corn
That is the real power of decoupling. It is not just about scale, it is about flexibility. It is about being able to evolve the site piece by piece as your needs change. You aren't locked into a single vendor's way of doing things.
Herman
Exactly. It is like a Lego set. If you don't like the arm on your robot, you don't have to throw the whole robot away. You just pull the arm off and build a better one. In the old monolith world, the arm was welded to the body.
Corn
I think that is a great place to wrap up the technical side of this. But I want to talk about the practical takeaways for someone who might be starting a project today. If you are a developer or a content creator and you are looking at these options, how do you decide which path to take?
Herman
My advice is always to start as simple as possible but with an eye toward the future. If you are just starting a blog, use something like Astro with a simple markdown setup. It is fast, it is free to host on Vercel or Netlify, and it gets you ninety percent of the way there without any database at all.
Corn
And then when you hit a hundred posts? Or when you want to start adding more complex metadata?
Herman
Then you might want to move those markdown files into a headless C-M-S like Contentful or a database like Neon. That frees you from the limitations of the file system and gives you more powerful ways to manage your content. You can start using A-P-I calls instead of local file reads.
Corn
And if you become the next big news site? If you are getting millions of hits a day?
Herman
Then you start looking at Incremental Static Regeneration and edge rendering. You move your search to a dedicated service. You optimize your images with a service like Cloudinary. You keep the core of your site decoupled, but you use more specialized tools for each job. You move from the Cessna to the jet.
Corn
It is a gradual progression. You don't have to build the commercial jet on day one. You can start with the Cessna and add better engines as you go. The important thing is to use a framework that allows for that growth.
Herman
Precisely. Just don't wait until the wings are falling off to start thinking about the upgrade. Keep an eye on your build times. Look at your bundle sizes. If that J-S-O-N file starts taking more than a second to load on a mobile phone, it is time to rethink your strategy.
Corn
I think we should take a look at our own build times this afternoon, Herman. Just to be safe. I don't want to be the one whose wings fall off mid-flight.
Herman
Way ahead of you, Corn. I have a dashboard for that. We are currently averaging about three minutes and twelve seconds for a full rebuild. We have plenty of headroom left. We are nowhere near the forty-five minute limit.
Corn
Three minutes? That is not bad at all. I thought it was longer because it feels like forever when I am waiting to see a typo fix go live.
Herman
Astro is incredibly efficient. It only processes the things that have actually changed, and it is built on top of some very fast underlying tools like Vite and Esbuild. We are in good shape for a while. We could probably double our content and only add thirty seconds to the build.
Corn
That is good to hear. I feel a lot better about our seven hundred and fifty-eight episodes now. We could probably get to seven thousand before we really have to worry about the "collapse" Daniel mentioned.
Herman
Well, let's not get ahead of ourselves. Seven thousand episodes would be... what, another twenty years of podcasting? I might be retired by then, Corn.
Corn
Something like that. I think we can manage it. We will just be the two oldest guys in the metaverse talking about ancient history from twenty-twenty-four.
Herman
I hope so. I am enjoying the ride. And I love that our listeners are thinking about this stuff too. It is one thing to just listen to a show, it is another to think about the digital architecture that brings it to your ears. It shows a level of curiosity that I really appreciate.
Corn
It really is. And it is a testament to the community we have built here. People aren't just here for the weird prompts, they are here for the deep dives into how the world works, whether that is the history of a kitchen appliance or the mechanics of a headless C-M-S. It is all part of the same curiosity.
Herman
It is all connected, isn't it? The way we organize our information is just as important as the information itself. The medium is the message, as they say.
Corn
Absolutely. Well, I think we have covered a lot of ground today. From the basics of decoupling to the cutting edge of edge rendering. Daniel, I hope that answers your question. Static sites are incredibly powerful, and while they do have limits, the modern web has found some very clever ways to move those limits further and further out. You are not going to hit a wall anytime soon if you use the right tools.
Herman
Yeah, don't fear the static. Just be ready to evolve when the time comes. The web is flexible, and your architecture should be too.
Corn
Before we go, I want to say a huge thank you to everyone who has been following along with our journey. We have been doing this for a long time now, and your support is what keeps us going. If you are enjoying the show, we would really appreciate it if you could leave us a review on your podcast app or on Spotify. It genuinely helps other people find us and keeps the show growing.
Herman
It really does. Every rating and review makes a difference in the algorithms, and we love reading what you have to say. It is the best way to give us feedback.
Corn
You can find all our past episodes, including those seven hundred odd ones we mentioned, at our website. We have an R-S-S feed there for subscribers and a contact form if you want to get in touch with your own weird prompts. You can also reach us directly at our show email address.
Herman
We are also available on Apple Podcasts, Spotify, and basically anywhere you get your podcasts. We are everywhere.
Corn
Thanks again for joining us. This has been My Weird Prompts.
Herman
Until next time, I am Herman Poppleberry. Keep asking the weird questions.
Corn
And I am Corn. We will see you in the next episode. Goodbye!

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.