#2503: Inside an API Request: DNS to Response

What really happens when you press Enter on a URL? From DNS to TLS to headers, we break down the full lifecycle.

Episode Details
Episode ID: MWP-2661
Published:
Duration: 27:35
Audio: Direct link
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

Inside an API Request: What Actually Happens When You Press Enter

Every time you type a URL and press Enter, or fire off an API call from Postman, a surprisingly complex cascade of events unfolds before a single byte of data comes back. Most developers operate on a fuzzy mental model of this process — knowing requests happen, but hand-waving the details. Let's get concrete.

The DNS Prelude

The browser needs an IP address before it can talk to any server. It doesn't just ask once — it checks its own cache first, then the operating system's cache. If both miss, it queries a DNS resolver over UDP on port 53. That resolver performs a recursive lookup through root servers, top-level domain servers, and the authoritative name server for that domain. Each step narrows the scope until an IP address is returned, which gets cached according to its TTL (time to live).
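The OS-level half of this lookup is easy to observe from Python's standard library: `socket.getaddrinfo` asks the system resolver, which consults its caches before issuing any DNS query. A minimal sketch:

```python
import socket

def resolve_ipv4(host: str) -> list:
    # getaddrinfo consults the OS resolver, which checks its own cache
    # before sending a UDP query to the configured DNS server.
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr is an (ip, port) tuple.
    return sorted({info[4][0] for info in infos})

print(resolve_ipv4("localhost"))  # → ['127.0.0.1']
```

Note that the TTL is invisible at this layer — the OS handles caching behind the call. Tools like `dig` show the TTL on each record directly.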

The TTL value is quietly important. A short TTL means DNS changes propagate quickly — useful for failover scenarios — but increases lookup frequency. A long TTL means faster subsequent loads but slower recovery if something changes. There's an entire operational philosophy baked into that single number.

TCP, TLS, and the Handshake Gauntlet

With an IP address in hand, the browser initiates a TCP three-way handshake: SYN, SYN-ACK, ACK. Three packets establish a reliable, full-duplex channel. This adds one round trip of latency before any actual data moves.

Then comes TLS. The client sends a ClientHello with supported versions and cipher suites. The server responds with its chosen parameters and sends its certificate — signed by a certificate authority — containing its public key. The client verifies the certificate chain, then generates a pre-master secret, encrypts it with the server's public key, and sends it. Both sides derive symmetric session keys from that secret and the exchanged random numbers. Everything from that point forward is encrypted.

TLS 1.3 streamlined this significantly with fewer round trips and Zero-RTT resumption — but that speed comes with replay attack risks, which is why it should only be used for idempotent requests like GETs, not POSTs that charge credit cards.

The HTTP Request: Headers, Bodies, and Versions

The actual HTTP request structure is deceptively simple: a request line (method, path, version), key-value headers, an empty line, and an optional body. But headers are doing far more work than most people realize.
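That structure is simple enough to assemble by hand. A sketch (host and path are placeholders):

```python
def build_request(method, path, host, headers=None, body=b""):
    # Request line, then headers, then a blank line, then the optional body.
    hdrs = {"Host": host, **(headers or {})}
    if body:
        hdrs.setdefault("Content-Length", str(len(body)))
    lines = ["{} {} HTTP/1.1".format(method, path)]
    lines += ["{}: {}".format(name, value) for name, value in hdrs.items()]
    return "\r\n".join(lines).encode("ascii") + b"\r\n\r\n" + body

print(build_request("GET", "/api/users", "example.com").decode())
```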

The Host header tells the server which domain you want — critical when one server hosts multiple domains. Content-Type tells the server how to interpret the body (JSON, form-encoded, multipart). Authorization carries credentials. And newer headers like Fetch Metadata (Sec-Fetch-Site, Sec-Fetch-Mode) let servers reject suspicious requests before they hit application logic. Client Hints provide device information in a more privacy-conscious way than the old User-Agent string.

The body varies by use case. JSON is the modern default — human-readable and universally supported. Form data still matters for web forms, and multipart form data handles file uploads cleanly.

Authentication and the Parameter Taxonomy

Parameters can go in four places: query parameters (filtering, search, pagination), path parameters (resource identification), header parameters (metadata, auth), and body parameters (the actual payload). Choosing wrong creates confusion and security issues — authentication tokens in query parameters, for example, end up in log files and browser history.
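The four locations map directly onto how a request is assembled. A sketch using only the standard library (the endpoint is hypothetical):

```python
from urllib.parse import quote, urlencode

user_id = 123                                   # path parameter: identifies the resource
query = urlencode({"page": 2, "per_page": 50})  # query parameters: filtering/pagination
url = "https://api.example.com/users/{}/orders?{}".format(quote(str(user_id)), query)

headers = {"Authorization": "Bearer <token>"}   # header parameter: request-wide metadata
body = {"note": "expedite"}                     # body parameter: the actual payload

print(url)  # → https://api.example.com/users/123/orders?page=2&per_page=50
```

Putting the token in `headers` rather than the query string is exactly what keeps it out of logs and browser history.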

Authentication methods range from simple API keys sent in headers to more complex token-based systems. The key insight: the browser doesn't care about your organizational chart. If your frontend can call an endpoint, anyone who inspects the network tab can find it and try to call it themselves.

The Translation Layer

One crucial point often missed: when you write code or use Postman, you're thinking in HTTP/1.1 semantics — methods, paths, headers as text. But the actual bytes on the wire might be HTTP/2 binary frames or QUIC streams over UDP. Browser DevTools shows the semantic representation, not the raw wire format. The abstraction is so clean that most developers never need to think about the wire — until something breaks, and understanding what's actually happening underneath becomes essential.


Transcript

Corn
Daniel sent us this one — and he's basically asking us to pull out a microscope and look at what's actually happening when you press Enter on a URL or fire off an API call. He says whenever he opens Postman and stares at custom headers, form data, authentication tokens, the whole thing kind of washes over him. So he wants us to walk through the anatomy of a modern API request — the cascade from DNS to response, what those headers actually mean, how authentication works, and why any of this matters when we're talking about the magic that happens between your browser and a server. There's a lot to unpack here.
Herman
This is genuinely one of those topics where most people — even developers — operate on a kind of fuzzy mental model. They know requests happen, they know headers exist, but the actual structure and lifecycle is just hand-waved away. It's worth getting concrete.
Corn
Before we dive in though — quick note. Today's episode script is being generated by DeepSeek V four Pro. So if anything lands particularly well, credit where it's due.
Herman
All right, let's start with the cascade. When Daniel says "press Enter on a URL," what actually kicks off? Because there's a whole symphony before a single byte of HTML or JSON ever comes back.
Corn
And I think most people imagine it's just "browser asks server, server responds." But the prelude is where a lot of the interesting decisions get made.
Herman
Step one is DNS. The browser needs an IP address, and it doesn't just ask once. It checks its own cache first — has it resolved this domain recently? Then the operating system cache. If both miss, it queries a DNS resolver, typically your ISP's, over UDP on port fifty-three. That resolver does a recursive lookup through root servers, then top-level domain servers, then the authoritative name server for that domain. Each step narrows the scope until you get an IP address, which then gets cached according to its TTL — time to live. If the whole chain fails, you get a resolution error and the browser never even attempts a connection.
Corn
The TTL thing is quietly important. If a site is doing a DNS failover — maybe they're switching between servers for maintenance — a short TTL means the new IP propagates quickly. A long TTL means faster subsequent loads but slower recovery if something changes. There's a whole operational philosophy baked into that number.
Herman
And DNS is one of those protocols that feels ancient but is absolutely foundational. It runs over UDP by default — no handshake, no guaranteed delivery — which is why it's fast but also why DNS spoofing and cache poisoning have been persistent attack vectors. There's DNS over HTTPS now, which encrypts the query, but that's a whole separate conversation.
Corn
Now we have an IP address. What's next?
Herman
TCP three-way handshake. The browser sends a SYN packet to port four forty-three — that's the HTTPS port — the server responds with SYN-ACK, and the browser sends ACK back. Three packets, and now you have a reliable, full-duplex channel. The sequence numbers in those packets are what let TCP reassemble things in order and detect packet loss. It's elegant, but it also adds latency — one round trip before any actual data moves.
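The handshake itself is hidden inside `connect()`. A loopback sketch — a local listener stands in for the remote server, so this runs without a network:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
host, port = srv.getsockname()

# create_connection() blocks until the SYN / SYN-ACK / ACK exchange
# completes; only then is the full-duplex channel usable.
client = socket.create_connection((host, port))
conn, _ = srv.accept()
connected = client.getpeername() == (host, port)
print("handshake done:", connected)

for s in (client, conn, srv):
    s.close()
```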
Corn
This is where the difference between HTTP versions starts to matter, right? Because the handshake overhead gets compounded if you're opening and closing connections constantly.
Herman
In the old HTTP one point zero days, every request was a new TCP connection — handshake, request, response, tear it down. HTTP one point one introduced persistent connections with keep-alive, so you could reuse the same connection for multiple requests. But you still had head-of-line blocking — one slow request held up everything behind it. HTTP two solved that with multiplexing over a single TCP connection, using binary framing instead of the old text-based request line. And HTTP three goes even further — it ditches TCP entirely and runs over QUIC, which is built on UDP. That eliminates TCP-level head-of-line blocking and reduces connection establishment latency significantly.
Corn
Here's the thing that I think trips people up. When you're writing code or looking at Postman, you're thinking in HTTP one point one semantics — methods, paths, headers as text. But the actual bytes on the wire might be HTTP two binary frames or QUIC streams. The browser's DevTools shows you the semantic representation, not the raw wire format. There's a translation layer.
Herman
That's such a good point. The abstraction is so clean that most developers never need to think about the wire format. But when you're debugging something weird — latency spikes, connection resets, strange ordering — understanding what's actually happening underneath becomes essential. The tools lie to you in a helpful way.
Corn
All right, so DNS resolved, TCP connected. Now we hit TLS.
Herman
The TLS handshake is where things get cryptographically interesting. The client sends a ClientHello — here's what TLS versions I support, here are my cipher suites, here's a random number. The server responds with ServerHello — I choose this version, this cipher suite — and then sends its certificate. That certificate is signed by a certificate authority, and it contains the server's public key. The client verifies the certificate chain — is this CA trusted, is the certificate expired, does the domain match? If all checks pass, the client generates a pre-master secret, encrypts it with the server's public key, and sends it. Both sides then derive symmetric session keys from that pre-master secret and the random numbers they exchanged. From that point on, everything is encrypted with those session keys.
Corn
TLS one point three streamlined this significantly. Fewer round trips, and it cut out older, weaker cipher suites entirely. Zero-RTT resumption means if you've connected to a server before, you can send encrypted data in the very first packet. It's faster, but it does introduce some replay attack risks that the spec has to account for.
Herman
And there's a real tension here between speed and security. Zero-RTT data can be replayed by an attacker, so it should only be used for idempotent requests — requests that produce the same result no matter how many times they're sent. GET requests are fine. POST requests that charge a credit card are not. The server can enforce this, but it's one of those subtle things that application developers rarely think about.
Corn
Now we've got an encrypted tunnel. DNS done, TCP done, TLS done. Now the actual HTTP request goes through. Let's talk about what that looks like.
Herman
In HTTP one point one, the request starts with a request line — method, path, protocol version. So something like GET slash api slash users HTTP slash one point one. Then headers, which are key-value pairs separated by colons. Then an empty line, then an optional body. That's the structure. It's almost absurdly simple when you look at it.
Corn
The headers are where all the metadata lives, and that's where things get rich. What are the ones that actually matter?
Herman
The Host header is mandatory in HTTP one point one — it tells the server which domain you're trying to reach, which matters because a single server might host multiple domains. Content-Type tells the server how to interpret the body — is it JSON, form-encoded data, multipart form data with file uploads? Accept tells the server what format the client wants back. Authorization carries credentials — Bearer tokens, API keys, Basic auth credentials. User-Agent identifies the client. Content-Length tells the server how many bytes to expect in the body.
Corn
Getting Content-Type wrong is probably the single most common API debugging headache. You send JSON but declare it as form data, the server parses it wrong, and you get a cryptic four hundred Bad Request. Postman tries to help by auto-detecting, but it's not magic.
Herman
There's also a set of newer headers that are worth knowing about. The Fetch Metadata headers — Sec-Fetch-Site, Sec-Fetch-Mode, Sec-Fetch-Dest — these tell the server about the request context. Is this a same-origin request, a cross-site request, a navigation, a resource load? Servers can use these to reject suspicious requests before they even hit application logic. It's a defense-in-depth mechanism that's been rolling out across browsers over the past few years.
Corn
I hadn't dug into those. So the browser is essentially annotating every request with enough context for the server to say "this fetch from a cross-site context looks sketchy, I'm rejecting it."
Herman
And then there are Client Hints — Sec-CH-UA, Sec-CH-UA-Platform — which provide device information in a more privacy-conscious way than the old User-Agent string. The server requests what it needs, the browser sends only that. And the Priority header, RFC nine two one eight, lets the client signal how urgent a resource is. A render-blocking CSS file gets high priority, a below-the-fold image gets low priority. This helps the server and any intermediate proxies allocate bandwidth intelligently.
Corn
Headers are doing a lot more work than people realize. They're not just metadata — they're a negotiation layer. The client declares capabilities and preferences, the server decides how to respond.
Herman
The body is where things get even more varied. JSON is the default now — Content-Type application slash JSON — and it's human-readable, easy to parse, universally supported. But form data still matters. Application slash x-www-form-urlencoded is the classic web form format — key equals value pairs separated by ampersands, with special characters percent-encoded. It's simple but limited. Multipart form data is for file uploads — each part has its own headers and content, separated by a boundary string. It's more complex but handles binary data cleanly.
Corn
Then there's the parameter taxonomy. Query parameters, path parameters, header parameters, body parameters. Four different places to put data, and choosing wrong creates all sorts of problems.
Herman
Query parameters sit in the URL after the question mark — great for filtering, searching, pagination. They're visible, bookmarkable, cacheable. Path parameters are part of the URL structure — slash users slash one two three identifies a specific resource. They're for routing, not optional filters. Header parameters carry metadata and auth — things that apply to the whole request. Body parameters are for the actual payload — creating or updating a resource. Mixing these up leads to APIs that are confusing to use and hard to maintain.
Corn
I've seen APIs that put authentication tokens in query parameters, and every time I see that I wince. URLs get logged, cached, shared. That token is now in a dozen log files and your browser history.
Herman
That's a perfect segue into authentication. Let's talk about how API calls prove who they are.
Corn
Daniel specifically mentioned authenticated versus unauthenticated requests, and I think that distinction is more interesting than people realize. An unauthenticated request isn't necessarily public — it might be an internal API that assumes it's only called from the same origin, or it might rely on a cookie that the browser sends automatically.
Herman
This connects back to something we talked about before — the collapse of the distinction between internal and external APIs. The browser doesn't care about your organizational chart. If your frontend can call an endpoint, anyone who inspects the network tab can see that endpoint and try to call it themselves.
Corn
Which is why authentication isn't just about "is this user logged in" — it's about "do I trust this request at all." Let's walk through the common methods.
Herman
API keys are the simplest. It's a static string — basically a long password — that gets sent in a header, usually Authorization colon Bearer some-key. The server checks it against a database or validates its format. Simple to implement, simple to use. But it has no concept of user identity — the key represents an application or a project, not a person. And if the key leaks, anyone can use it until it's revoked. There's no expiration built in.
Corn
They leak constantly. People commit them to GitHub, they show up in client-side JavaScript, they get logged accidentally. There are automated scanners that trawl public repos for API keys twenty-four seven.
Herman
Bearer tokens, especially JWT — JSON Web Tokens — are the next step up. After a user logs in, the server issues a signed token that contains claims — user ID, expiration time, permissions. The client sends it in the Authorization header. The server validates the signature and checks the expiration without needing to hit a database. It's stateless, which makes it great for microservices and distributed systems. But the trade-off is that anyone with the token can use it until it expires. There's no built-in revocation mechanism — if a token gets stolen, you either wait for it to expire or you maintain a blacklist, which defeats the statelessness.
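For the HS256 case Herman describes, the signature check is just an HMAC over the first two segments. A self-contained sketch — production code should use a vetted library such as PyJWT, and must also check the `exp` claim:

```python
import base64, hashlib, hmac, json

def _b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s):
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(claims, secret):
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, "{}.{}".format(header, payload).encode(),
                   hashlib.sha256).digest()
    return "{}.{}.{}".format(header, payload, _b64url(sig))

def verify_hs256(token, secret):
    header, payload, sig = token.split(".")
    expected = hmac.new(secret, "{}.{}".format(header, payload).encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        return None                             # tampered, or wrong key
    return json.loads(_b64url_decode(payload))  # caller must still check "exp"

token = sign_hs256({"sub": "user-123"}, b"shared-secret")
print(verify_hs256(token, b"shared-secret"))  # → {'sub': 'user-123'}
print(verify_hs256(token, b"wrong-secret"))   # → None
```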
Corn
JWT size is also a real issue. If you stuff too many claims in there, the token gets large, and it's sent with every request. On a mobile connection, that adds up. I've seen tokens that are several kilobytes — that's more overhead than the actual API response in some cases.
Herman
OAuth two point zero is the industry standard for delegated access. The idea is that a user can grant a third-party application access to their data without sharing their password. The flow involves redirects, authorization codes, and token exchanges. The Authorization Code flow with PKCE — proof key for code exchange — is the current best practice for mobile and single-page apps. The older Implicit flow is deprecated because it exposed tokens in the URL fragment. Client Credentials is for machine-to-machine communication — no user involved.
Corn
OAuth is powerful but it's also notoriously complex to implement correctly. The spec is long, the terminology is dense, and getting the redirect URIs right is a constant source of security vulnerabilities. I've seen production systems that accepted any redirect URI because the developer couldn't figure out how to validate it properly.
Herman
That's where the industry is converging on a pragmatic middle ground. Use API keys for internal services and quick prototypes where the blast radius is small. Use Bearer tokens, often JWT, for user-facing production APIs. Use OAuth when you need third-party access or complex delegation. And never, ever use Basic Auth in production — it sends base-sixty-four encoded credentials, which is essentially plaintext over HTTPS, and offers no expiration or scope limitation.
Corn
The "over HTTPS" part is doing a lot of work there. All of these methods assume the transport is encrypted. Send a Bearer token over plain HTTP and it's visible to every router between you and the server. TLS isn't optional anymore.
Herman
There's also mTLS — mutual TLS — where both client and server present certificates. It's used in zero-trust architectures and service mesh setups. The client certificate proves identity at the transport layer before any HTTP happens. It's extremely secure but operationally complex — certificate management at scale is non-trivial.
Corn
Let's talk about the response side. The request goes out, what comes back?
Herman
Status line first — HTTP version, three-digit status code, status message. Two hundred OK means success. Two zero one Created means a resource was created. Four hundred Bad Request means the server couldn't parse what you sent. Four zero one Unauthorized means you need to authenticate. Four zero three Forbidden means you're authenticated but not allowed. Four zero four Not Found. Five hundred Internal Server Error means something broke on the server side. These codes are part of the HTTP spec, and using them correctly makes APIs predictable.
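The registry of codes and their canonical phrases ships with Python's standard library, which makes the mapping easy to check:

```python
from http import HTTPStatus

# Each code pairs a numeric value with its spec-defined reason phrase.
for code in (200, 201, 400, 401, 403, 404, 500):
    status = HTTPStatus(code)
    print(status.value, status.phrase)
```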
Corn
People get four zero one and four zero three confused constantly. Four zero one is "I don't know who you are." Four zero three is "I know who you are, and you're not allowed to do this." It's a subtle distinction but it matters for debugging.
Herman
Response headers include Content-Type — is the body JSON, HTML, binary? Content-Length for the body size. Server header identifying the software. Cache-Control directives telling browsers and proxies how long to cache the response. And then the body — JSON for APIs, HTML for web pages, binary blobs for images and files.
Corn
One thing that's changed significantly is how cookies behave. Modern browsers default cookies to SameSite equals Lax — meaning cookies aren't sent on cross-site subrequests unless explicitly configured. If you're building an API that relies on cookies for authentication and you're calling it from a different domain, you need SameSite equals None with the Secure attribute. And there's a newer thing called CHIPS — Cookies Having Independent Partitioned State — that further isolates third-party cookies per top-level site. The cookie model is getting more restrictive, which is good for privacy but means developers need to understand these attributes.
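Those attributes show up in the Set-Cookie header a server emits. A sketch with the standard library (the `samesite` attribute requires Python 3.8+):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "None"  # allow cross-site sends...
cookie["session"]["secure"] = True      # ...but only over HTTPS (required with SameSite=None)
cookie["session"]["httponly"] = True    # invisible to JavaScript

print(cookie["session"].OutputString())
```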
Herman
This is where the gap between what Daniel described — opening Postman, looking at headers, feeling overwhelmed — and what's actually happening gets really interesting. When you load a modern website, your browser fires off dozens of API calls automatically. Analytics, CDN checks, authentication service pings, feature flag evaluations. Most of these are invisible to the user. They use cookies, lightweight tokens, or no authentication at all. The browser is a relentless API client that most people never see.
Corn
Postman is the opposite. You're consciously crafting one request at a time. You choose the method, set the headers, construct the body. It forces you to think about each element. The browser's cascade is automatic and opaque. The microscope Daniel mentioned — that's really what DevTools and Postman are. They make the invisible visible.
Herman
This connects to something Daniel mentioned in his prompt — MCP, the Model Context Protocol. This is Anthropic's open standard for connecting AI models to external tools and data sources. It essentially standardizes how an LLM makes API calls. The model doesn't construct raw HTTP — it uses MCP to describe what it wants to do, and the protocol handles the request formation.
Corn
The question becomes — if AI models are increasingly making API calls through abstraction layers like MCP, does understanding raw HTTP become more or less important for developers?
Herman
I think it becomes more important, but for a different reason. You're not writing the requests by hand anymore, but you absolutely need to debug what the AI is doing. When the model hallucinates an endpoint or constructs a malformed request, you need to look at the actual HTTP to understand what went wrong. The abstraction leaks, and when it leaks, you need to know what's underneath.
Corn
That's where I land too. The skill shifts from authoring to auditing. You're not crafting headers in Postman as often, but you're inspecting AI-generated requests in logs, tracing why a call failed, understanding the auth flow that the model got wrong. The foundational knowledge doesn't go away — it just gets applied differently.
Herman
There's a security angle. If AI models are making API calls on behalf of users, the authentication model gets more complex. You're not just authenticating a user — you're authenticating a user delegating to an AI agent that's acting on their behalf. That's OAuth territory, but with an additional layer of indirection. Who holds the token? What scope does it have? Can the agent request additional permissions? These are questions the industry is just starting to grapple with.
Corn
The other thing I think about is that MCP and similar protocols are essentially standardizing the semantic intent of an API call. The model says "I want to retrieve this resource" or "I want to create this record," and the protocol translates that into the appropriate HTTP request. But that translation layer has to make decisions — which method, which headers, how to handle errors. Understanding those decisions means understanding HTTP.
Herman
And errors are where it gets interesting. HTTP has a rich error signaling system — status codes, response headers, error bodies with machine-readable details. If an AI agent gets a four twenty-nine Too Many Requests, it needs to understand rate limiting and back off. If it gets a four zero one, it needs to re-authenticate. Teaching an AI to handle these gracefully requires that the humans building the system understand HTTP semantics deeply.
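The 429 case usually calls for exponential backoff. A sketch of the retry loop — the `send` callable is a stand-in for whatever HTTP client the agent actually uses:

```python
import time

def request_with_backoff(send, max_retries=3, base_delay=0.1):
    """send() returns (status_code, body). Retry 429s and 5xx with backoff."""
    for attempt in range(max_retries + 1):
        status, body = send()
        retryable = status == 429 or 500 <= status < 600
        if not retryable or attempt == max_retries:
            return status, body
        # A production client would prefer the server's Retry-After header.
        time.sleep(base_delay * (2 ** attempt))

# Simulate a server that rate-limits twice, then succeeds:
responses = iter([(429, ""), (429, ""), (200, "ok")])
print(request_with_backoff(lambda: next(responses)))  # → (200, 'ok')
```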
Corn
Let's bring this back to something concrete. If someone is looking at a request in Postman or their browser's network tab and they want to actually understand what they're seeing, where should they start?
Herman
Start with the request line. What method is being used? GET means read, POST means create, PUT means replace, PATCH means partial update, DELETE means remove. What's the path? Does it have query parameters after a question mark? Those are filters or options. Then look at the headers. Content-Type tells you what format the body is in. Authorization tells you how the request is proving its identity. Accept tells you what the client expects back.
Corn
Then look at the response. What's the status code? Two hundred means it worked. Four hundred means you did something wrong. Five hundred means the server broke. The response headers tell you about caching, content format, and server identity. The response body is the actual data.
Herman
If you're debugging, the single most useful thing is to compare a working request to a broken one. What header is different? What parameter is missing? Is the Content-Type wrong? Is the auth token expired? Most API debugging is just differential diagnosis — find the difference, fix the difference.
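That differential diagnosis can even be scripted. A sketch that diffs the header dicts of a working and a failing request:

```python
def diff_headers(working, broken):
    """Map each differing header name to its (working, broken) value pair."""
    names = set(working) | set(broken)
    return {name: (working.get(name), broken.get(name))
            for name in sorted(names)
            if working.get(name) != broken.get(name)}

good = {"Content-Type": "application/json", "Authorization": "Bearer aaa"}
bad  = {"Content-Type": "text/plain",       "Authorization": "Bearer aaa"}
print(diff_headers(good, bad))  # → {'Content-Type': ('application/json', 'text/plain')}
```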
Corn
The other practical tip — use the browser's network tab to see the cascade. Load a page, open DevTools, go to the Network tab, and just watch. You'll see the initial HTML request, then the CSS and JavaScript files, then the API calls that fetch data. Each one is a full request-response cycle. Seeing the sequence makes the architecture visible.
Herman
Pay attention to the timing breakdown. DevTools shows you DNS lookup time, TCP connection time, TLS handshake time, time to first byte, content download time. If a request is slow, the timing breakdown tells you where the bottleneck is. Is DNS slow? Is the server taking forever to respond? Is the payload too large? The data is there.
Corn
One more thing — the Headers tab in DevTools shows you both request and response headers. But it also shows you "general" headers that apply to the entire transaction. Things like the request URL, the request method, the status code. It's a clean summary that Postman doesn't always give you.

And now: Hilbert's daily fun fact.
Corn
The shortest war in recorded history was the Anglo-Zanzibar War of eighteen ninety-six, which lasted between thirty-eight and forty-five minutes. The Sultan's forces surrendered after a British naval bombardment destroyed their palace and disabled their artillery.
Herman
If we're pulling practical takeaways from all of this, I'd say there are three things listeners can actually do. First, open your browser's network tab and look at the requests your own applications are making. You'll learn more about how your app works in ten minutes of inspection than in hours of reading documentation.
Corn
Second, if you use Postman or a similar tool, take the time to understand each field you're filling in. Don't just copy-paste headers from documentation — understand what Content-Type means, what Authorization is doing, why that particular parameter is in the query string versus the body.
Herman
Third, when something breaks — and it will break — start with the status code. It's not just a number. It's the server's first communication about what went wrong. Four zero one means check your auth. Four one five means check your Content-Type. Four twenty-nine means slow down. The status code is your first clue.
Corn
The bigger picture here is that HTTP is one of those technologies that's simultaneously simple and deep. The basics — method, path, headers, body — you can learn in an afternoon. But the subtleties — caching semantics, content negotiation, authentication flows, protocol version differences — those take years to master. And with AI agents increasingly operating at the HTTP layer, understanding what's actually happening on the wire is going to be a differentiator.
Herman
The abstraction is a gift, but the knowledge of what's underneath is the insurance policy. When the abstraction breaks — and it always breaks eventually — you need to be able to open the hood and diagnose the problem. That's the skill Daniel is reaching for, and it's absolutely the right instinct.
Corn
Thanks to our producer Hilbert Flumingtop for making this show happen. This has been My Weird Prompts. If you want more episodes like this one, head over to myweirdprompts dot com. We'll be back with another one soon.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.