#2496: Are Hidden API Endpoints Leaks or Just Plumbing?

When LLM agents discover unauthenticated JSON endpoints in browser DevTools, is it a security breach or just reading the page?

Episode Details
Episode ID
MWP-2654
Published
Duration
25:26
Audio
Direct link
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

When Your Frontend Tells All: The Truth About Hidden API Endpoints

Every modern single-page app is a conversation between a browser and a server — but that conversation happens in plain view of anyone who opens the Network tab in DevTools. As LLM agents become capable of systematically inspecting these network requests, a question emerges that should give every web developer pause: are those undocumented JSON endpoints leaks, intentional public APIs, or just plumbing?

The Plumbing Problem

Architecturally, these endpoints are the infrastructure that makes single-page apps work. The browser loads a JavaScript bundle, which fires off fetch requests to backend endpoints, receives JSON, and renders it into the DOM. Every API call is visible in the Network tab by design — the frontend needs those endpoints to function. They're not hidden cryptographically; they're simply not advertised.
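That round trip — bundle, fetch, JSON, DOM — can be reproduced by any HTTP client, which is the whole point. A minimal sketch in Python, using a throwaway local server to stand in for a real backend (the `/api/products` endpoint and its payload are invented for illustration):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical backend endpoint of the kind a SPA frontend would call.
class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/products":
            body = json.dumps([{"id": 1, "name": "widget"}]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Anything the frontend's fetch() can do, any other client can do too:
with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/products") as resp:
    data = json.load(resp)

print(data)  # [{'id': 1, 'name': 'widget'}]
server.shutdown()
```

Nothing here requires browser context: the endpoint answers the same JSON to `fetch()`, to `curl`, or to this script.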

This creates a spectrum of risk. On one end, you have genuinely public data that simply lacks a developer portal. The European Air Quality Index API discovered by Rui Barros's agent is a perfect example: the data was always meant to be public, but there were no docs and no API key signup page. An agent simply found the machine-readable version of what was already human-readable.

On the other end, you have endpoints returning data the UI would never show to an unauthenticated user — but the endpoint itself has no auth check. That's not plumbing; that's a security bug. The OWASP API Security Top Ten lists Broken Object Level Authorization as the number one API vulnerability, and Broken Authentication as number two. When an endpoint returns data without checking who's asking, that's a real vulnerability class.
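The missing check is easy to state in code. A sketch of object-level authorization as a pure function — the record store, field names, and owners are all invented for illustration:

```python
# Minimal sketch of object-level authorization — the check whose absence
# OWASP calls Broken Object Level Authorization. Data is invented.
RECORDS = {
    101: {"owner": "alice", "ssn": "xxx-xx-1234"},
    102: {"owner": "bob",   "ssn": "xxx-xx-5678"},
}

def get_record(record_id, requester):
    record = RECORDS.get(record_id)
    if record is None:
        return 404, None
    # The fix: verify the authenticated user owns the object,
    # rather than trusting that nobody will guess the ID.
    if requester is None or record["owner"] != requester:
        return 403, None
    return 200, record

print(get_record(101, "alice"))  # (200, {'owner': 'alice', ...})
print(get_record(101, "bob"))    # (403, None) — wrong user
print(get_record(102, None))     # (403, None) — unauthenticated
```

The vulnerable version is the same function with the middle `if` deleted: it answers anyone who can name an ID.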

The Agent Changes Everything

The skill of manually reconstructing API calls from browser network logs — right-clicking, copying as cURL, pasting into a terminal, tweaking parameters by hand — is becoming obsolete. LLM agents never miss a request. They can paginate through hundreds of network entries, spot patterns a human would gloss over, and generate complete API clients in minutes.

Oz Tamir demonstrated this on a meme template website: Claude identified the API endpoints, mapped categories and JSON payloads, and generated a 300-line Python API client for a completely undocumented, unauthenticated API. That's reverse engineering as a service.
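The shape of such a generated client is roughly this — a sketch only, with hypothetical endpoint paths rather than anything from the actual site:

```python
import json
import urllib.parse
import urllib.request

class MemeApiClient:
    """Sketch of the kind of client an agent might emit after watching a
    site's Network tab. Endpoint paths here are hypothetical."""

    def __init__(self, base_url="https://example.com"):
        self.base_url = base_url.rstrip("/")

    def _url(self, path, **params):
        query = urllib.parse.urlencode(params)
        return f"{self.base_url}{path}" + (f"?{query}" if query else "")

    def _get(self, path, **params):
        # Replays exactly what the frontend's fetch() would send.
        with urllib.request.urlopen(self._url(path, **params)) as resp:
            return json.load(resp)

    def list_categories(self):
        return self._get("/api/categories")

    def search_templates(self, query, page=1):
        return self._get("/api/templates", q=query, page=page)

client = MemeApiClient()
print(client._url("/api/templates", q="cat", page=2))
# https://example.com/api/templates?q=cat&page=2
```

A real generated client adds pagination helpers, error handling, and one method per observed endpoint — hence the 300 lines.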

This changes the threat model from "one person poking around" to "automated, systematic endpoint enumeration." An agent can enumerate every endpoint your frontend touches, document its schema, and generate working client code — all without ever getting bored.

The Public-by-Default Imperative

For years, the implicit assumption in web development was: if it's not documented, it's private. The API docs were the contract; everything else was internal implementation detail. But single-page apps break that assumption completely. The frontend IS the documentation, whether you like it or not. Every endpoint the frontend calls is trivially discoverable.

The only sane posture in 2026 is to treat all frontend-consumed APIs as public by default. If your JavaScript can call it, assume the world can call it. Authenticate every request. Authorize every data access. Rate-limit aggressively. Strip sensitive fields at the server, not in the frontend — because the frontend is not a security boundary.
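Server-side field stripping, the last item on that list, fits in a few lines. A sketch with an invented allow-list and user row:

```python
# Sketch of server-side field filtering: the serializer decides what
# leaves the server, so the frontend never has to "hide" anything.
# Field names and the row below are invented for illustration.
PUBLIC_FIELDS = {"id", "display_name", "avatar_url"}

def serialize_user(user_row, viewer_is_admin=False):
    if viewer_is_admin:
        return dict(user_row)  # admins see everything
    # Everyone else gets only allow-listed fields — sensitive columns
    # like email or moderation flags never hit the wire.
    return {k: v for k, v in user_row.items() if k in PUBLIC_FIELDS}

row = {"id": 7, "display_name": "corn", "avatar_url": "/a/7.png",
       "email": "corn@example.com", "is_banned": False}
print(serialize_user(row))
# {'id': 7, 'display_name': 'corn', 'avatar_url': '/a/7.png'}
```

The design point: an allow-list, not a block-list. New columns added to the table stay private by default instead of leaking by default.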

The Liability Question

When a human developer manually discovers an undocumented API, they bear responsibility for how they use it. They made a choice to poke around, to craft requests, to extract data. But when an LLM agent does it autonomously, who's responsible? The user who gave the prompt? The model provider? The developer who left the endpoint unauthenticated?

The old ethical frameworks — robots.txt, terms of service, the assumption of obscurity — don't cleanly map onto this new reality. As agents become more capable of discovering and consuming undocumented APIs, the web development community needs new conventions for what constitutes ethical access versus exploitation.


#2496: Are Hidden API Endpoints Leaks or Just Plumbing?

Corn
Daniel sent us this one, and it's a good one — he's been playing with browser dev tools paired with Claude Code, and he noticed something that honestly should give every web developer pause. You open the Network tab, watch the XHR requests fly by, and suddenly you're staring at dozens of unauthenticated JSON endpoints sitting behind perfectly polished frontends. Stuff that was clearly never meant to be discovered by ordinary users, but nothing technically locks them down. So he's asking: what ARE these data streams from a security and architecture standpoint? Are they leaks, intentional public APIs, or just the natural plumbing of any modern single-page app? And if an LLM agent uncovers them, has it found an ethical hack, or just read the page properly?
Herman
Oh I love this question. And by the way, quick note — DeepSeek V4 Pro is generating our script today, so if anything comes out unusually eloquent, that's why.
Corn
Although I'd argue we set a high bar.
Herman
You're very kind. But back to the prompt — this is something I've been thinking about ever since Chrome DevTools MCP dropped in beta last September. That was the moment this stopped being a manual developer workflow and became something an agent can do at scale, in minutes, without ever getting bored.
Corn
Right, so let's unpack what these endpoints actually are first, because I think most people who aren't building single-page apps don't realize how much of the internet is just JSON flying around behind the curtain.
Herman
So architecturally, these endpoints are the plumbing. A modern single-page app doesn't render HTML on the server. The browser loads a JavaScript bundle, the JavaScript fires off fetch requests to a bunch of backend endpoints, gets JSON back, and renders it into the DOM. Every API call is visible in the Network tab by design. The frontend needs those endpoints to function. They're not hidden in any cryptographic sense. They're just not advertised.
Corn
The first answer is: these aren't necessarily leaks. A lot of them are just the application doing its job. The European Air Quality Index API that Rui Barros's agent discovered — that's a perfect example. The data was always meant to be public. There just wasn't a developer portal. No docs, no API key signup page, nothing. But the data itself? Completely public by intent.
Herman
That's the benign end of the spectrum. Barros built a reusable Claude Code slash command — he called it /discover-api — that opens any public data portal, interacts with filters and pagination, monitors every network request, identifies endpoint patterns, and outputs complete documentation plus working client code. All in one shot. What used to take a developer an afternoon now takes minutes.
Corn
Here's where it gets interesting, and I think this is what Daniel's really poking at. There's a spectrum. On one end, you've got genuinely public data with no developer portal. On the other end, you've got endpoints returning data that the site's UI would never show to an unauthenticated user, but the endpoint itself has no auth check. That's not plumbing. That's a security bug.
Herman
The OWASP framework backs this up. Their API Security Top Ten lists Broken Object Level Authorization as API1:2023 — that's the number one API vulnerability — and Broken Authentication as API2:2023. When an endpoint returns data without checking who's asking, that's a real vulnerability class, even if the URL was never documented anywhere.
Corn
The USPS had one of these. Predictable sequential IDs in an endpoint, no proper authorization check, and suddenly sixty million users' data is exposed. That wasn't a theoretical lab finding. That was production.
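The mechanics of that class of flaw fit in a few lines. A toy model, with invented data, of sequential IDs plus a missing authorization check:

```python
# Toy model of the USPS-style flaw: sequential IDs and no authorization
# check mean one loop dumps the whole table. All data here is invented.
DB = {i: {"id": i, "name": f"user{i}"} for i in range(1000, 1005)}

def endpoint(record_id):
    # Note what's missing: no requester parameter, no auth check at all.
    return DB.get(record_id)

# "Enumeration" is just counting upward and keeping whatever answers.
dump = [endpoint(i) for i in range(1000, 1100) if endpoint(i)]
print(len(dump))  # 5 — every record recovered without credentials
```

Unguessable IDs (e.g. `secrets.token_urlsafe`) raise the cost of enumeration, but the actual fix is the per-object authorization check.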
Herman
Stripe had a deprecated unauthenticated endpoint that could validate stolen credit cards. Deprecated, not removed, no auth — and suddenly you've got a card validation service for fraudsters. That's the kind of thing that keeps security engineers up at night.
Corn
The question is: when Claude Code or any LLM agent discovers one of these, what exactly has happened? Has it found a leak, or just read the page properly?
Herman
I think the answer depends on what the endpoint returns and whether there's any auth gate. If the endpoint returns data that any visitor to the site could see by clicking around in the UI, then the agent just read the page properly. It found the machine-readable version of what was already human-readable. That's not a hack. That's just... using a browser.
Corn
If the endpoint returns data that the UI intentionally hides — say, internal pricing data, other users' personal information, admin-level fields — and there's no authentication check on the endpoint itself, then the agent has uncovered a vulnerability. The data wasn't supposed to be accessible. The only thing "protecting" it was obscurity — the assumption that nobody would look at the Network tab and craft the right request.
Herman
Obscurity is not security. That's been a core principle for decades. But here's the twist that makes this moment different: LLM agents change the threat model from "one person poking around" to "automated, systematic endpoint enumeration." An agent never misses a request. It can paginate through hundreds of network entries. It can spot patterns a human would gloss over after twenty minutes of clicking.
Corn
The skill of manually reconstructing API calls from browser network logs — that's becoming obsolete, right? I remember watching developers right-click, copy as cURL, paste into a terminal, tweak parameters by hand. That whole workflow. Now the agent does it.
Herman
That's both a productivity win and a security nightmare, depending on which side of the fence you're on. Oz Tamir demonstrated this back in September twenty twenty-five on a meme template website. Claude identified the API endpoints, mapped categories and JSON payloads, and generated a roughly three-hundred-line Python API client for a completely undocumented, unauthenticated API. That's not a party trick. That's reverse engineering as a service.
Corn
Let's talk about the "copy as cURL" and "copy as fetch" features specifically, because these have been in every browser's dev tools for years, and they're the manual equivalent of what the agent is doing. You right-click any network request, and the browser exports it as a ready-to-run cURL command with all headers, cookies, and body included. It's been a standard developer workflow forever.
Herman
It's incredibly powerful. You get the exact request the browser made — every header, every cookie, the precise payload. If you're debugging an API integration, it's indispensable. But it also means that any authenticated request you make in your browser can be replayed by anyone who gets their hands on that cURL command. The session token is baked right in.
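Parsing such an export shows exactly what travels with it. A sketch that dissects an invented "Copy as cURL" string using only the standard library:

```python
import shlex

# A DevTools "Copy as cURL" export (invented example). Note the session
# cookie baked into the command — whoever holds this string holds the session.
curl_cmd = (
    "curl 'https://example.com/api/orders' "
    "-H 'Accept: application/json' "
    "-H 'Cookie: session=abc123secret'"
)

tokens = shlex.split(curl_cmd)
url = tokens[1]
headers = {}
for i, tok in enumerate(tokens):
    if tok == "-H":
        name, _, value = tokens[i + 1].partition(": ")
        headers[name] = value

print(url)      # https://example.com/api/orders
print(headers)  # {'Accept': 'application/json', 'Cookie': 'session=abc123secret'}
```

Real exports carry a dozen or more headers; this is what an agent walks through automatically when it replays a request.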
Corn
Which brings us to one of the scarier findings I've seen on this. Zenity Labs did a threat analysis of Claude in Chrome last December, and they flagged something that should make everyone pause. The agent has a read_network_requests tool that can expose OAuth tokens, session identifiers, private IDs — data the model would never otherwise see. And because Claude in Chrome is "always logged in by default," any action it takes carries the user's full authenticated session.
Herman
That's a novel attack surface. And they also found that the "ask before acting" safeguard — that little confirmation dialog — is a soft guardrail. Users get approval fatigue. They click "allow" without reading. Or worse, an attacker can craft a prompt injection that tricks the agent into doing something the user never intended.
Corn
Prompt injection plus dev tools access is a wild combination. Imagine a malicious ad on a page, or a compromised CDN script, or even a crafted API response — any of these could contain instructions that the agent reads and executes. Zenity called it "XSS-as-a-service" via prompt injection. The agent becomes an unwitting data thief, exfiltrating the very tokens and session data it was supposed to help you analyze.
Herman
This isn't hypothetical. That arXiv study from twenty twenty-five found that eighty-four percent of exposed API credentials appear in JavaScript resources — the exact files that browser DevTools and LLM agents inspect. So you've got credentials leaking into frontend code, and you've got agents that can systematically scan frontend code. Those two trends are on a collision course.
Corn
Let's pull back for a second and talk architecture, because I think there's a deeper shift happening that Daniel's question points to. For years, the implicit assumption in web development was: if it's not documented, it's private. The API docs were the contract. Everything else was internal implementation detail. But single-page apps break that assumption completely.
Herman
They do, because the frontend IS the documentation, whether you like it or not. Every endpoint the frontend calls is trivially discoverable. The Network tab reveals everything — URLs, parameters, response schemas, error messages. If your security model depends on nobody finding the endpoint, you've already lost.
Corn
Should developers treat all frontend-consumed APIs as public by default? Design every endpoint with proper auth, rate limiting, and field-level filtering, on the assumption that someone WILL find it and WILL try to abuse it?
Herman
That's the "public by default" mindset, and I think it's the only sane posture in twenty twenty-six. If your JavaScript can call it, assume the world can call it. Authenticate every request. Authorize every data access. Rate-limit aggressively. Strip sensitive fields at the server, not in the frontend. Because the frontend is not a security boundary.
Corn
Yet, we both know that's not how most applications are built. The backend team builds an internal API for the frontend team. Nobody documents it externally. Nobody puts it behind an API gateway with proper auth, because hey, it's just for our own frontend, right? And then the frontend ships, and suddenly that internal API is accessible to anyone who opens DevTools.
Herman
I've seen this pattern a hundred times. The classic example is the admin dashboard that's "hidden" behind a route that isn't linked anywhere in the UI, but the JavaScript bundle still contains the endpoint URLs, and the endpoints themselves have no server-side authorization. So anyone who reads the minified JS can find the endpoints and call them directly.
Corn
That billion-dollar legal AI tool that got reverse engineered last year — a hundred thousand plus exposed files via unauthenticated APIs and minified JavaScript leaking internals. That's not a small oversight. That's a systemic failure to treat frontend-consumed APIs as public surfaces.
Herman
It's not just about data exposure. There's a liability question here that I don't think anyone has really answered yet. When a human developer manually discovers an undocumented API, they bear responsibility for how they use it. They made a choice to poke around, to craft requests, to extract data. But when an LLM agent does it autonomously — who's responsible?
Corn
That's the liability inversion Daniel's getting at. Is it the user who gave the prompt? The model provider who built the agent? The developer who left the endpoint unauthenticated? This is uncharted legal territory.
Herman
It gets even murkier when you consider robots.txt. That's been the traditional ethical boundary for web crawlers for decades. It signals site owner intent — what you're allowed to scrape and what you're not. But most SPA APIs aren't listed in robots.txt at all. They're discovered dynamically through network inspection. The file never mentions them because the site owner never thought of them as "scrapable" resources.
Corn
The old ethical framework doesn't quite map onto the new reality. The consensus I've seen — and I think this is where most reasonable people land — is that the key boundaries are: respect rate limits, check terms of service, and don't bypass authentication. If an endpoint is unauthenticated and serves data that's publicly accessible through the normal UI, you're probably in the clear ethically, even if the site owner would prefer you didn't automate access.
Herman
If you're using an agent to enumerate endpoints at scale, finding ones that return data the UI hides, and then exploiting the lack of auth to extract data you couldn't get through normal use — that crosses a line. Even if the technical barrier was zero.
Corn
Let's talk about the dev tools themselves for a moment, because I think there's a generation gap here that's worth acknowledging. Older developers — and I'm including myself in this — we learned to use the Network tab as a debugging tool. You'd open it when something was broken. You'd look for the failing request, check the status code, inspect the response. It was reactive.
Herman
Whereas now, with LLM agents, the Network tab becomes a proactive discovery tool. You're not debugging. You're exploring. You're systematically mapping the API surface of an application you didn't build, extracting documentation that doesn't exist, and generating client code for endpoints that were never meant to be called by third parties.
Corn
The agents are better at it than humans. They don't get tired. They don't miss the subtle pattern in the forty-seventh request. They can paginate through hundreds of entries and spot that the product ID parameter is sequential and unauthenticated and returns full records even for IDs you haven't "purchased."
Herman
The Chrome DevTools MCP is what makes this seamless. It uses the Chrome DevTools Protocol — CDP — which is the same API that Chrome DevTools itself uses internally. So the agent isn't doing anything a human couldn't do. It's just doing it faster, more thoroughly, and with better pattern recognition. It can open pages, click buttons, fill forms, and monitor every XHR and fetch request in real time.
Corn
It's not just Chrome. Firefox has the same Network tab, the same copy-as-cURL functionality, the same inspectability. Safari's Web Inspector does too. Edge is Chromium-based so it's essentially identical to Chrome. This is a universal capability across every modern browser.
Herman
Which means there's no browser you can point users to that hides this information. The Network tab is a feature, not a bug. It exists because developers need it. But it fundamentally means that the client-server boundary in web applications is transparent. You cannot hide what the client sends and receives from a sufficiently motivated observer.
Corn
Where does this leave us practically? I think there are a few takeaways for different audiences. For developers building SPAs: treat your backend APIs as public infrastructure. Authorize every data access at the field level. Assume someone will find every endpoint and try every parameter combination.
Herman
Don't rely on minification or obfuscation to hide sensitive logic or credentials in your JavaScript. That arXiv study found eighty-four percent of exposed credentials in JS files. Minification doesn't help. It's trivial to beautify minified code, and LLM agents can read it either way.
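The kind of scan that surfaces those credentials is simple to sketch. The bundle text and key pattern below are invented for illustration; real scanners such as truffleHog or gitleaks ship hundreds of such rules:

```python
import re

# Sketch of scanning shipped JavaScript for leaked credentials.
# The minified bundle and the key inside it are invented examples.
bundle = """
const cfg={apiBase:"/api/v2",key:"sk_live_51Habcdef1234567890"};
fetch(cfg.apiBase+"/items",{headers:{Authorization:"Bearer "+cfg.key}})
"""

# Pattern loosely modeled on a common secret-key prefix style.
LEAK_PATTERN = re.compile(r"sk_live_[A-Za-z0-9]{10,}")

hits = LEAK_PATTERN.findall(bundle)
print(hits)  # ['sk_live_51Habcdef1234567890']
```

Note that minification changed nothing here: the regex matches the token regardless of whitespace or variable names, and an LLM agent reads minified code just as easily.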
Corn
For security researchers and bug bounty hunters: the LLM-plus-DevTools combination is a force multiplier. You can map an application's entire API surface in minutes, identify unauthenticated endpoints, and spot authorization gaps that would take hours to find manually. But with that power comes responsibility — responsible disclosure, respecting scope, not exfiltrating user data even if the endpoint lets you.
Herman
For everyone else — for the curious user who just discovered the Network tab exists — I think the ethical line is pretty clear. Looking is fine. The Network tab is there to be looked at. But what you do with what you find matters. If you discover an endpoint that returns other users' data with no auth check, the right thing to do is report it, not exploit it.
Corn
There's also a fascinating question here about whether LLM agents change the legal landscape around web scraping. The hiQ Labs versus LinkedIn case established that scraping publicly accessible data doesn't violate the Computer Fraud and Abuse Act, at least in the Ninth Circuit. But that was about data visible on public profiles. What about JSON endpoints that aren't linked anywhere but are technically unauthenticated?
Herman
That's the blurry line Daniel's asking about — "public data" versus "data you weren't supposed to find." And I don't think the courts have caught up. If an endpoint has no auth, no robots.txt entry, no terms of service prohibition, and it's called by the frontend code that runs on every visitor's machine — is it really private? Or is it public by operation of how the web works?
Corn
I lean toward "public by operation." If you ship code to my browser that calls an endpoint, and that endpoint doesn't check who I am, you've effectively published that endpoint. You may not have meant to. You may not have documented it. But you published it.
Herman
Yet, I think most developers would feel violated if someone systematically enumerated their internal APIs and built a competing product on top of them. Even if no authentication was bypassed. There's a gap between what's technically possible and what feels like fair play.
Corn
That gap is exactly where the interesting conversations happen. And it's not going away. As more applications are built as SPAs with JSON backends, and as LLM agents get better at automated discovery, the gap between "what the developer intended" and "what the architecture actually allows" is going to keep widening.
Herman
One thing I want to flag — and this came up in the Zenity analysis — is that the "ask before acting" guardrail in these agents is not a security boundary. It's a user experience feature. If an attacker can craft a convincing prompt that makes the dangerous action seem benign, the user will click "allow." And once the agent has access to your authenticated session and your network requests, the blast radius is enormous.
Corn
There's a user education piece here too. If you're using an LLM agent with browser access, you need to understand that you're giving it the keys to your authenticated sessions. Every site you're logged into. The agent can see it all.
Herman
That's not a reason to avoid using these tools. They're incredibly powerful. But it IS a reason to be thoughtful about what you ask them to do, and on which sites, and with which accounts logged in.
Corn
Let's circle back to Daniel's core question, because I want to make sure we actually answer it. What ARE these data streams? They're the natural architecture of single-page applications — JSON APIs that exist to serve the frontend. Some are public data with no developer portal. Some are internal APIs that were never meant to be discovered. And some are outright security vulnerabilities where the lack of authentication exposes data that should be protected.
Herman
Has the agent uncovered an ethical hack, or just read the page properly? It depends entirely on what the endpoint returns and whether there's any authorization check. If the data is already visible in the UI, the agent just found the machine-readable version. That's reading the page properly. If the data is hidden in the UI but accessible through the API with no auth, the agent found a vulnerability. That's an ethical hack — or at least, it should trigger an ethical disclosure.
Corn
"copy as cURL" and "copy as fetch" are the manual versions of this same capability, just slower. They've been in every browser for years. What's new is the automation and the scale. An LLM agent can do in minutes what used to take an afternoon, and it can do it systematically across hundreds of endpoints without missing anything.
Herman
The practical implication for scraping and reverse engineering is that the barrier to entry has dropped dramatically. You no longer need to be a developer who understands HTTP headers and cookie management and response parsing. You can ask an agent in plain English: "find me the API this site uses, document it, and give me a Python client." And it will.
Corn
Which means the defensive posture has to change too. If you're building a web application today, you have to assume that every endpoint your frontend calls will be discovered, documented, and potentially called by third parties. Auth on everything. Rate limiting on everything. Field-level filtering on the server. Never trust the client to hide what it shouldn't show.
Herman
If you're using an LLM agent with browser access, understand what you're giving it. It sees your authenticated sessions. It sees your tokens. It sees network requests you might not even know are happening. Treat that access with the same care you'd treat sharing your unlocked phone with someone.

And now: Hilbert's daily fun fact.
Corn
The average cumulus cloud weighs about one point one million pounds — roughly the same as a hundred elephants floating above your head.
What can listeners actually do with this? If you're a developer, spend an afternoon auditing your own application's network traffic. Open DevTools on your own product, click around, and look at every XHR and fetch request. Ask yourself: if someone called this endpoint directly, without going through our UI, what would they get? Would they need authentication? Would they get data that belongs to other users? If the answers make you uncomfortable, fix the endpoints before someone else finds them.
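That audit can be reduced to a checklist function: record what each endpoint returned when called with no credentials, then flag anything that answered 200 for a human to review. Endpoint names below are invented:

```python
# Sketch of the self-audit described above. You'd populate the list by
# replaying each Network-tab request with cookies and auth headers stripped.
def flag_unprotected(observations):
    """observations: list of (method, path, status_without_auth)."""
    return [(m, p) for m, p, status in observations if status == 200]

seen_in_devtools = [
    ("GET", "/api/public/air-quality", 200),  # fine if data is truly public
    ("GET", "/api/users/42/orders",    200),  # red flag: per-user data, no auth
    ("GET", "/api/admin/stats",        401),  # correctly locked down
]

for method, path in flag_unprotected(seen_in_devtools):
    print(f"review: {method} {path} answers 200 without credentials")
```

A 200 without credentials isn't automatically a bug — the air-quality case is fine — but every hit deserves the "what would a stranger get?" question.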
Herman
If you're a security researcher or just technically curious, try the manual workflow first. Open the Network tab on a site you use regularly. Filter to XHR and fetch requests. Look at the response data. Right-click and copy as cURL, then replay it in a terminal. Get a feel for what's visible. Then, if you want to see what the automated version looks like, try an agent with browser access — but do it on a site you control, or a public data portal where there's no ambiguity about what's fair game.
Corn
If you find something that looks like a genuine vulnerability — an unauthenticated endpoint returning private user data, for example — report it responsibly. Most companies have a security contact or a bug bounty program. Don't exploit it. Don't extract data beyond what's necessary to demonstrate the issue. The fact that it's technically accessible doesn't make it yours.
Herman
For the broader industry, I think we're going to see a shift toward what I'd call "API-first security posture." Treat every backend endpoint as if it's part of your public API surface, even if it was only ever meant for your own frontend. Rate-limit it. Monitor it for abuse. Because the distinction between "internal" and "external" APIs has collapsed. The browser erased it.
Corn
The one forward-looking question I keep coming back to is this: as LLM agents get better at automated discovery, and as more applications expose JSON backends, are we heading toward a world where "undocumented API" is an oxymoron? Where the very concept of a private endpoint is meaningless because anything reachable will be found, documented, and catalogued by agents within hours of deployment?
Herman
I think we're already there. The question isn't whether your endpoints will be discovered. The question is whether you've designed them to handle what happens when they are.
Corn
Thanks to our producer Hilbert Flumingtop for keeping this show running. This has been My Weird Prompts. You can find every episode at myweirdprompts.com. We'll catch you next time.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.