This episode tackles a frustration nearly every tech user knows: why can't services recognize a trusted workstation and leave you alone for weeks at a time? The answer is more layered than it seems. The security industry has spent two decades assuming every device is hostile, adopting zero-trust models designed for corporate networks and applying them to home desktops. That one-size-fits-all approach creates authentication fatigue—which ironically degrades security, as users stop scrutinizing prompts and just click through. The episode examines concrete alternatives: hardware-bound sessions via FIDO2 resident keys, Cloudflare's edge-proxy model that decouples authentication from applications, and a promising proposal called device-bound session tokens that would cryptographically tie sessions to a machine's secure enclave. The core tension is a misalignment of incentives: service providers bear the cost of breaches while users bear the cost of inconvenience, so providers over-authenticate. Until the industry integrates contextual trust signals—your fixed IP, your hardware token, your device's security chip—into a coherent policy, we'll keep proving we're ourselves.
#2835: Why Can't I Trust My Own Computer?
Why services keep asking you to sign in—and what it would take to fix it.
Episode Details
- Episode ID
- MWP-3004
- Published
- Duration
- 32:51
- Audio
- Direct link
- Pipeline
- V5
- TTS Engine
- chatterbox-regular
- Script Writing Agent
- deepseek-v4-pro
- Topics
- zero-trust, security, usability
AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.
Downloads
Transcript (TXT)
Plain text transcript file
Transcript (PDF)
Formatted PDF with styling
Never miss an episode
New episodes drop daily — subscribe on your favorite platform
#2835: Why Can't I Trust My Own Computer?
Daniel sent us this one — and I think it's going to resonate with basically everyone who's ever had Google ask them to sign in for the third time in a single afternoon. He's got a home workstation, fixed IP, YubiKey plugged in, and he's wondering why he can't just say to the services he uses every day, look, this machine is trusted, leave me alone for thirty days. Why isn't that a thing?
This is one of those questions where the more you stare at it, the more layers you find. Because on the surface, it seems like it should be trivial. Set a long-lived session cookie, bind it to the hardware, check the IP hasn't changed, and call it a day.
Yet here we are, authenticating into Gmail at ten in the morning and again at two in the afternoon like we're trying to break into our own house.
The fundamental tension is that the security industry has spent twenty years building on the assumption that every device is hostile until proven otherwise — and proven again, and proven again, and proven every two hours. The zero-trust model that everybody rushed to adopt basically treats every request as though it's coming from a public library terminal in a city you've never visited.
Which is the architectural equivalent of making everyone go through TSA in their own living room.
That's exactly what it is. But consumer services adopted the same posture because it's simpler to have one security model than two, and because the liability calculus tilts toward over-authentication.
If Google gets breached because someone's session was hijacked, that's a headline. If a million users are mildly annoyed by re-authentication prompts, that's not a headline. That's just Tuesday.
There's a name for this. The security community calls it authentication fatigue, and it's not just an annoyance — it actually degrades security. When people get prompted constantly, they stop paying attention to what they're approving. They just hit yes. The prompt becomes background noise.
Like the terms and conditions of your own identity.
There was a good piece on this from the SANS Institute a while back. They found that in organizations with aggressive re-authentication policies, users became measurably worse at spotting actual phishing attempts because the legitimate prompts had trained them to click through without reading. The security measure created the vulnerability.
The question becomes, what would a persistent trusted workstation model actually look like, and why isn't anybody building it?
Let's break it apart. What Daniel described — and I think what a lot of people want — is basically device-bound session persistence. You authenticate once with strong multi-factor, and the service says, I recognize this specific machine. I recognize this hardware-bound token. The IP is stable. I'm going to issue a session that lasts weeks, not hours.
The YubiKey sitting in the USB port is essentially saying, I am physically present, I am the same YubiKey that was here last time, please stop asking.
And here's where it gets technically interesting. The YubiKey absolutely can do this — there's a whole protocol called FIDO2 that supports something called resident keys, where the private key actually lives on the YubiKey itself and can be used for passwordless authentication. You touch the key, you're in. No password, no prompt, just presence.
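For readers who want to see what that looks like in practice, here's a minimal sketch of a WebAuthn registration request that asks the authenticator for a resident (discoverable) credential. The relying-party name, user details, and challenge handling are placeholders; a real flow would fetch the challenge from the server and send the resulting credential back for verification.

```typescript
// Minimal sketch: registering a FIDO2 resident (discoverable) credential in the browser.
// The relying-party ID, user info, and challenge below are placeholders; in a real flow
// the challenge comes from the server and the resulting credential is sent back to it.
async function registerResidentKey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      rp: { id: "example.com", name: "Example Service" },     // hypothetical relying party
      user: {
        id: new TextEncoder().encode("daniel@example.com"),    // stable user handle
        name: "daniel@example.com",
        displayName: "Daniel",
      },
      challenge: crypto.getRandomValues(new Uint8Array(32)),   // server-issued in practice
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],     // ES256
      authenticatorSelection: {
        residentKey: "required",        // the private key lives on the authenticator itself
        userVerification: "required",   // PIN or biometric touch on the key
      },
    },
  });
  if (!credential) throw new Error("Registration was cancelled");
  // Send the attestation response to the server for verification and storage.
  console.log("Registered credential ID:", (credential as PublicKeyCredential).id);
}
```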
That's not what most services implement, is it?
Most services implement what's called U2F, which is the simpler version. The YubiKey acts as a second factor on top of a password. And the session duration is entirely up to the service provider. Google, for example — if you're in the Advanced Protection Program, your sessions are aggressively shortened. They don't publish the exact numbers, but anecdotally, users report being signed out within hours, sometimes less.
Which is the exact opposite of what Daniel wants.
Here's the thing about Advanced Protection. It's designed for journalists, activists, political figures — people who are being actively targeted by nation-state adversaries. For those users, short sessions make sense because the threat model includes session hijacking via sophisticated means. But Google applies the same policy to everyone in the program regardless of their actual threat profile.
It's like selling everyone the same bulletproof vest, whether you're a war correspondent or a librarian.
Making them wear it to bed.
What about Cloudflare Access? Daniel mentioned they let you set a thirty-day session duration, and that actually works.
Cloudflare Access is the interesting counterexample because they're operating on a completely different architectural assumption. When you use Cloudflare Access, you're not authenticating to the application directly. You're authenticating to Cloudflare's reverse proxy, which then vouches for you to the application. The session is managed at the edge, and Cloudflare can enforce whatever duration the administrator sets.
They've essentially decoupled the authentication layer from the application layer.
And that decoupling is what makes flexible session policies possible. The application doesn't need to know anything about your session — it just trusts the proxy. Cloudflare can say, this user authenticated with a hardware token from a known IP twelve days ago, the policy says thirty days, let them through. The application sees a valid request and serves the content.
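As a rough sketch of that decoupling, here's what the application side can look like when it trusts the proxy instead of managing sessions itself. The header name and certificate URL follow Cloudflare Access's documented pattern, but the team domain and audience tag are placeholder values you'd replace with your own.

```typescript
// Minimal sketch of the "trust the proxy" pattern: the application never manages the
// session itself; it only verifies the signed assertion the edge proxy attaches to each
// request. Team domain and audience tag below are placeholders.
import { createRemoteJWKSet, jwtVerify } from "jose";
import type { IncomingMessage } from "node:http";

const TEAM_DOMAIN = "https://your-team.cloudflareaccess.com";   // placeholder
const POLICY_AUD = "your-application-audience-tag";              // placeholder
const JWKS = createRemoteJWKSet(new URL(`${TEAM_DOMAIN}/cdn-cgi/access/certs`));

async function verifyEdgeSession(req: IncomingMessage): Promise<string> {
  const assertion = req.headers["cf-access-jwt-assertion"];
  if (typeof assertion !== "string") throw new Error("No proxy assertion present");

  // The proxy already decided how long the session lasts (e.g. 30 days); the app only
  // checks that this request carries a valid, unexpired assertion from that proxy.
  const { payload } = await jwtVerify(assertion, JWKS, {
    issuer: TEAM_DOMAIN,
    audience: POLICY_AUD,
  });
  return payload.email as string;  // identity vouched for by the proxy
}
```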
Which raises the obvious question: why can't Google do that?
They just don't, for consumer accounts. Part of it is that Google's authentication architecture is deeply intertwined with their account system, which spans dozens of services. Your Gmail session, your YouTube session, your Google Drive session — they're all tied to the same account state. Changing the session policy for one changes it for all of them, and Google has made a calculated decision that the security risk of long sessions across that entire surface area is unacceptable.
Even though the actual risk to most users is vanishingly small.
Let's put some numbers on that. What's the actual threat that short sessions protect against? The main one is session token theft. If someone steals your session cookie, they can impersonate you until that cookie expires. Short sessions limit the window of damage.
How does someone steal your session cookie from a desktop workstation in your own home?
The most common vectors are malware, cross-site scripting attacks, or man-in-the-middle on a compromised network. If you're on a fixed IP, using a modern browser with site isolation, not downloading random executables, and you're not being targeted by a sophisticated adversary, the actual probability of session theft is extraordinarily low. Not zero, but low enough that the tradeoff calculation should probably favor convenience.
Yet the industry as a whole has decided that convenience is the enemy of security, rather than trying to find the point where they actually balance.
There's a concept in security economics called the misalignment of incentives. The service provider bears the reputational and legal cost of a breach, but the user bears the cost of inconvenience. When those costs are borne by different parties, the provider will always over-invest in security measures that inconvenience the user because the provider doesn't feel that inconvenience.
Which is why my bank makes me change my password every ninety days even though NIST officially recommended against that practice back in twenty-seventeen.
NIST Special Publication 800-63B explicitly says, and I'm quoting from memory here, verifiers should not require memorized secrets to be changed arbitrarily. They specifically call out that frequent password changes lead to weaker passwords, not stronger ones, because users just append a number and call it a day.
Password one, password two, password three.
Yet banks still do it because their compliance frameworks haven't caught up, and because it feels like good security even though the evidence says otherwise.
We've established that the industry is over-rotated on authentication. What can someone actually do about it at home?
Let's walk through the practical options, and I want to be honest about what works and what doesn't. The first thing to understand is that you can't force a service to change its session policy. Google, Microsoft, your bank — they set the rules, and you either accept them or you don't use the service.
Which is not really a choice for most people.
So the practical approach is to minimize the friction within the constraints they've set. Hardware tokens help enormously. A YubiKey can turn a thirty-second authentication challenge into a one-second tap. That doesn't reduce the frequency of prompts, but it dramatically reduces the cost of each prompt.
It's the difference between being asked to show your ID and being asked to fill out a twenty-page form every time.
That's where I think Daniel's instinct was correct — get the YubiKey, keep it plugged in, make the challenge trivial. The disappointment is that it didn't reduce the frequency.
What about password managers? 1Password has that setting he mentioned, where even on the minimal security setting you have to enter your master password every time you restart the computer.
1Password is actually a fascinating case study in this tension. They've made a deliberate design choice that the vault should lock frequently because the vault contains everything. Your passwords, your credit cards, your notes, your two-factor tokens. If someone gets into your 1Password vault, the damage is catastrophic. So they bias toward locking.
Even though on a desktop workstation in a private home, the threat of someone getting physical access to an unlocked vault is basically zero.
1Password doesn't know it's a desktop workstation in a private home. From their perspective, it's just a device running their client. They don't have access to the contextual signals that would let them make a nuanced decision — is this device in a trusted location, is it behind a locked door, is there someone else in the room.
Which brings us back to the fundamental problem. The services don't have enough context to make good decisions, so they make conservative ones.
Your operating system knows if you're at home. Your browser knows if you've used this device before. Your YubiKey knows if it's been unplugged. The information is all there, scattered across different layers of the stack, but nobody's integrating it into a coherent trust signal.
We're drowning in context that nobody's reading.
There's a proposal that's been floating around the standards world for a while called device-bound session credentials, or sometimes device-bound session tokens. The idea is that instead of issuing a session cookie that lives in the browser and can theoretically be stolen, you bind the session to a cryptographic key that lives in the device's secure enclave — the TPM on a Windows machine, the Secure Enclave on a Mac, or a hardware token like a YubiKey.
The session isn't just a cookie, it's a cookie plus a cryptographic proof that this specific device is the one that created the session.
And if someone steals the cookie, it's useless to them because they can't produce the corresponding proof. The session is tied to the hardware. That would let services issue much longer sessions without increasing the risk, because the attack surface for session hijacking essentially disappears.
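To make the idea concrete, here's an illustrative sketch of the underlying primitive using the browser's WebCrypto API: a non-extractable key that signs a fresh server challenge on demand. The actual device-bound session credentials proposal defines its own protocol and ties the key to platform hardware; this only shows the possession-proof idea, and whether the key ends up in a TPM or enclave depends on the platform.

```typescript
// Illustrative sketch of the idea behind device-bound sessions: alongside the normal
// session cookie, the browser holds a non-extractable signing key and proves possession
// of it when asked. Not the real wire protocol, just the core primitive.
async function createDeviceKey(): Promise<CryptoKeyPair> {
  return crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false,                          // non-extractable: script can use it but never read it
    ["sign", "verify"],
  );
}

async function proveSession(keys: CryptoKeyPair, serverChallenge: string): Promise<string> {
  // Sign the server's fresh challenge; a stolen cookie alone can't produce this proof.
  const signature = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keys.privateKey,
    new TextEncoder().encode(serverChallenge),
  );
  return btoa(String.fromCharCode(...new Uint8Array(signature)));
}
```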
This isn't hypothetical? This actually works?
The primitives all exist. WebAuthn, which is the browser API for FIDO2, supports device-bound keys. The problem is adoption. Service providers have to implement it on their end, and most of them are still on the older U2F standard or even just TOTP codes. The gap between what's technically possible and what's actually deployed is enormous.
The authentication equivalent of we have the technology to put a man on Mars but we're still flying biplanes.
There's a second problem, which is that device binding creates a new failure mode. If your device dies, or you wipe it, or you lose the YubiKey, you lose all your sessions. Every service has to have a recovery path, and recovery paths are themselves a security vulnerability.
You trade session hijacking risk for account recovery risk.
Account recovery is already the weakest link in most security systems. If you look at the major breaches of the last few years, a huge percentage of them didn't involve cracking passwords or stealing sessions. They involved social engineering the recovery process.
Because the recovery process is designed for humans, and humans are persuadable.
Or the recovery process involves sending a code to a phone number, and SIM swapping exists. Or it involves security questions, and the answers are findable on social media. Every recovery path is a backdoor, and device binding forces you to use that backdoor more often.
What's the actual practical advice for someone who just wants to be left alone by their own computer?
I think there are a few things that actually work, and I want to be concrete about them. First, if you're using Google services and you're not in a high-risk profession, get out of the Advanced Protection Program. The standard two-factor with a YubiKey gives you strong security without the aggressive session timeouts.
That's a simple one that I think a lot of people overlook. They sign up for Advanced Protection because it sounds like the best security, without realizing it's designed for a threat model they don't have.
Second, browser profiles matter. If you use Chrome or Firefox and you're signed into the browser itself, the browser can store and sync your authentication state. When a service asks you to re-authenticate, the browser can often handle it automatically if you've saved the credentials. It's not reducing the prompts, but it's making them invisible.
The prompt happens, but you don't see it.
And third, for services that support it, use app-specific passwords or API tokens for things that don't need a full interactive session. If you're checking email via a desktop client, generate an app password. Those sessions tend to be much longer-lived because they're not going through the interactive authentication flow.
What about the nuclear option? Running your own identity provider?
I was going to get to that. If you're technically inclined and you really want control over your session policies, you can run something like Authentik or Keycloak at home. These are open-source identity providers that let you set your own session durations, your own MFA policies, your own trust rules. Any service that supports SAML or OIDC can delegate authentication to your provider.
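For the homelab crowd, here's a minimal sketch of what delegating login to a self-hosted provider looks like from a relying application: standard OIDC discovery plus an authorization request. The issuer, client ID, and redirect URI are placeholders; once this is in place, session lifetime and MFA rules are whatever you configure on your own provider.

```typescript
// Minimal sketch of delegating a service's login to a self-hosted OIDC provider such as
// Keycloak or Authentik. Issuer URL, client ID, and redirect URI are placeholders.
const ISSUER = "https://id.home.example";             // your self-hosted provider (placeholder)
const CLIENT_ID = "my-app";                            // registered client (placeholder)
const REDIRECT_URI = "https://app.home.example/callback";

async function buildLoginUrl(): Promise<string> {
  // Standard OIDC discovery: every compliant provider publishes its endpoints here.
  const res = await fetch(`${ISSUER}/.well-known/openid-configuration`);
  const config = await res.json();

  const params = new URLSearchParams({
    response_type: "code",
    client_id: CLIENT_ID,
    redirect_uri: REDIRECT_URI,
    scope: "openid profile email",
    state: crypto.randomUUID(),                        // CSRF protection; verify on callback
  });
  return `${config.authorization_endpoint}?${params}`;
}
```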
Which services support that?
That's the catch. Enterprise-oriented services support it. Google Workspace supports it if you're on a paid plan. A lot of developer tools support it. But consumer Gmail? SAML and OIDC federation are enterprise features, and the consumer services have no interest in letting you bring your own identity.
You can build a beautiful authentication castle at home, but almost nobody will let you use it as your ID.
There's a deeper point here about the direction the industry is moving. We're seeing a bifurcation between enterprise authentication, which is becoming more flexible and context-aware, and consumer authentication, which is becoming more rigid and one-size-fits-all. Enterprises get conditional access policies, risk-based authentication, customizable session durations. Consumers get whatever the product manager decided was least likely to generate support tickets.
Because enterprises have procurement power. If Microsoft's conditional access doesn't work for a Fortune 500 company, that company threatens to leave. If Gmail's session timeouts annoy a million individual users, what are they going to do, switch to Yahoo?
The enterprise tools are genuinely impressive. Microsoft's conditional access can evaluate device health, location, sign-in risk, application sensitivity, and user risk level in real time, and make a decision about whether to prompt for MFA, block access entirely, or grant a longer session. All of that exists. It's just not available to you unless you're on an enterprise plan.
Which costs what, twenty dollars a month per user?
For Microsoft 365 E5, you're looking at around fifty-seven dollars per user per month. For Google Workspace Enterprise, it's similar. The technology exists, but it's priced for organizations, not individuals.
Effectively, the good authentication experience is a luxury good.
That's exactly what it is. And it's not just authentication. The same pattern plays out across security. Enterprise users get phishing-resistant MFA, hardware security keys with attestation, device health checks, risk-based step-up. Consumer users get SMS codes and frequent re-authentication.
Which creates this weird inverted risk profile where the people who actually need strong, flexible security — journalists, activists, dissidents — are stuck with consumer-grade tools, while the people who get the good stuff are corporate employees whose biggest threat is a competitor reading their quarterly earnings spreadsheet.
There's a project I've been watching called BeyondCorp, which is Google's internal security model that they've been gradually productizing. The core idea is that you don't trust the network at all — every access request is evaluated based on the device, the user, and the context. But the key insight is that when you have enough context, you can make authentication nearly invisible. If the system has high confidence that you are who you say you are, it doesn't prompt you.
Google employees get this experience.
Google employees get this experience. The rest of us get the login prompt.
To summarize the state of play: the technology for persistent trusted sessions exists, the cryptographic primitives are standardized, and the enterprise market already has flexible policy engines. The reason consumers don't get it is that consumer services have no incentive to offer it and every incentive to over-authenticate.
The reason they have no incentive is that the costs of over-authentication are borne entirely by the users, while the costs of under-authentication are borne by the service provider. Until that calculus changes — either through competition, regulation, or a major shift in how services think about security UX — we're going to keep getting prompted.
Let's talk about what a better world would look like. If you were designing an authentication system for consumers that actually respected the idea of a trusted workstation, what would you build?
I'd start with device attestation. The service would cryptographically verify that the device making the request is the same device that was previously authenticated, that it has a secure enclave, that the operating system hasn't been tampered with, and that the browser is up to date. That's all possible today with the WebAuthn standard and platform authenticators.
Step one, prove it's the same machine.
Step two, location continuity. If the device was at a residential IP in Jerusalem yesterday and it's at the same IP today, that's a strong signal that nothing has changed. If it suddenly appears in Lagos, that's a signal that something might be wrong and you should re-authenticate.
Step three is behavioral continuity. This gets into the territory we touched on in a previous episode — typing patterns, mouse movement, application usage rhythms. If the person using the device behaves like the same person who's been using it for the last six months, that's an additional confidence signal.
The combination of those three — device attestation, location continuity, behavioral continuity — gives you enough confidence to issue a thirty-day session without being reckless.
Here's the key design principle: the session should be revocable, not short. If something changes — the device reports a different secure enclave, the location jumps, the behavior shifts — you revoke the session immediately. But you don't preemptively expire it just because the clock ticked over.
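Sketched as code, the policy the hosts are describing might look something like this. The signals and thresholds are purely illustrative assumptions, not any vendor's actual logic; the point is that nothing in the decision depends on how old the session is.

```typescript
// A sketch of "revocable, not short": the session stays alive as long as the contextual
// signals keep agreeing, and is revoked or stepped up the moment one of them changes.
// Signals and thresholds here are illustrative, not a real product's policy.
interface SessionContext {
  deviceAttestationValid: boolean;   // same secure enclave / hardware key as at sign-in
  lastKnownIp: string;
  currentIp: string;
  behaviorScore: number;             // 0..1 similarity to the account's usual patterns
}

type Decision = "continue" | "step-up" | "revoke";

function evaluateSession(ctx: SessionContext): Decision {
  // Hard failures end the session immediately, regardless of its age.
  if (!ctx.deviceAttestationValid) return "revoke";

  // A location jump isn't proof of compromise, so ask for a fresh factor instead.
  if (ctx.currentIp !== ctx.lastKnownIp) return "step-up";

  // Mild behavioral drift also triggers a step-up rather than a silent logout.
  if (ctx.behaviorScore < 0.5) return "step-up";

  // Otherwise the session simply continues; there is no clock-driven expiry.
  return "continue";
}
```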
Revocable, not short. That's the phrase that should be on a billboard outside every authentication product manager's office.
There's a parallel here with how we handle physical security. When you give someone a key to your house, you don't change the locks every two hours just in case they've been compromised. You trust the key until you have a reason not to. Digital authentication should work the same way.
The front door metaphor is actually perfect. My house key doesn't have a session timeout. It works until I lose it or someone steals it, at which point I rekey the locks. The default state is trust, not suspicion.
The counterargument is that digital keys are easier to steal than physical keys, which is true. But device binding changes that calculus. A session bound to a secure enclave is not meaningfully easier to steal than a physical key. In fact, it might be harder, because extracting a key from a TPM requires physical access and specialized equipment.
We're imposing restrictions designed for the softest possible session token — a browser cookie — on sessions that could be cryptographically hardened to be nearly as secure as a physical key.
That's the part that frustrates me, because it's not a technical limitation. It's an institutional failure of imagination, or an institutional failure of incentives, or both.
What about the regulatory angle? Is there any movement toward requiring services to offer flexible authentication policies?
The regulatory conversation right now is focused on the opposite direction — requiring stronger authentication, not more flexible authentication. The FTC has been pushing for MFA mandates in certain sectors. The EU's PSD2 rules mandate strong customer authentication for online payments. But nobody's talking about authentication fatigue as a consumer protection issue.
Even though it arguably is. If authentication fatigue causes people to disable MFA or use weaker factors, the regulation is actually undermining its own goals.
That's the perverse outcome that nobody wants to talk about. The harder you make authentication, the more people will route around it. They'll use the same password everywhere. They'll disable MFA on accounts where it's optional. They'll stay logged in on shared devices because logging out means logging back in. Every security measure that ignores usability eventually becomes a vulnerability.
The sloth perspective on this — and I feel qualified to speak on behalf of sloths here — is that security should be ambient. It should happen in the background, at the pace of the user's actual life, not interrupt that life every two hours with a pop-up.
I like that framing. The security equivalent of a thermostat rather than a smoke alarm.
A thermostat keeps the temperature stable without you having to think about it. A smoke alarm screams at you when something is actually on fire. Right now, consumer authentication is all smoke alarms and no thermostats.
The smoke alarms go off every time you make toast.
Where does this leave Daniel and everyone else who just wants their workstation to be trusted? What's the actual bottom line?
The honest answer is that for most consumer services today, there is no way to get the experience he wants. The session policies are set server-side, and you can't override them. What you can do is minimize the impact. Hardware token that stays plugged in. Password manager that fills credentials automatically. Browser profile that syncs authentication state. App-specific passwords where available. And if you're technically inclined, self-hosted identity for the services that support federation.
The longer-term hope is that device-bound sessions become standard, and services start trusting hardware attestation enough to issue long-lived sessions.
I think that's inevitable, honestly. The pressure is building from multiple directions. Passkeys are pushing the industry toward device-bound credentials. The enterprise tools are showing what's possible. And users are getting fed up. Eventually, some major consumer service is going to make persistent trusted sessions a selling point, and the rest will follow.
The first one to do it wins a lot of goodwill.
Probably a lot of users. Imagine if Gmail announced tomorrow, we now support device-bound sessions with thirty-day durations for users with hardware security keys. The migration would be enormous.
Though knowing Google, they'd launch it, it would work beautifully for six months, and then they'd deprecate it for a newer, less functional version.
The Google product lifecycle in one sentence.
Alright, let's land this. The prompt asked for ways home users can deal with authentication fatigue while maintaining good security. We've established that the ideal solution — persistent trusted sessions — isn't widely available yet. But the practical mitigation is a layered approach: hardware token for fast challenges, password manager for invisible credential filling, app-specific passwords for long-lived service access, and avoiding over-securing accounts with threat models you don't actually face.
If you're the kind of person who runs a homelab, look into Authentik or Keycloak. It won't solve the Gmail problem, but for anything that supports OIDC or SAML, you can set your own policies and stop being at the mercy of someone else's idea of what your threat model should be.
The deeper takeaway is that the security industry has a usability problem that it's been avoiding for decades, and authentication fatigue is just the most visible symptom. The technology for better authentication exists. The incentives to deploy it for consumers don't.
Until those incentives shift, we're all going to keep tapping our YubiKeys and wondering why the computer doesn't trust us yet.
We built a world where every device is presumed hostile and every user is presumed compromised until proven otherwise, twice an hour, forever.
It's the security equivalent of guilty until proven innocent, and the proof expires every two hours.
And now: Hilbert's daily fun fact.
Hilbert: In the interwar period, a geometer surveying the Moeraki Boulders on New Zealand's South Island discovered that the cracks in the largest boulders formed a near-perfect Voronoi tessellation, a pattern previously only documented in the dried mud of African watering holes visited by hippopotamuses. The boulders, in other words, were cracking the same way hippo mud cracks, seventy million years after they formed.
The rocks were doing hippo geometry before hippos existed.
I have so many questions and I know none of them have answers.
This has been My Weird Prompts. If you enjoyed this episode and you want more deep dives into the strange intersections of technology and everyday life, leave us a review wherever you get your podcasts. It helps people find the show.
Our producer is Hilbert Flumingtop. Our prompts come from Daniel. I'm Corn.
I'm Herman Poppleberry. We'll be back.
This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.