Daniel sent us this one — he's been down the IMSI catcher rabbit hole and wants to separate the spy-thriller stuff from the actual engineering. How widespread are these things really, both in authorized law enforcement use and in the grayer world of rogue operators? Are there any credible estimates of how many illicit stingrays might be running in a major city at any given time? What's the technical principle that lets a suitcase-sized box impersonate a cell tower, and have the GSM to LTE to 5G transitions actually closed those gaps or just shifted them? And from the user side, can you ever really know your phone is talking to one — the detection apps, signal anomalies, mysterious 2G fallbacks — how trustworthy are those tells? Plus he's asking about documented cases near embassies, protests, government districts. Curious, not paranoid, which is the right posture for this.
That's exactly the right framing, because this topic attracts two kinds of people — people who think every streetlamp has a stingray bolted to it, and people who assume this is all classified nonsense nobody needs to worry about. The truth is somewhere in between and way more interesting.
So let's start with what an IMSI catcher actually is, because the name itself tells you the mechanism. IMSI — International Mobile Subscriber Identity. It's the unique identifier baked into your SIM card. Your phone broadcasts this when it's looking for a tower to connect to. The catcher exploits something fundamental about how cellular networks were designed — your phone is the one that has to prove its identity first, before the tower proves anything.
The asymmetry is built in from the start. The phone says "here's who I am" and the tower doesn't have to reciprocate until later in the handshake, if at all.
And that's not a bug in the traditional sense — it was a design choice made decades ago when the threat model was completely different. Nobody was walking around with software-defined radios in backpacks in 1991. The assumption was that base stations are expensive, fixed infrastructure operated by licensed carriers. Impersonating one seemed absurd.
Which makes it one of those security problems where the vulnerability is the architecture itself, not a patchable flaw.
So here's the core trick. A stingray is essentially a fake base station. It broadcasts a signal that's stronger than the legitimate towers in the area, and phones are programmed to connect to the strongest signal. Your phone sees this thing and thinks "oh, great signal, I'll camp on this cell." The stingray then tells your phone to either hand over its IMSI directly, or in more sophisticated versions, it can downgrade your connection to 2G and then intercept calls and texts.
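That selection logic is simple enough to sketch. This is a toy model, not a real baseband stack — the class names, cell IDs, and signal numbers are all invented — but it captures the key point: the phone's only criterion is signal strength, and it has no way to know which transmitter is legitimate.

```python
# Toy model of idle-mode cell selection: the phone camps on the strongest
# signal it can hear. Names and numbers are illustrative, not a real stack.
from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: str
    rssi_dbm: float   # received signal strength in dBm (higher = stronger)
    legitimate: bool  # unknown to the phone -- it cannot tell the difference

def select_cell(visible_cells):
    """The phone's only selection criterion is signal strength."""
    return max(visible_cells, key=lambda c: c.rssi_dbm)

towers = [
    Cell("carrier-A-0231", rssi_dbm=-95, legitimate=True),
    Cell("carrier-A-0417", rssi_dbm=-88, legitimate=True),
    # A catcher parked nearby simply transmits louder than everything else.
    Cell("rogue-0001", rssi_dbm=-60, legitimate=False),
]

print(select_cell(towers).cell_id)  # prints "rogue-0001"
```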
Downgrade to 2G — that's the part that sounds like it should have been fixed by now. GSM is ancient.
GSM is from the early nineties, and its encryption — A5/1 — was deliberately weakened and is now crackable in real time with commodity hardware. There's a well-known project where researchers cracked it using precomputed rainbow tables and a setup that cost less than a thousand dollars. And here's the thing, Corn — the downgrade attack still works because phones have to maintain backward compatibility. Your shiny 5G phone still has a 2G radio in it, and it'll happily drop down to GSM if a fake tower tells it that's all that's available.
The transition from GSM to LTE to 5G didn't close the gap so much as add more locks on doors while leaving the basement window wide open.
That's the perfect metaphor. Let me walk through what actually improved and what didn't. LTE introduced mutual authentication — the network has to prove its identity to the phone using a key derived from the SIM. That was a genuine step forward. But the problem is that an IMSI catcher doesn't need to complete that authentication to be useful. Just getting the phone to respond with its IMSI — or the temporary identifier, the TMSI — is enough for location tracking. And if the catcher can force a downgrade to 2G before authentication happens, mutual authentication never kicks in.
It's still useful even in its dumbest mode — just collecting identifiers and signal strength to track who's in an area.
And 5G actually did introduce some meaningful improvements here. The 5G standard includes something called SUCI — Subscription Concealed Identifier — which encrypts the permanent identifier so that even if a fake base station asks for it, it gets back ciphertext that's tied to the home network's public key. But adoption has been patchy, and many carriers implemented it late or incompletely.
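The shape of the SUCI scheme is worth seeing concretely. Real 5G uses ECIES profiles (X25519 or P-256 elliptic-curve crypto) as specified in 3GPP TS 33.501; the hash-based toy below is emphatically not real cryptography and the key is a stand-in string — it only illustrates the structural idea that a fresh ephemeral value makes two concealments of the same subscriber unlinkable to an eavesdropper.

```python
# Toy illustration of the SUCI idea: conceal the permanent identifier using
# the home network's public key plus fresh ephemeral randomness, so a fake
# base station asking for the identity gets unlinkable ciphertext each time.
# NOT real crypto -- real 5G uses ECIES per TS 33.501.
import hashlib
import secrets

HOME_NETWORK_KEY = b"home-network-public-key"  # stand-in for a real pubkey

def conceal(msin: str) -> bytes:
    eph = secrets.token_bytes(16)  # fresh ephemeral value per attach
    keystream = hashlib.sha256(HOME_NETWORK_KEY + eph).digest()
    ct = bytes(a ^ b for a, b in zip(msin.encode(), keystream))
    return eph + ct  # SUCI-like blob: ephemeral part + concealed identifier

a = conceal("0123456789")
b = conceal("0123456789")
print(a != b)  # prints True: same subscriber, different ciphertext each time
```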
What's the actual prevalence look like? Daniel asked for credible estimates, which seems like the hardest part of this question.
It is, because we have to split this into three categories. Category one — authorized law enforcement use. This is the most documented, ironically, because of court cases and FOIA requests. The Harris Corporation — now L3Harris — has been selling stingray devices to US law enforcement since the early 2000s. By 2015, the ACLU had identified at least 60 agencies across 23 states that owned these devices. And those are just the ones we know about through public records.
That's almost certainly an undercount given how NDAs around this technology work.
Right, the FBI required local police departments to sign non-disclosure agreements that even prevented prosecutors from knowing the full capabilities of the devices. In some cases, prosecutors were instructed to drop charges rather than disclose how evidence was gathered. There was a case in Tallahassee in 2014 where the state attorney's office explicitly cited the nondisclosure requirements as a reason for dropping charges.
We have a decent floor for authorized use — at least dozens of agencies, probably hundreds of devices — but the ceiling is unknown by design.
Category two is where it gets murkier — unauthorized use by private actors, foreign intelligence, and what you might call semi-authorized gray-area deployments. This is where the Washington, DC discoveries become relevant. In 2014, researchers driving hardened phones around the capital reported rogue cell sites near the White House and various embassies. Then the Department of Homeland Security ran a sensor pilot in 2017 — disclosed in a 2018 letter to Congress — that confirmed anomalous activity consistent with IMSI catchers around the National Mall, near the White House, the Capitol, and embassy row. DHS never publicly attributed the devices, but the implication was foreign intelligence services.
That's the DHS pilot that used those suitcase-sized spectrum analyzers, right? They weren't even looking for stingrays specifically at first.
Yes, it was part of a broader spectrum monitoring program. They were mapping RF activity and these anomalies popped up — signals that looked like cell towers but weren't registered to any carrier and were positioned in locations where no tower should be. The technical signature was unmistakable. You see a GSM or LTE base station signal coming from a vehicle or a briefcase at street level, not from a mast or rooftop.
That's the detection problem in a nutshell — you're looking for something that's designed to look exactly like legitimate infrastructure, and the only giveaway is physical implausibility.
Which brings us to the Norwegian case. In December 2014, the newspaper Aftenposten reported detecting what appeared to be multiple IMSI catchers operating near the Storting — the Norwegian parliament — and other government buildings in Oslo. Norway's National Security Authority verified suspicious signals, though the police security service's follow-up was more equivocal, allowing that some findings might have innocent explanations. Still, the technical analysis pointed to devices logging the identifiers of parliament members and staff, and no one ever publicly attributed them.
We've got confirmed rogue deployments in two national capitals, both documented by government agencies, both unattributed. That's already past the "maybe it's all theoretical" threshold.
That's just what's been caught and made public. The detection asymmetry is enormous. A stingray is a receive-and-transmit device that operates for maybe hours at a time and then disappears. To catch one, you need persistent spectrum monitoring in exactly the right place at the right time. Most cities have zero monitoring infrastructure for this.
Daniel's question about credible estimates for a major city — we basically can't answer it with precision, but we can say the confirmed cases suggest it's not zero, and probably not rare.
Researchers at the University of Washington ran a study in 2017 — the SeaGlass project — where they put detection sensors in ride-share vehicles and drove them around Seattle for a couple of months. They found signals they believed to be IMSI catchers, including near military and federal facilities, but couldn't confirm attribution. The best estimate I've seen from people who work in this space is that in a major Western capital, you're probably looking at single digits of devices operating at any given time from foreign intelligence services, plus whatever domestic law enforcement is running.
Single digits doesn't sound like much until you consider that one device near a protest or an embassy can collect tens of thousands of identifiers in an afternoon.
That's the thing — you don't need blanket coverage. You deploy these surgically. A protest, a diplomatic reception, a defense contractor's headquarters, a journalist meeting a source. The device sits in a parked car or a backpack for a few hours, hoovers up identifiers, and then the operator drives away. The data is analyzed later to map relationships, identify who was present, track movements over time.
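The analysis side of that workflow is embarrassingly simple. Given logs of identifiers collected at a few deployments, set intersection already reveals who was present at multiple sensitive events — the site names and identifiers below are invented for illustration.

```python
# Sketch of why even "dumb" identifier collection is valuable: a simple
# intersection over per-deployment logs maps who keeps turning up.
# All data here is invented.
from functools import reduce

observations = {
    "protest_may_12":    {"imsi_a", "imsi_b", "imsi_c", "imsi_d"},
    "embassy_jun_03":    {"imsi_b", "imsi_e", "imsi_f"},
    "courthouse_jun_20": {"imsi_b", "imsi_c", "imsi_g"},
}

# Identifiers seen at every deployment -- prime candidates for follow-up.
seen_everywhere = reduce(set.intersection, observations.values())
print(sorted(seen_everywhere))  # prints ['imsi_b']
```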
Let's talk about the user-side detection question, because this is where I think a lot of the consumer-facing advice gets wobbly. SnoopSnitch, AIMSICD, looking for 2G fallbacks — how much can you actually trust those signals?
SnoopSnitch is an Android app from Security Research Labs in Berlin — Karsten Nohl's group, the same researchers behind much of the public GSM cracking work. It's one of the more rigorous attempts at user-side detection. The way it works is clever — it collects baseband logs from Qualcomm chipsets and looks for anomalies. Things like a base station that refuses to hand over to 3G or 4G, or one that sends unusual configuration parameters, or a cell that appears and disappears suddenly in a pattern inconsistent with normal network operation.
It requires a rooted phone and a Qualcomm chipset with a specific firmware, right?
Yes, and that's the first major limitation. It's not something a casual user can install. Even on supported hardware, the false positive rate is hard to characterize. A misconfigured legitimate cell site can look suspicious. Temporary network maintenance can trigger alerts. And more importantly, a sophisticated IMSI catcher can be tuned to avoid triggering the common heuristics.
It's like antivirus in the 1990s — it catches the lazy ones but the targeted attacker can evade it.
AIMSICD — the Android IMSI Catcher Detector — takes a somewhat different approach. It maintains a database of known legitimate cell towers and their locations, and flags new towers that appear where none should be. But maintaining that database is a community effort, coverage is spotty outside a few major cities, and the project has been dormant for years.
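The database-lookup approach reduces to two checks: is this cell known at all, and is it where the database says it should be? The tower IDs, coordinates, and 5 km threshold below are invented for illustration.

```python
# Sketch of the AIMSICD-style check: compare an observed cell against a
# database of known towers and flag unknowns or implausible positions.
# Cell IDs, coordinates, and the distance threshold are all made up.
import math

KNOWN_TOWERS = {  # cell_id -> (lat, lon) of the registered site
    "310-410-1021": (47.6062, -122.3321),
    "310-410-1022": (47.6100, -122.3400),
}

def distance_km(p, q):
    # Crude equirectangular approximation -- fine at city scale.
    dlat = math.radians(q[0] - p[0])
    dlon = math.radians(q[1] - p[1]) * math.cos(math.radians((p[0] + q[0]) / 2))
    return 6371 * math.hypot(dlat, dlon)

def flag(cell_id, observed_at, max_km=5.0):
    home = KNOWN_TOWERS.get(cell_id)
    if home is None:
        return "unknown cell -- not in database"
    if distance_km(home, observed_at) > max_km:
        return "known cell seen far from registered location"
    return "ok"

print(flag("310-410-1021", (47.6065, -122.3330)))  # prints "ok"
print(flag("310-410-9999", (47.6065, -122.3330)))  # prints "unknown cell -- not in database"
```

The obvious weakness, as the hosts note, is that the verdict is only as good as the database: a stale or sparse tower list turns every new legitimate site into a false alarm.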
The 2G fallback thing is the one that normal users might notice — your phone suddenly dropping from LTE to 2G in an area where you normally have good coverage. How reliable is that as a tell?
It's the most accessible indicator, but it's not reliable. Legitimate networks will drop you to 2G for all kinds of reasons — congestion, maintenance, building penetration issues. Your phone might switch to 2G in an elevator and you wouldn't think twice. The suspicious pattern is when it happens in open air, near sensitive locations, and the 2G signal is unusually strong while the LTE signal mysteriously vanishes.
Which requires a level of attention that nobody is paying while they're walking down the street checking email.
That's the fundamental problem with user-side detection. The threat is sophisticated, the indicators are ambiguous, and the user's attention budget is zero. The people who most need to know — journalists, activists, diplomats — are exactly the people who can't afford to be constantly diagnosing their phone's radio behavior.
What about the commercial detection solutions? There are companies that sell stingray detection boxes to enterprises and government agencies.
That's the category three detection approach — dedicated RF monitoring hardware. These are spectrum analyzers that sit in a fixed location and continuously scan for anomalous base station signals. Companies like ESD America and a few Israeli firms make these. They're effective but they cost tens of thousands of dollars per unit and require trained operators. Some embassies deploy them. Major corporations with sensitive facilities sometimes use them. But they're not a consumer product.
Even those can be evaded if the attacker knows what signatures the detectors are looking for.
It's a cat and mouse game, same as any surveillance countermeasure. The detectors look for certain patterns, the stingray manufacturers update their firmware to avoid those patterns, the detector manufacturers update their algorithms. It's an arms race where the attacker usually has the advantage because they can test against the detectors before deploying.
Let's circle back to something Daniel asked about — the suitcase form factor. How did we get to a point where a device that used to fill a rack of equipment now fits in a backpack?
Software-defined radio is the answer. In the old days, you needed dedicated hardware for each cellular protocol — separate chips for GSM, for UMTS, for LTE. Now you can do almost all of it in software running on a field-programmable gate array or even a high-end laptop. A company called Ettus Research — now part of National Instruments — makes SDR platforms that cost a few thousand dollars and can be programmed to emulate any cellular base station. The open-source project OpenBTS demonstrated this years ago — a working GSM base station running on a standard Linux box with a software radio attached.
The barrier to entry collapsed from "defense contractor with a classified budget" to "skilled RF engineer with ten grand and a GitHub account."
That's for the build-it-yourself approach. If you're a nation-state, you can buy turnkey solutions. The Harris stingray models evolved from briefcase-sized to smaller over the years. There are now models that fit in a backpack and can be operated from a phone interface. The power requirements have dropped, the processing has gotten faster, the antenna technology has improved.
Which means the "grayer world" Daniel asked about is probably larger than the public record shows.
I think that's a safe assumption. When the cost drops and the capability improves, adoption spreads. There was a report from a security firm called ESD — the same one that makes detectors — claiming they'd found rogue IMSI catchers in more than 20 countries. The specific locations included government districts, military bases, and energy infrastructure. They didn't name countries publicly, which is frustrating but understandable given the diplomatic sensitivities.
We should mention the London case because it's one of the few with actual investigative journalism behind it. In 2015, British outlets using hardened-phone detection gear reported fake base stations operating in central London, including around Westminster. The Metropolitan Police, asked directly, would neither confirm nor deny anything about the technology.
The pattern is consistent across countries — governments acknowledge the theoretical risk, refuse to discuss operational details, and occasionally a watchdog or researcher catches something in the wild. It's a topic that lives in the gap between what's technically possible and what's officially admitted.
Which is why Daniel's "curious not paranoid" framing is exactly right. The technology exists, it's being used, the confirmed cases are significant but not ubiquitous, and the detection landscape is improving but still asymmetric.
Let me add one more layer that often gets missed in these discussions. The IMSI catcher is just one tool in a broader surveillance toolkit. If you're a targeted individual, the bigger risk might be SS7 exploitation, which allows location tracking and call interception without needing physical proximity. Or it might be malware on the device itself. The stingray gets attention because it's a physical thing you can imagine — a box in a van — but the threat landscape is broader.
A foreign intelligence service doesn't need to park a van outside your office if they can query the SS7 network from halfway around the world and get your phone's location from the carrier's backend.
SS7 is the signaling protocol that phone networks use to route calls and coordinate roaming. It was designed in the 1970s with essentially no security — the assumption was that only trusted telecom operators would be on the network. That assumption broke long ago. There are companies that sell SS7 access as a service, and researchers have demonstrated location tracking and call interception using nothing more than a laptop and a purchased SS7 gateway connection.
The stingray is the close-range option in a menu that also includes remote options. Different tools for different operational requirements.
The stingray gives you precision — you know the device is physically in your target area. SS7 gives you reach — you can query from anywhere but you might only get coarse location data. Combining them gives you persistent tracking. Use SS7 to get the general area, deploy a stingray to pinpoint and collect identifiers, then use SS7 to track those identifiers over time.
Let's talk about what the carriers are actually doing about this, because they're not passive observers. They have security teams that monitor for rogue base stations on their spectrum.
They do, and this is one of the underappreciated parts of the story. Major carriers have spectrum monitoring capabilities that are far more sophisticated than anything a consumer can access. They can see when a rogue transmitter is operating in their licensed bands because it creates interference patterns and unusual handover behavior. But their incentives are complicated. They don't want to publicize vulnerabilities in their networks. They cooperate with law enforcement, sometimes through formal processes and sometimes through informal relationships. And they're in a weird position where the same technology they need to defend against is being used by government agencies they're supposed to cooperate with.
They might detect a stingray and not necessarily tell anyone, because it could be the FBI operating it legally — or at least with a court order — and blowing the whistle would burn relationships.
That's the bind. There have been efforts to create formal processes for this. The FCC has looked at requiring carriers to report rogue base station detections, but the details of how that would work with law enforcement secrecy are unresolved. The whole space is a regulatory mess.
Before we wrap the core discussion, I want to hit something Daniel mentioned — whether the detection apps are trustworthy enough to act on. Like, if SnoopSnitch flags something, should you change your behavior?
I'd say treat it as an indicator, not a verdict. If you're getting persistent alerts in a specific location, and especially if you're seeing the 2G downgrade pattern alongside it, that's worth paying attention to. But a single alert is more likely to be a false positive than a genuine stingray. The base rate of legitimate network anomalies is just much higher than the base rate of actual IMSI catcher deployments.
Which is the Bayesian answer — the prior probability matters more than the test sensitivity.
And for most people in most places, the prior probability of being targeted by a stingray is extremely low. These are expensive, operationally complex devices deployed for specific intelligence collection purposes. They're not running mass surveillance on the general public. The targets are diplomats, military personnel, journalists working on national security stories, activists organizing protests, executives at defense contractors.
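The base-rate point is just Bayes' theorem, and it's worth running the numbers once. The prior, sensitivity, and false-positive rate below are made-up illustrative values, not measured figures for any real app.

```python
# The base-rate argument as arithmetic: even a decent detector produces
# mostly false alarms when real catchers are rare. All rates are invented.
def posterior(prior, sensitivity, false_positive_rate):
    """P(stingray | alert) via Bayes' theorem."""
    p_alert = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_alert

# Say 1 in 10,000 observed cells is a catcher, the app catches 80% of
# them, and 1% of legitimate cells trigger a false alarm.
p = posterior(prior=1e-4, sensitivity=0.8, false_positive_rate=0.01)
print(f"{p:.1%}")  # prints "0.8%" -- a single alert is almost always a false alarm
```

Persistent alerts in one location shift the math, which is why the "indicator, not verdict" framing holds up: repeated independent observations raise the posterior in a way a one-off alert never can.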
The average person's threat model doesn't include a stingray, but the average person might still be caught in the dragnet if they're near a target.
Yes, and that's the collateral collection problem. If a stingray is deployed near a protest, it's collecting the IMSIs of everyone in range — protesters, passersby, people in nearby buildings. The targeting might be specific but the collection is indiscriminate by the nature of the technology.
Alright, I think we've covered the technical principle, the prevalence question, the detection landscape, and the documented cases. Let's shift to Hilbert's daily fun fact.
Now: Hilbert's daily fun fact.
Hilbert: The first blind cavefish known to science, Amblyopsis spelaea, was described in 1842 from Mammoth Cave in Kentucky. Its genus name comes from the Greek for "dim sight" — a name for the vision it never used.
A fish named for the sight it never had. That's unexpectedly poetic for taxonomy.
Taxonomists have their moments.
This has been My Weird Prompts. Thanks to our producer Hilbert Flumingtop. If you want more episodes, head to myweirdprompts. We'll be back soon.
Until then, maybe check if your phone's suddenly on 2G for no reason. Or don't.