Here's what Daniel wrote to us this week. He says: "We discussed shadow lobbying and how ostensibly independent think tanks can actually be vehicles for political influence when their funding is opaque. We see significant examples of this in the context of think tanks taking stands on the Middle East conflict, both in the pro-Israel camp, where I sit, and on the other side. In Israel, there has been some push toward recognizing the real dynamics here, with laws requiring NGOs to disclose their funding. NGO Monitor, in particular, has worked to uncover remarkable ties between seemingly random think tanks and funding sources that often tie back to proscribed terrorist organizations. Clearly, not all think tanks and research organizations have an undisclosed agenda or shadowy funding ties. How do we evaluate the credibility of think tanks from the outside when their research is often compelling, but the mechanics that enable their operation and the bodies that sign their checks are often not easily discovered?" That's a sharp question, and honestly one of the more practically useful ones we've gotten in a while.
It really is. And I want to start with something that I think gets underappreciated in almost every discussion about this topic. The problem isn't that think tanks have funders. Every institution has funders. Universities have funders. Newspapers have funders. The problem is a very specific structural dynamic where the funding relationship is designed to be invisible, and where the invisibility itself is the product. That's what makes this different from ordinary institutional bias.
Right, and I think the word "designed" is doing a lot of work there. Because there's a spectrum, isn't there? On one end you have an institution that just happens to have biases baked in from its founding donors, and on the other end you have something that is actively architecting opacity to serve a specific foreign government or political movement. Those are very different animals.
They are, and the architecture of that opacity is genuinely sophisticated. Let me walk through how the money actually moves, because once you see the plumbing, you can't unsee it. A foreign government, or a government-linked foundation, wants to influence U.S. policy on, say, Middle East reconstruction or Iran sanctions. They don't write a check directly to the Brookings Institution with a memo line that says "please write favorably about our interests." What they do is route funds through a donor-advised fund, or through a foundation registered in a low-disclosure jurisdiction, often Cyprus or Luxembourg, and that entity makes a grant to an American nonprofit that then makes a grant to the think tank. By the time the money lands, it's been laundered through two or three institutional handshakes, and the paper trail is genuinely difficult to follow.
And at each step in that chain, every individual institution can plausibly claim it's just receiving a donation from a legitimate nonprofit. Nobody in the chain is technically lying.
Which is precisely the design. The Quincy Institute's Think Tank Funding Tracker, which released updated data in 2025, found that the top seventy-five U.S. foreign policy think tanks received over twenty-five million dollars from foreign governments and seven million from Pentagon contractors in a single year. And that's just what they could trace. The number that actually flows through intermediaries is almost certainly higher, because by definition the well-hidden stuff doesn't show up in those figures.
Twenty-five million dollars. And that's the traceable portion. I feel like we should just sit with that number for a second.
It's not a rounding error. And in the Middle East context specifically, the tracker highlights how funding from the UAE, Qatar, and Saudi Arabia flows into major institutions. The Brookings-Qatar relationship has been particularly scrutinized. U.S. senators have pushed the Department of Justice to investigate whether Brookings was in compliance with the Foreign Agents Registration Act, which is the primary U.S. transparency statute, because the argument is that the Qatari funding wasn't really for independent research. It was for policy influence. And Brookings has pushed back on that characterization, but the structural question remains.
How much money are we talking about in the Brookings-Qatar case specifically? Because I think the scale matters for understanding why it attracted that level of scrutiny.
The figure that's been reported is in the range of fourteen to fifteen million dollars over several years, which is substantial even for an institution of Brookings's size. And what made it particularly notable wasn't just the amount but the timing. Several of the grants coincided with periods when Brookings scholars were publishing and testifying on topics directly relevant to Qatari strategic interests, including Gulf regional security and U.S.-Qatar diplomatic relations. Again, Brookings disputes the causal link. But when you have that much money flowing in from a foreign government and the research output happens to align with that government's preferred policy positions, the structural question doesn't go away just because the institution says it's independent.
Let's talk about the FARA angle, because this is where it gets legally interesting. My understanding is that think tanks have a fairly comfortable exemption from FARA registration. Is that right?
The academic exemption is the relevant one. FARA requires entities acting as agents of foreign principals to register and disclose. But if you can characterize your activity as academic or educational, you can often sidestep the registration requirement entirely. And think tanks are very good at presenting themselves as academic institutions. They have fellows with PhDs, they publish in journals, they testify before Congress as independent experts. The entire institutional apparatus is designed to signal academic legitimacy, which then doubles as a legal shield against disclosure requirements.
Academic laundering. That's the phrase Daniel's framing is pointing at, and it's a good one. You take what is essentially paid advocacy and you run it through the prestige machine of a think tank fellowship, and it comes out the other side looking like independent scholarship.
And here's the thing that makes it so effective: the research often is genuinely rigorous. That's the insidious part. It's not that the scholars are writing bad papers. Many of them are producing intellectually serious work. The funding relationship operates at a higher level of abstraction. It shapes which questions get asked, which datasets get funded, which policy scenarios get modeled. Nobody has to tell a researcher to reach a particular conclusion. You just fund the research program that is structurally likely to produce the conclusions you want.
That's a much more sophisticated form of influence than most people imagine. They picture a donor calling up a think tank director and saying "write me a paper saying X." But what you're describing is more like... you fund the entire research agenda on a topic, and the conclusions follow naturally from the framing of the questions.
There's a paper in the Journal of International Relations from 2023 that makes exactly this point. The authors found that when they controlled for the stated ideological mission of a think tank, donor composition still predicted research output at a statistically significant level. In other words, even after accounting for the fact that, say, a defense-focused think tank is going to produce defense-focused research, the specific donor mix still shifted the findings in predictable directions. That's not just institutional culture. That's the money talking.
Can you give a concrete example of what that looks like in practice? Because I think the mechanism is a little abstract until you see it play out.
Sure. Think about how a think tank decides what its annual research agenda looks like. A program director sits down and says, we're going to commission six papers this year on energy security in the Gulf. Now, who decides which six questions those papers ask? That decision is shaped by what funders have expressed interest in, what past grant applications have been successful, which visiting fellows have been funded by which foundations. Nobody writes a memo saying "please reach conclusion Y." But if your Gulf energy security program is substantially funded by a Gulf state sovereign wealth fund, the questions that get asked are the ones that are interesting to that funder. The questions that might embarrass that funder simply don't make it onto the agenda. And the scholars doing the research may be completely unaware of why their research program is shaped the way it is. They just know what they've been asked to work on.
That's actually a useful illustration, because it shows how you can have a building full of intellectually honest people producing research that is nonetheless systematically skewed, without any individual person having made a corrupt decision.
The corruption, if you want to call it that, is in the institutional architecture, not in any individual's conduct. Which is part of why it's so hard to address.
Okay, so let's bring this into the Middle East context specifically, because that's where Daniel's question is really grounded. And I want to acknowledge upfront that Daniel is explicit that he sits in the pro-Israel camp, which is also where we sit. But he's also asking a question that applies symmetrically. The opacity problem exists on all sides of this conflict.
It does, and I think it's worth being precise about both sides, because the mechanisms are actually quite different even if the underlying dynamic is similar. On one side, you have what NGO Monitor has documented extensively, which is the flow of European government funding, routed through foundations in Norway, Switzerland, and the EU, into Palestinian NGOs and then into the policy research ecosystem. NGO Monitor maintains a consolidated Palestinian NGO funding database that tracks thousands of grants. Their 2024 and 2025 updates identified at least sixty NGO officials with direct documented ties to the Popular Front for the Liberation of Palestine, which is a designated terrorist organization in the U.S., the EU, and Israel.
Sixty officials. That's not a handful of edge cases. That's a pattern.
It's a network. And what makes it relevant to the think tank discussion is that these NGOs don't just operate on the ground. They produce reports, they brief journalists, they testify at international bodies, and their research gets cited by Western think tanks who have no idea, or claim to have no idea, about the organizational genealogy of the data they're using. Germany suspended funding to several of these NGOs in 2024 after the Frankfurter Allgemeine reported on evidence of aid diversion and terror links. That's a significant development. Germany is not exactly a reflexively pro-Israel actor in European politics.
And I think the citation chain is worth dwelling on for a moment, because that's where the contamination actually spreads. It's not just that a compromised NGO produces a report. It's that a Western academic cites that report, and then a think tank fellow cites the academic paper, and by the time it reaches a congressional briefing or a newspaper op-ed, the original source has been laundered through three or four layers of apparently respectable institutions.
That's exactly right. There's a term in intelligence analysis for this: source laundering. The original provenance gets obscured by successive citations until the claim appears to rest on a broad consensus when it actually rests on a single compromised source that has been repeatedly self-cited through a network of affiliated institutions. It's the informational equivalent of the funding laundering we described earlier. Same architecture, different commodity.
By the way, today's episode is being written by Claude Sonnet 4.6, our friendly AI collaborator. Just worth noting.
Always worth noting. Now, on the other side of the ledger, and this is where Daniel's self-awareness is actually quite useful, you have Gulf state funding flowing into institutions that produce research favorable to Qatari, Emirati, or Saudi strategic interests. The Middle East Policy Council's 2024 report on Gaza reconstruction is a documented example. That report was funded by a Qatari foundation through a Luxembourg-based intermediary, and its policy recommendations aligned quite neatly with Qatar's preferred reconstruction framework, which happens to preserve Hamas's administrative role in Gaza. The report itself never disclosed that funding chain.
And if you were a congressional staffer or a journalist reading that report, you would have no reason to question it. It's published by a reputable institution, it has footnotes, it has a rigorous-looking methodology section.
That's the credibility arbitrage. The think tank's institutional reputation becomes a kind of collateral that foreign funders are essentially borrowing against. The funder gets the credibility of the institution without having to disclose their involvement. The institution gets the funding it needs to operate. And the reader gets a paper that looks independent but isn't.
Now I want to push on something here, because there's a counterargument that says disclosure is the answer. Just make everyone disclose their funding and the market for ideas will sort it out. Why isn't that sufficient?
A few reasons. First, disclosure is only as useful as people's willingness to follow up on it. Most journalists and policymakers who cite think tank research don't check the funding disclosures even when they exist. There's a cognitive shortcut where institutional prestige substitutes for source verification. Second, even when you have disclosure, the intermediary structure means that what gets disclosed is often one layer of the funding chain, not the ultimate source. You might see that a think tank received a grant from the Horizon Foundation, but you don't know that the Horizon Foundation is itself funded by a Gulf sovereign wealth fund. Third, and this is the deeper problem, disclosure doesn't address the question-framing issue I mentioned earlier. Knowing who funded a study doesn't tell you which studies weren't funded, which questions weren't asked, which research programs were quietly defunded when they produced inconvenient results.
That third one is the one that keeps me up at night, if sloths were prone to insomnia, which we are not. The null set of research is invisible by definition.
There's actually a name for this in the academic literature. It's called the file drawer problem, and it's well-documented in pharmaceutical research, but it applies just as directly to policy research. The studies that don't support the funder's preferred conclusion don't get published. They go in the file drawer. So the published literature ends up being systematically skewed even if every individual published paper is technically accurate.
The pharmaceutical parallel is actually pretty striking. There were cases where drug companies funded dozens of clinical trials on the same compound, published the three that showed positive results, and buried the rest. The published evidence base looked like a consensus when it was actually a curated selection. You're saying the same thing happens with policy research.
The structure is identical. And in pharma, it took mandatory clinical trial registration, where you have to pre-register a trial before you run it so there's a public record that it existed, to start addressing the problem. There's no equivalent mechanism in policy research. Nobody has to register a think tank study before they conduct it. So the file drawer remains invisible.
Let's talk about the Israeli NGO transparency law, because Daniel mentions it and I think it's a genuinely interesting case study in the politics of disclosure. The law requires NGOs that receive more than fifty percent of their funding from foreign government entities to disclose that in all official communications. And the reaction to that law tells you a lot about how contested this whole space is.
It's a fascinating case. The law passed in 2016, and the criticism from the left was that it was selectively targeting left-wing NGOs that receive European government funding, because right-wing NGOs tend to be funded by private donors rather than foreign governments, so they're not caught by the same threshold. And there's something to that critique as a matter of political targeting. But the underlying principle, that organizations influencing Israeli domestic policy should disclose when they're funded by foreign governments, is one that's hard to argue against on democratic grounds. The controversy reveals something important: transparency is universally praised in the abstract and bitterly contested in every specific application.
Because everyone supports transparency for the other side's funding and considers their own to be obviously legitimate, and therefore in no need of disclosure.
That's the pattern without exception. And it maps onto the U.S. context. The same people who want to investigate Brookings's Qatar funding are often quite comfortable with defense contractor funding of pro-military think tanks. And vice versa. The selective application of transparency demands is itself a form of information warfare.
Which brings us to what I think is the most practically useful part of this conversation, which is: how do you actually evaluate a think tank from the outside? Because most of our listeners are not investigative journalists with time to trace funding flows through Cyprus and Luxembourg. They're reading a policy paper on their lunch break and they need a framework.
There are a few tools and methods that are genuinely useful. The most rigorous one is the Transparify rating system, which is a project that rates think tanks on a one-to-five-star scale based on their funding transparency. A five-star rating means the institution publishes detailed, searchable donor information with grant amounts and purposes. A one-star or no-star rating means they disclose essentially nothing. When Transparify published its most recent comprehensive rating, a significant number of major U.S. foreign policy think tanks rated one star or below. That alone is a useful filter.
So if a think tank can't even hit a two-star rating on a voluntary transparency scale, that's a meaningful signal about how seriously they take disclosure.
It's a necessary but not sufficient condition. A five-star rating means they're telling you who funds them. It doesn't mean the funding isn't influencing the research. But it at least gives you the information to assess. The second tool is the IRS Form 990, which is a public document that all American nonprofits file annually. It lists the organization's major donors, though often not the specific amounts or purposes of individual gifts. It's imperfect, but it's freely searchable through databases like ProPublica's Nonprofit Explorer, and it's often the first place investigative journalists start.
And the 990 is something anyone can access. You don't need a subscription or a press credential.
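Quick aside for the show notes: for listeners who want to try the 990 lookup themselves, ProPublica's Nonprofit Explorer exposes the same data as a free JSON API, no key required as far as we know. Here's a minimal Python sketch. The two v2 endpoints are real; the response field names in the usage comment are our best reading of the API's JSON and worth verifying against a live response.

```python
# Sketch: querying ProPublica's Nonprofit Explorer API (v2) for Form 990 data.
# The endpoints are real; the response field names in the usage comment below
# should be double-checked against a live response before relying on them.
import json
import urllib.parse
import urllib.request

BASE = "https://projects.propublica.org/nonprofits/api/v2"

def search_url(query: str) -> str:
    """URL to search organizations by name."""
    return f"{BASE}/search.json?q={urllib.parse.quote(query)}"

def org_url(ein: int) -> str:
    """URL for one organization's profile and filing history, keyed by EIN."""
    return f"{BASE}/organizations/{ein}.json"

def fetch_json(url: str) -> dict:
    """Fetch and decode one JSON response (live network call)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Usage (makes network calls, so commented out here):
#   results = fetch_json(search_url("Brookings Institution"))
#   ein = results["organizations"][0]["ein"]
#   org = fetch_json(org_url(ein))
#   for filing in org.get("filings_with_data", []):
#       print(filing.get("tax_prd_yr"), filing.get("totrevenue"))
```

The search URL also works pasted straight into a browser, which is the three-minute version of the habit we keep recommending.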
Right, it's genuinely public. The third method is what I'd call the follow-the-footnote approach. You look at a think tank's last ten or fifteen major policy papers and you ask: do the recommendations consistently align with the strategic interests of a specific government or industry? One paper that favors Qatar's position on Gaza might be a coincidence. Five papers in two years, all of which happen to align with Qatari strategic interests, is a pattern. And patterns are often more informative than any single disclosure document.
That's the kind of thing that requires a bit of time but not any specialized access. You're just reading their output and asking a basic question: who benefits from this research?
And the answer is often not subtle. The Heritage Foundation and the Center for American Progress are a useful comparison here, because they represent different ends of the transparency spectrum. Heritage publishes its donor list with relatively detailed information, including major donors and their affiliations. It's openly conservative, openly funded by conservative donors, and it doesn't pretend otherwise. The Center for American Progress is more opaque about its specific donor mix, and when its funding has been traced, it includes significant contributions from institutions and governments that benefit from the policy positions CAP advocates. Neither institution is unbiased. But Heritage's transparency model at least lets you calibrate.
There's something almost refreshing about an institution that says "yes, we have a point of view, here's who funds us, now engage with the arguments." That's more honest than claiming false independence.
The rise of what some analysts are calling action tanks is relevant here. These are institutions that openly acknowledge their goal is policy change, not just policy analysis. The Quincy Institute is actually an example of this. They're quite transparent about wanting a less interventionist U.S. foreign policy. They publish their donor list. You can disagree with their conclusions, but the epistemological situation is cleaner. You know what you're reading and why it exists. The more dangerous institutions are the ones that claim the mantle of pure objectivity while operating as undisclosed advocates.
And I'd actually add that the action tank model has a long and somewhat respectable pedigree. The Fabian Society in Britain was openly committed to gradualist socialism and said so. The American Enterprise Institute has never hidden its market-oriented commitments. The problem isn't having a viewpoint. The problem is pretending you don't have one while the funding flows from entities that very much do.
That's a useful distinction. Declared bias is navigable. Undeclared bias is what creates the epistemic hazard.
Let's talk about the proposed Foreign Funding Disclosure Act, because Daniel's background notes mention it and it seems like the legislative response to some of what we've been discussing.
It was introduced in Congress in March 2025, and it would close the academic exemption in FARA that we discussed earlier. Under the proposed legislation, think tanks and research organizations that receive funding from foreign governments or foreign-government-linked entities would be required to register and disclose, regardless of whether they characterize their activity as academic. The think tank lobby has pushed back hard on this, arguing that it would chill legitimate academic exchange and that the compliance burden would disadvantage smaller institutions. Those are not entirely bad-faith arguments. But the counterargument is that the existing exemption has been systematically exploited, and the evidence for that exploitation is documented in the Quincy tracker data and in cases like the Brookings-Qatar relationship.
It's worth noting that the opposition to that bill is itself a useful data point. Institutions with nothing to hide from disclosure requirements tend not to spend significant resources lobbying against them.
That's a fairly reliable heuristic. The Carnegie Endowment situation is another case worth mentioning. A 2025 report documented that a Carnegie Endowment publication on Iran nuclear negotiations was funded in part by a European government that was actively trying to influence U.S. policy on the Iran deal. The Carnegie paper recommended a U.S. posture that aligned with that European government's negotiating interests. Now, Carnegie is a prestigious institution with serious scholars. The paper may well have been intellectually defensible on its own terms. But the funding relationship was not disclosed, and the policy recommendations happened to serve the funder's strategic agenda. That's the credibility arbitrage in action at one of the most respected institutions in the field.
And this is why the "the research is rigorous" defense doesn't fully work. Rigor in methodology doesn't immunize against bias in question selection or agenda framing.
There's a concept in research methodology called construct validity that's relevant here. A study can be internally valid, meaning the methodology is sound, the statistics are correct, the conclusions follow from the data, but if the construct being measured was chosen because it was likely to produce a particular result, the whole enterprise is compromised. You can run a perfectly rigorous regression on a dataset that was curated to exclude inconvenient observations. The rigor of the method doesn't save you from the bias in the setup.
So what should listeners actually do with all of this? Because I want to make sure we end up somewhere practical rather than just leaving everyone with a vague sense of suspicion about everything they read.
The first thing is to make funding provenance a default part of how you read policy research, not an afterthought. Before you share a think tank report or cite it in an argument, spend three minutes on their website looking for a donor page. If there isn't one, that's already a signal. If there is one, check whether the donors have obvious strategic interests in the topic of the report.
Three minutes. That's genuinely achievable.
The second thing is to use the tools that exist. Transparify ratings are publicly available. The Quincy tracker data is published. ProPublica's 990 database is free. NGO Monitor's funding database is searchable. These aren't obscure resources. They just require the habit of using them. The third thing is to apply the follow-the-footnote discipline to the institutions you rely on most. Pick two or three think tanks whose work you cite regularly and actually trace their funding. You might find nothing concerning. Or you might find something that changes how you read their future output. Either way, you're better informed.
And I'd add a fourth thing, which is to be honest about applying this framework symmetrically. If you're tracking the funding of think tanks whose conclusions you disagree with, you should be equally rigorous about the ones whose conclusions you like. Confirmation bias in source evaluation is probably the most common failure mode here.
That's the hardest one, and it's the most important one. The institutions that are most likely to shape your views without your awareness are the ones whose conclusions feel obviously correct to you, because you're not applying the same critical scrutiny you'd apply to a source you already distrust.
Daniel is sitting in the pro-Israel camp, as he says. And I think his question implicitly acknowledges that the transparency problem isn't unique to one side. NGO Monitor does important work exposing funding flows that connect NGOs to proscribed organizations. But the Quincy tracker is doing structurally similar work on the other side of the ledger. The intellectual honesty is in using both.
And in recognizing that the existence of bad actors in a space doesn't mean every institution in that space is compromised. Most think tanks are run by people who genuinely believe in their mission and are trying to produce honest research. The problem is structural, not conspiratorial. Funding relationships create pressures that shape institutional culture over time, often without anyone making an explicit decision to compromise their integrity. The scholars at these institutions are often the last to know that their research agenda has been quietly shaped by donor preferences at the board level.
Which is almost more depressing than the conspiracy version.
It is, because it means you can't solve it just by finding the bad guys. You have to change the structural incentives, which is what legislation like the Foreign Funding Disclosure Act is trying to do.
Before we close, I want to flag one forward-looking question that I think deserves more attention than it currently gets. We've been talking about human researchers and human funders. But the think tank space is about to be disrupted by AI-generated research at scale. If you can produce a technically rigorous-looking policy paper with a language model in a few hours, the economics of influence operations change dramatically. The cost of producing credible-seeming research drops toward zero.
This is something I've been thinking about a lot. The current detection mechanisms, tracing funding flows, checking IRS filings, looking at board affiliations, all of those assume that the institution producing the research has a real institutional existence with real financial relationships. But if you can generate a hundred policy papers from a shell organization with no real staff and minimal funding, the paper trail that investigators currently rely on largely disappears. The arms race between influence operations and transparency tools is going to get significantly more intense.
And the credibility arbitrage gets weirder, because you can potentially generate research that mimics the citation patterns and methodological conventions of legitimate scholarship without any of the underlying institutional relationships. There's no funding chain to trace because there's barely an institution. It's a think tank as a purely aesthetic object.
The one thing that might provide some resistance to that is that policymakers and journalists still tend to weight research from known institutions more heavily than research from unknown ones. Brand reputation is a somewhat durable asset. But that also creates an incentive to compromise existing reputable institutions rather than build new ones, which is arguably what we're already seeing with the funding relationships we've been discussing. Why build a new think tank when you can quietly capture an existing one?
That reframes the whole problem. The opacity isn't just about hiding money. It's about borrowing legitimacy that took decades to build.
And that legitimacy is genuinely hard to replicate from scratch. Which is why the deepfake policy paper threat and the captured institution threat are actually complementary strategies. You use AI-generated research to flood the zone with volume, and you use captured institutions to provide the credibility anchor that makes the volume seem worth engaging with.
That's a genuinely unsettling place to leave things, and also exactly the right place to leave things, because it means this problem is going to get more important, not less.
The framework we've described, check the 990, use Transparify, follow the footnotes, apply it symmetrically, those habits are going to matter more as the information environment gets noisier. Building them now is not a bad investment.
Alright. Big thanks as always to our producer Hilbert Flumingtop for keeping this whole operation running. And thank you to Modal for the GPU credits that power the pipeline behind this show. This has been My Weird Prompts. If you want to subscribe or find the RSS feed, head to myweirdprompts dot com. Take care of yourselves.
See you next time.