A surveillance company explains how the world should work
When a data analytics giant that powers immigration raids and battlefield software publishes a 22‑point political credo, it’s not just another blog post. Palantir’s newly published “summary” of CEO Alex Karp’s book is effectively a manifesto on AI weapons, democracy, culture and power — issued not by a think tank, but by a listed software vendor whose revenue depends on governments.
This piece looks at what the statement really signals: how Palantir wants to reshape the terms of debate around AI, war and inclusivity, what it means for the wider tech industry, and why Europeans in particular should read it as more than eccentric corporate branding.
The news in brief
According to TechCrunch, Palantir has published a 22‑point document that it describes as a short summary of “The Technological Republic”, a book by CEO Alex Karp and corporate affairs head Nicholas Zamiska. The book, released in 2025, is pitched as the theoretical foundation for Palantir’s work, though critics have dismissed it as dressed‑up sales collateral.
The new post lays out Palantir’s worldview: that Silicon Valley owes a moral debt to the United States, that free consumer services are not a sufficient contribution, and that a new deterrence order based on AI is replacing the nuclear age. It argues that the key question on AI weapons is not whether they will be built, but who builds them and why.
The text also criticises what it portrays as performative pluralism and inclusivity, suggests post‑war demilitarisation has weakened Germany and Japan, and positions Palantir firmly on the side of a militant defence of “the West”. Eliot Higgins of investigative outlet Bellingcat publicly commented that the document is not neutral philosophy but the declared ideology of a company selling software to defence, intelligence, immigration and police agencies.
Why this matters: ideology as part of the product
Palantir has long insisted that it is just a software provider. This manifesto makes that stance untenable. When a company denounces “hollow pluralism” and questions inclusivity while championing AI weapons and stronger militaries, it is telling customers, employees and regulators that buying its products also means buying into a worldview.
Who benefits?
- National‑security hawks in Washington, London and some European capitals gain an articulate corporate ally arguing that any hesitation on AI weapons is naïve and dangerous.
- Palantir’s own positioning as the ideological champion of “the West” may help it win defence tenders against more cautious rivals that prefer low‑key sales decks to civilisational manifestos.
Who loses?
- Communities already over‑policed or over‑surveilled — migrants, minorities, political activists — see a dominant vendor doubling down on a worldview that prioritises security over rights and treats criticism as decadence.
- Employees and recruits who care about inclusivity and pluralism now face a starker choice: work for a company that openly disparages those values, or go elsewhere.
The immediate implication is cultural: Palantir is dragging the tech industry’s latent culture war into the open. For years, defence‑tech founders have grumbled that “woke” Silicon Valley refused to build for the military. Palantir’s post turns that into doctrine: if you are not helping arm “the West” with AI, you are helping its enemies.
That has competitive consequences. The company is no longer just bidding on contracts; it is lobbying to redefine the moral ground on which those contracts are awarded. If the framing sticks, rivals that emphasise ethics, human rights or strict usage limits may be portrayed as unserious — even when their caution aligns better with democratic oversight.
The bigger picture: the normalisation of ideological defence tech
Palantir’s statement doesn’t exist in a vacuum. It sits on top of three converging trends.
1. The rise of “mission‑driven” defence startups.
Companies like Anduril in the U.S. have grown quickly by pitching themselves as unapologetic builders of autonomous defence systems, often using language about defending Western civilisation. Palantir’s manifesto takes that logic further, and it now comes from a more mature, publicly traded firm with deep government penetration.
2. The AI‑militarisation wave.
Russia’s invasion of Ukraine, escalating tensions in the Indo‑Pacific, and advances in autonomous targeting have turned AI for defence from a taboo into a budget priority. Where Google employees once forced the company to back away from the Pentagon’s Project Maven, the political wind has shifted. NATO now openly talks about AI as key to its future advantage. Palantir is trying to harden that momentum into a binary choice: build lethal AI with us, or accept technological defeat.
3. Tech companies as political actors.
Big platforms already shape public debate through recommendation algorithms and content rules. What’s new here is a software infrastructure vendor embracing an overtly ideological identity, including commentary on the “mistakes” of post‑war Germany and Japan. Historically, defence contractors preferred to keep such arguments inside think tanks and classified briefings. Putting them on a public blog signals confidence that customers and investors will reward, not punish, the stance.
Compared with cloud giants like Microsoft, which talks about “responsible AI” even as it pursues defence work, Palantir is choosing confrontation over ambiguity. The message is aimed not just at governments but at the tech sector itself: the era of quiet, deniable militarisation is over; pick a side.
The European angle: when your data platform lectures Germany on rearmament
For Europe, the manifesto hits several sensitive nerves.
First, Palantir already has a growing footprint across the continent, from health‑data projects in the U.K. to work with police and border agencies in various EU states, as reported by multiple investigative outlets in recent years. When such a supplier argues that Germany’s post‑war restraint went too far and that Japanese‑style pacifism is a strategic threat, it’s not just commentary — it is a vendor advocating for a more militarised posture in markets where it wants to sell AI tools.
Second, the EU is in the final stretch of implementing the AI Act, which imposes strict rules on “high‑risk” systems, including those used in law enforcement and migration control. Defence is partly exempt, but many of the contexts in which Palantir operates in Europe — policing, borders, critical infrastructure — are not. Brussels has also armed itself with the Digital Services Act and Digital Markets Act, and of course GDPR, all of which embed pluralism, transparency and fundamental rights into the legal fabric.
A company that publicly disparages inclusivity and “vacant pluralism” is, whether it likes it or not, challenging the normative foundations of that regulatory regime.
Finally, there is culture. Germany’s deeply embedded post‑war pacifism, Japan’s constitutional constraints, and Europe’s caution on automated warfare are not mere “overcorrections”; they are responses to historical catastrophe. For EU publics — especially in Germany — seeing a U.S. surveillance vendor declare those settlements a mistake will confirm fears that foreign tech contractors do not share Europe’s constitutional memory.
For European buyers, the question becomes: do you want the nervous system of your state — from hospitals to railways to border control — operated by a company that sees local values as evidence of decline?
Looking ahead: polarisation as a business strategy
Palantir’s move is risky but calculated.
In the short term, it will likely deepen its appeal among defence ministries, intelligence agencies and politicians who already see the world through a civilisation‑struggle lens. Clear ideological signalling can be a powerful differentiator in procurement battles, especially when the competition is a more neutral‑sounding cloud provider.
But the medium‑term risks are serious:
- Regulatory scrutiny. European supervisory authorities, data‑protection agencies and AI regulators will read this document. It provides context for assessments of proportionality, necessity and fundamental‑rights impact: the company is, by its own declaration, not ideologically agnostic.
- Talent pipeline. In a labour market where many AI researchers and engineers are uneasy about military applications, Palantir is narrowing its recruitment pool to those comfortable with its doctrine. That may be a feature, not a bug, but it is a constraint.
- Customer backlash. Civil agencies — health systems, municipalities, social‑services departments — may reconsider whether they want to be associated with a brand increasingly defined by war rhetoric and attacks on inclusivity.
Expect three things over the next 12–24 months:
- More public positioning by other defence‑tech players, either echoing Palantir’s line or deliberately contrasting with it.
- Stronger demands from civil‑society groups and some legislators in the EU and U.K. to audit and possibly limit Palantir deployments in domestic governance.
- Internal debates within governments about whether ideological alignment should play any role in vendor selection, beyond the usual security and compliance checks.
The biggest unanswered question is whether Palantir’s bet on polarisation will normalise this tone across the sector — or convince buyers that core state infrastructure should be provided by companies that talk less and document more.
The bottom line
Palantir’s mini‑manifesto is not just marketing fluff; it is a declaration that the company sees itself as a political and civilisational actor, not merely a contractor. By attacking inclusivity and downplaying pluralism while pushing AI militarisation, it sharpens existing divides inside the tech world and between Silicon Valley and Europe. The key question for readers — especially those in governments and large enterprises — is simple: are you comfortable giving critical data and decision‑making infrastructure to a vendor that openly treats your democratic values as signs of decay?