From Manifestos to Mechanisms: Will the Pro‑Human AI Push Change Anything?

March 9, 2026

Washington just had its first real public fight over who controls frontier AI – and into the regulatory vacuum that fight exposed drops a detailed, almost constitutional roadmap for how AI should be governed. The Pro‑Human Declaration, backed by an unusually broad political coalition, demands an explicit choice: AI that amplifies people, or AI that quietly sidelines them. On paper, it is the clearest American attempt so far to sketch rules for powerful models. The real question is whether this manifesto can turn into actual institutions and enforcement – and what that means in a world where Brussels is already writing the rulebook.

The news in brief

According to TechCrunch, a group of scientists, former officials and public figures has published the Pro‑Human Declaration, a framework for what they call responsible AI development. The effort is co‑organized by MIT physicist and AI researcher Max Tegmark and has attracted several hundred signatories, including prominent figures from both US political camps.

The document argues that humanity is at a decisive turning point and outlines five pillars for AI policy: humans must remain in control; power should not concentrate in a few hands; core aspects of human experience must be protected; individual freedoms preserved; and AI companies held legally liable for harms. Among the bolder proposals are a temporary ban on pursuing superintelligent systems until there is scientific consensus on safety and democratic approval; mandatory off‑switches for powerful models; and bans on architectures that can self‑replicate, self‑improve autonomously or resist shutdown.

The roadmap surfaced just as the Pentagon labelled Anthropic a supply‑chain risk after the company declined to give the military unrestricted access to its models, while OpenAI quickly agreed a separate deal with the Department of Defense, TechCrunch notes.

Why this matters

The declaration is not law, and Congress remains stuck. Yet it marks a political line in the sand in three important ways.

First, it reframes AI safety as a mainstream voter issue rather than a niche concern of researchers and effective altruists. Tegmark points to polling suggesting that nearly all Americans oppose an unregulated race toward superintelligence. Whether that number is perfectly accurate matters less than the direction: fear of uncontrolled AI has become a legitimate campaign topic. That changes incentives for both parties, especially in an election cycle.

Second, it centres power rather than only risk. The Pentagon–Anthropic confrontation and the parallel OpenAI–DoD deal expose a basic fact: in the US, a handful of private labs now sit at the chokepoint between national security, economic competitiveness and information flows. The declaration’s insistence on avoiding excessive concentration is therefore not academic. It is a direct response to the spectacle of one CEO effectively negotiating the limits of AI in warfare.

Third, it normalises ideas that recently sounded radical. A moratorium on superintelligence work, mandatory off‑switches, bans on self‑replicating systems – these are the AI equivalent of nuclear non‑proliferation norms. They are almost impossible to negotiate at a global level today, but every treaty in history started life as someone’s manifesto.

Who benefits? Smaller players and civil society, if this shifts the Overton window toward stronger oversight and liability. Who loses? Big labs that have been operating under a de facto “ask forgiveness, not permission” regime. The immediate implication is pressure for an FDA‑style model for AI: pre‑market testing, risk evaluation and the possibility of blocking deployment – starting, very strategically, with children’s products.

The bigger picture

The declaration sits at the intersection of three ongoing trends.

First, the slow death of self‑regulation. Frontier AI companies spent 2023–2025 promising voluntary safety commitments and red‑team exercises. The Anthropic case shows the limits of that approach: as soon as commercial and strategic interests collide, the state will intervene, but without a clear framework it does so ad hoc. That is dangerous for everyone, including the companies themselves.

Second, the global shift from abstract principles to hard law. The EU has passed the AI Act, the UK has launched an AI Safety Institute, and the US administration has issued an AI Executive Order pushing agencies toward tighter oversight. The Pro‑Human Declaration is more detailed than the typical NGO statement, but less operational than the EU text. It is best understood as a bridge: turning high‑level worries about extinction or mass manipulation into concrete regulatory levers like testing requirements, liability standards and design bans.

Third, the return of “big tech as critical infrastructure”. When the US Secretary of Defense labels a leading lab a supply‑chain risk for refusing broader military use, AI ceases to be just another digital product. It becomes part of the national arsenal, much like satellite networks or chip foundries. That has two consequences: security hawks gain more influence over AI policy, and corporate autonomy shrinks. The declaration is, in part, an attempt to insert democratic guardrails before the security logic fully takes over.

Compared with other major jurisdictions, US policy is still improvisational. China follows a more state‑centric, censor‑first model. Europe codifies rights and obligations upfront. Silicon Valley has historically preferred market‑driven innovation. The manifesto is a signal that this laissez‑faire era is ending, at least for the most capable systems.

The European angle

From a European perspective, the document is oddly familiar. Its core demands – human oversight, bans on certain high‑risk architectures, liability for developers – echo principles already embedded in the EU AI Act, the Digital Services Act and long‑standing GDPR doctrine about accountability and data protection.

Where Brussels is ahead is institutional design. The EU is creating an AI Office, national market‑surveillance authorities and an explicit risk‑tier system. The US has so far resisted creating a single federal AI regulator. The Pro‑Human Declaration’s analogy to an FDA for AI pushes in the opposite direction: one empowered body that can block or condition releases. If Washington eventually creates such an agency, it will bring US practice closer to EU‑style ex‑ante control.

For European users and companies, there are two immediate implications.

If the US hardens its stance on frontier AI, European regulators will have an easier time enforcing strict rules without fear of complete transatlantic divergence. Shared baselines on testing and safety for large models would reduce compliance costs for startups that operate on both sides of the Atlantic.

But there is also a risk: if US policy focuses narrowly on superintelligence and national security use‑cases, while Europe remains occupied with general‑purpose AI and workplace impacts, we may end up with a fragmented regime where no one comprehensively addresses labour, competition and cultural questions. European firms could find themselves squeezed between US security‑driven controls and EU fundamental‑rights obligations.

Looking ahead

In the next 12–24 months, the declaration’s direct legal impact will likely be modest. It is not a bill, and Congress is unlikely to pass sweeping AI legislation in an election year.

Its real influence will be indirect. Expect three developments:

  1. Child‑safety rules as a wedge. Pre‑deployment testing for chatbots aimed at minors – for mental‑health harms, grooming risks and manipulation – is politically hard to oppose. Once such testing regimes exist for one segment, extending them to other sensitive contexts (healthcare, finance, biolab assistance) becomes easier.

  2. Procurement as de facto regulation. The Pentagon, intelligence community and large federal agencies will start inserting safety, logging and controllability clauses into AI contracts. This creates a shadow regulatory framework: if you want US government money, you follow these rules. Anthropic’s standoff with the Pentagon is only the first of many such negotiations.

  3. Soft norms for superintelligence work. A legally binding moratorium on superintelligence research is unlikely. But we may see clubs of labs and states agreeing on informal red lines: no self‑replicating agents, no models trained to autonomously acquire cyber‑weapons, mandatory shutdown mechanisms. These norms often solidify into later treaties.

Investors and founders should watch two indicators: whether the US coalesces around an FDA‑style authority, and how far child‑safety rules expand into general AI testing obligations. For European readers, the key question is whether Brussels and Washington can align enough to avoid a splintered standards landscape that only the very largest players can navigate.

The bottom line

The Pro‑Human Declaration will not, by itself, stop a reckless race toward ever more powerful AI. But it crystallises a shift: frontier AI is no longer seen as a neutral technology, but as a political project that must answer to democratic constraints. If the US can turn this manifesto into concrete mechanisms – and coordinate with Europe’s more mature regulatory agenda – we may yet get AI that amplifies human agency instead of eroding it. The alternative is to let a handful of companies and defence bureaucracies decide on our behalf.
