1. Headline & Intro
OpenAI’s new GPT‑5.5 Cyber model won’t be landing in your average security stack anytime soon. After publicly mocking Anthropic for tightly controlling access to its Mythos cybersecurity assistant, Sam Altman is now taking almost exactly the same approach with OpenAI’s own offensive‑security tool. Beyond the irony of the reversal, this is a pivotal moment: AI is crossing the line from “helpful assistant” to “strategic cyber weapon.” In this piece we’ll look at what OpenAI is actually doing, who gains and who is sidelined, and why this could stratify, not democratise, the global security landscape.
2. The news in brief
According to TechCrunch, OpenAI is beginning a limited roll‑out of “GPT‑5.5 Cyber,” a specialised model for cybersecurity tasks such as penetration testing, vulnerability discovery and exploitation, and malware reverse‑engineering. Sam Altman said on X that Cyber will first go to “critical cyber defenders” in the coming days.
Access won’t be open: applicants must fill out a form on OpenAI’s site describing their credentials and intended use. The company says it is working with the U.S. government to decide who qualifies and how to expand access over time.
The move mirrors Anthropic’s earlier decision to restrict its Mythos cyber tool to select partners, a strategy Altman previously criticised as fear‑driven marketing. That criticism now looks awkward as OpenAI adopts a nearly identical gatekeeping model for its competing product.
3. Why this matters
Cyber is not just “ChatGPT for security folks.” It’s a dual‑use system that can accelerate both defence and offence. Automating recon, exploit development and malware analysis doesn’t just help blue teams; it also massively lowers the skill and time needed for sophisticated attacks.
In that light, OpenAI’s restricted access is understandable. Handing advanced offensive tooling to anyone with a credit card would be reckless. But who gets in, and who decides, is where this becomes geopolitically significant.
The winners are obvious: large, well‑connected organisations, especially in the U.S., with established relationships with OpenAI, major cloud providers and government agencies. They’ll gain access to capabilities that could dramatically reduce detection and response times, improve red‑teaming, and stress‑test infrastructure at scale.
The losers are smaller defenders: regional SOCs, SMEs, critical‑but‑not‑glamorous infrastructure operators, and under‑resourced public institutions outside the U.S. These players already struggle to hire talent; being excluded from cutting‑edge AI tools risks widening the gap between the security “haves” and “have‑nots.”
The other clear loser is trust. OpenAI spent years selling a narrative of broad access and AI as a public good, while criticising rivals for gatekeeping. Pivoting to a de facto closed, government‑aligned cyber capability without much transparency reinforces the view that safety policy is also a market‑positioning tool.
4. The bigger picture
Cyber fits a broader pattern: the most powerful AI systems are increasingly being treated like controlled technologies rather than generic software.
Anthropic’s Mythos was an early test case. Google has been working on security‑focused models (like its Sec‑PaLM efforts) for internal and partner use, not for general release. Microsoft is wiring OpenAI‑powered Security Copilot into its own stack, again primarily for enterprise customers. None of these tools lives up to the “open” in OpenAI’s name.
Historically, we’ve seen similar dynamics around cryptography and zero‑day exploits. Capabilities that materially shift offensive power tend to concentrate in states, defence contractors and a small club of large vendors. The public justification is always the same: if we don’t control this, adversaries will. That argument is not wrong—but it is self‑serving.
What’s new is the pace and scale. A single frontier‑scale model can embody expertise that previously sat across thousands of security engineers. Once trained, it can be copied and leaked with a single breach. Anthropic has already reportedly seen unauthorised access to Mythos, despite its tight controls. It is naïve to assume Cyber won’t eventually leak or be replicated by open‑source communities.
So we are heading towards an AI‑fuelled cyber arms race with three tiers: state‑aligned proprietary tools like Cyber, enterprise‑oriented assistants from big vendors, and a shadow ecosystem of open‑source and leaked models that everyone else will rely on.
5. The European / regional angle
For Europe, Cyber highlights a strategic vulnerability: the most advanced offensive and defensive AI tools are being developed and controlled by U.S. companies, in coordination with U.S. regulators.
EU organisations already hesitate to run sensitive security workloads through U.S. clouds because of GDPR, Schrems II and data‑sovereignty concerns. Now we add a new layer: even if a European CERT or energy operator wants to use Cyber, they may simply not qualify under OpenAI’s criteria, or they may be uncomfortable with a U.S. private company—and government—effectively vetting their security posture.
This strengthens the argument for European AI stacks that include security‑specialised models from players like Mistral, Aleph Alpha or regional cloud providers. Under the EU AI Act, high‑risk systems in critical infrastructure will face strict obligations around transparency, risk management and robustness. That framework could be used to demand clearer disclosure from vendors like OpenAI, or to justify keeping such tools out of regulated environments entirely.
For smaller EU members, including those in Central and Eastern Europe, the risk is a two‑speed security Europe: big telcos and banks that can buy into U.S. ecosystems, and everyone else scrambling with weaker local tools while facing the same Russian‑linked and criminal threats.
6. Looking ahead
Expect Cyber’s access model to evolve into something that looks a lot like “KYC for AI”: background checks, organisational vetting, audit logs, perhaps even binding usage policies enforced at the API level. Over time, this will likely become an industry template for dual‑use models in areas like biosecurity and autonomous systems.
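To make that concrete, here is a minimal sketch, in Python, of what API‑level gating could look like. Every name in it is an illustrative assumption (the vetting tiers, the allowed‑capabilities field, the `authorize_request` check); nothing here reflects OpenAI’s actual API.

```python
# Hypothetical sketch of "KYC for AI" enforcement at the API layer.
# None of these tiers, fields or checks are OpenAI's real interface;
# they illustrate how organisational vetting and per-request policy
# checks could be combined before a restricted model sees a prompt.

from dataclasses import dataclass
from enum import Enum


class VettingTier(Enum):
    UNVERIFIED = 0          # application form submitted, not yet reviewed
    CRITICAL_DEFENDER = 1   # e.g. a vetted CERT or infrastructure operator
    GOVERNMENT = 2          # agency-level access


@dataclass
class OrgRecord:
    org_id: str
    tier: VettingTier
    allowed_capabilities: set[str]  # e.g. {"recon", "malware_analysis"}


def authorize_request(org: OrgRecord, capability: str) -> bool:
    """Gate a single API call: vetting status first, then capability.

    A production system would also write an audit log entry here and
    could enforce binding usage policies such as rate limits or
    target allow-lists, as floated in the paragraph above.
    """
    if org.tier is VettingTier.UNVERIFIED:
        return False  # deny by default until vetting completes
    if capability not in org.allowed_capabilities:
        return False  # e.g. defence-only orgs never get exploit generation
    return True


# Usage: a regional SOC vetted for defensive work only.
soc = OrgRecord(
    org_id="soc-eu-001",
    tier=VettingTier.CRITICAL_DEFENDER,
    allowed_capabilities={"recon", "malware_analysis"},
)
assert authorize_request(soc, "malware_analysis")
assert not authorize_request(soc, "exploit_generation")
```

The design choice worth noting is deny‑by‑default: an unverified organisation gets nothing, and even a vetted one only gets the capabilities it was explicitly approved for, which is exactly the kind of granular control a “KYC for AI” regime implies.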
Three things to watch:
- Who counts as a “critical defender”: If the bar is “Fortune 500 and government agencies,” the backlash from mid‑market security vendors and regional SOCs will be loud.
- Regulatory response: The U.S. is already discussing AI safety standards for critical infrastructure. The EU AI Act and NIS2 could push for either mandatory risk assessments for these tools or explicit restrictions.
- Leakage and replication: As with Mythos, any serious breach or model exfiltration will blow up the argument that tight vendor control meaningfully contains offensive risk.
Commercially, OpenAI is signalling that the most valuable capabilities of GPT‑5.5 will live behind bespoke, high‑margin offerings rather than in the general ChatGPT product. That nudges the AI industry further towards a split between consumer‑grade assistants and “classified” enterprise models that the average user will never touch.
7. The bottom line
Restricting Cyber is probably the least bad option given how dangerous offensive‑security models can be—but OpenAI’s about‑face exposes the gap between its rhetoric and its incentives. Power is consolidating in a small club of U.S. vendors whose tools will shape the security posture of entire countries. The open question for policymakers and CISOs is simple: do we really want the next generation of cyber capabilities to live inside private, effectively extraterritorial black boxes—and if not, what are we prepared to build instead?