1. Headline & intro
OpenAI didn’t just add another subscription tier this week; it quietly redrew the map for AI-assisted coding. The new $100/month ChatGPT Pro plan sits between the familiar $20 Plus and the rarely discussed $200 Pro, and it’s explicitly aimed at developers who live inside ChatGPT’s Codex coding tools. This is not about features; it’s about capacity: how much intense coding you can push through the model before you hit a wall. In this piece, we’ll look at what actually changed, why OpenAI is targeting this price point, how it pressures Anthropic and others, and what it means for European teams deciding which AI tools to standardise on.
2. The news in brief
According to TechCrunch, OpenAI has introduced a new $100/month “Pro” subscription for ChatGPT, focused on heavier use of its Codex-based coding assistant. Until now, users could choose between a free ad-supported tier, an $8/month Go tier (also with ads), a $20/month Plus plan without ads, and a $200/month Pro offering.
OpenAI’s public pricing page now highlights the new $100 Pro plan alongside Plus, while the $200 tier is no longer prominently listed, though OpenAI told TechCrunch it remains available. Both Plus and the two Pro tiers share the same core ChatGPT features; the real differentiator is how much Codex usage is allowed.
The $100 Pro plan offers roughly five times the Codex capacity of Plus, while the $200 plan provides about twenty times Plus’s limits. OpenAI is also temporarily boosting Codex limits for $100 subscribers until May 31, giving early adopters even more headroom. The company told TechCrunch that over 3 million people are now using Codex weekly, a figure that has multiplied in recent months.
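Taking Plus as a baseline of one capacity unit, those stated multipliers can be turned into a rough per-dollar comparison. This is a sketch only: the "capacity units" are illustrative, not an official OpenAI metric, and real limits are enforced in ways the public pricing page does not fully specify.

```python
# Hypothetical capacity-per-dollar comparison across ChatGPT tiers.
# "capacity" uses Plus as a baseline of 1; the 5x and 20x multipliers
# for the two Pro tiers are the figures reported in the article.
tiers = {
    "Plus":     {"price": 20,  "capacity": 1},
    "Pro $100": {"price": 100, "capacity": 5},
    "Pro $200": {"price": 200, "capacity": 20},
}

for name, t in tiers.items():
    per_dollar = t["capacity"] / t["price"]
    print(f"{name}: {per_dollar:.3f} capacity units per dollar")
```

Notably, under these multipliers the $100 tier matches Plus on capacity per dollar (both 0.050 units/$), while the $200 tier is actually the cheapest per unit (0.100 units/$). The $100 plan's appeal is the lower absolute commitment, not a better unit price.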
3. Why this matters
This move matters less as a pricing tweak and more as a declaration: AI-assisted coding is now a billable infrastructure line item, not a side perk. The $20 Plus plan was fine for occasional scripting or refactoring, but anyone running full workdays inside ChatGPT quickly ran into rate limits. Those users previously had to jump straight to $200/month — a psychological and budget leap that many freelancers, small agencies and startup teams couldn’t justify.
The $100 Pro tier neatly fills that gap. It’s calibrated for exactly the type of user OpenAI calls out: developers in “high‑intensity work sessions” who need sustained throughput, not just fancy new features. In other words, OpenAI is segmenting the market between casual chat users and people for whom Codex is effectively a core part of their IDE.
The winners are clear: solo devs, small consultancies and internal tooling teams that were bumping against Plus limits but didn’t have enterprise budgets. They now get predictable, substantially higher capacity without jumping to the $200 bracket. OpenAI also positions itself more directly against Anthropic, which has long had a $100/month option for Claude’s coding capabilities. By claiming “more coding capacity per dollar,” OpenAI is signalling the start of a capacity war rather than a pure model-quality war.
The losers? First, Anthropic and other code-assistant vendors, which now have a harder time justifying similar or higher prices without clear productivity gains. Second, users who stayed on free or low-cost tiers now see the direction of travel: more ads, more nudging toward paid, and more of the best coding experience locked behind higher caps. AI that once felt “magically unlimited” is becoming transparently metered.
4. The bigger picture
This $100 tier plugs into a broader industry pattern: the cloud‑ification of AI tooling. For years, consumer AI products were sold like apps — one subscription, fuzzy “fair use” assumptions. Now we are moving toward something that looks much more like AWS: headline plans wrapped around hard rate limits, with “burst” capacity and premium tiers for heavy workloads.
OpenAI is not alone. GitHub Copilot, Replit’s Ghostwriter, Claude Code and a wave of AI-native IDEs are converging on the same basic model: you pay either per seat or per usage band for code generation, refactoring and explanation features. What matters is not just how smart the model is, but how much you can lean on it during a crunch week without being throttled.
Historically, we’ve seen this movie before. Cloud storage, compute and even CI/CD pipelines all started with relatively flat pricing, only to settle into stratified tiers based on capacity and concurrency. AI coding tools are following the same arc, just faster. The difference is that developers are now deeply dependent on these tools; for many junior devs, “no Copilot/ChatGPT today” is as disruptive as “no Stack Overflow” used to be.
Against this backdrop, OpenAI’s explicit comparison with Anthropic is revealing. It suggests the company sees the near-term battlefield not in general‑purpose chatbots but in verticalised assistants: coding, office productivity, research. Capacity per euro (or dollar) within these workflows becomes a key differentiator. If Codex can reliably deliver more tokens of useful code per subscription than Claude Code, that’s an easy spreadsheet decision for a CTO.
We should also expect this to cascade into product design. Once pricing is tied to “how much coding you can do,” tools will push toward features that compress work into fewer interactions: better context management, smarter refactors, and more persistent understanding of a codebase. Efficiency becomes part of the sales pitch.
5. The European/regional angle
For European developers and companies, the new tier hits a very practical nerve: budgeting. Many EU startups and SMEs already pay for GitHub, cloud infrastructure and collaboration suites in dollars or euros; adding another €90–€110 per power user for ChatGPT Pro forces real prioritisation. In a typical European dev team where salary costs are lower than in Silicon Valley, the ROI calculation for a $100/month tool is more sensitive.
At the same time, the EU regulatory environment adds extra layers. Under GDPR, companies need to think carefully about what code and data they send to US‑hosted AI services, especially if repositories include personal data or business secrets. The upcoming EU AI Act will introduce transparency and risk-management obligations for “high‑risk” uses of AI. While a coding assistant is low-risk compared with, say, a medical diagnosis system, its logs can still expose sensitive information.
This creates an interesting dynamic: legally cautious European corporates may restrict Codex access to a smaller pool of senior developers, making a $100 tier more palatable for those few, while everyone else uses lower tiers or on‑prem alternatives. Meanwhile, European model providers like Mistral or Aleph Alpha will point at this pricing as proof that there is room for regionally hosted, privacy‑optimised coding assistants — possibly with EU‑only data residency.
Currency and tax also matter. In the eurozone and across the UK, Switzerland and the Nordics, exchange rates and VAT treatment can make the true cost of a “$100” US subscription noticeably higher. For a startup in Berlin, Ljubljana, Madrid, Zagreb or Warsaw deciding whether every senior engineer needs this level of capacity, that price friction is non-trivial.
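A quick back-of-the-envelope calculation shows how a "$100" subscription lands in the €90–€110 range mentioned above. The exchange rate and VAT rate below are illustrative assumptions (not current figures), and actual treatment depends on whether the buyer is a VAT-registered business and on the country involved.

```python
# Rough sketch of the "true" monthly cost of a $100 USD subscription
# for an EU consumer. Both the exchange rate and the VAT rate are
# assumed placeholder values, not live figures.
usd_price = 100.0
usd_to_eur = 0.92   # assumed USD->EUR exchange rate
vat_rate = 0.19     # e.g. standard German VAT; varies by country

net_eur = usd_price * usd_to_eur
gross_eur = net_eur * (1 + vat_rate)
print(f"net: €{net_eur:.2f}, with VAT: €{gross_eur:.2f}")
```

Under these assumptions the headline $100 becomes roughly €109 per seat per month, which is the number a finance team will actually see on the invoice.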
6. Looking ahead
Expect this tier not to be the final word but the midpoint in a more granular pricing ladder. If adoption is strong, OpenAI is likely to introduce team and organisation plans where $100-equivalent capacity can be pooled or shared, combined with admin controls, usage analytics and perhaps limited on‑prem or VPC options for sensitive codebases.
On the competitive front, Anthropic almost has to respond — either by increasing Claude Code capacity at $100, adding differentiated features (e.g. deeper repo understanding, stronger security guarantees), or experimenting with regional pricing. GitHub Copilot, which is deeply integrated into the IDE and already entrenched in many teams, may lean harder on that integration advantage and less on a raw capacity arms race.
Two other questions to watch:
- Will we see per‑project or seasonal pricing? Many consultancies and agencies in Europe operate in bursts. A flexible “high-capacity month” add‑on could fit real‑world usage better than a constant $100.
- How aggressively will OpenAI enforce limits after May 31? The temporary boosted caps are a classic growth tactic. Once users design workflows around that freedom, pulling it back too far risks backlash.
Over the next 12–18 months, the most important shift won’t be a dramatic new model demo; it will be the mundane normalisation of AI capacity planning. Engineering managers will talk about “Codex budget” the way they talk about “CI minutes” or “GPU hours” today. The tools that make that budgeting predictable and fair will win.
7. The bottom line
OpenAI’s $100 ChatGPT Pro plan is less a discount and more an admission that AI coding capacity is now core developer infrastructure. It gives serious users a realistic middle ground, tightens the screws on competitors, and accelerates the transition from “AI as magic” to “AI as metered utility.” The real question for teams — especially in Europe — is simple: do you measure the productivity lift from these tools well enough to justify putting them next to cloud and GitHub on your monthly spend?
