The Handshake That Didn’t Happen: What Altman vs. Amodei in India Really Tells Us About AI Power

February 19, 2026
Image: Sam Altman and Dario Amodei standing apart on stage at India's AI summit as other leaders hold hands in solidarity

1. Introduction

On most days, AI geopolitics is invisible: API contracts, model benchmarks, quiet lobbying. In New Delhi this week, it became a body-language meme. When India’s prime minister asked AI leaders on stage to join hands in a symbolic gesture, everyone complied—except OpenAI’s Sam Altman and Anthropic’s Dario Amodei, who left a clear gap between them.

That one missing handshake says more about the current state of AI than a dozen policy papers. In this piece, we’ll unpack what that awkward moment reveals about the business model war between labs, India’s bid to become an AI super-hub, and why Europe should be paying very close attention.

2. The news in brief

According to TechCrunch, the incident took place at the India AI Impact Summit 2026 in New Delhi, where Prime Minister Narendra Modi invited executives on stage to hold hands and raise them as a sign of unity around AI innovation.

All the leaders present reportedly joined hands—except Sam Altman, CEO of OpenAI, and Dario Amodei, CEO of Anthropic. The two, who head rival frontier AI labs, kept their hands apart, creating a visibly awkward gap.

TechCrunch notes this comes after months of escalating rivalry. OpenAI recently announced it would introduce advertising into ChatGPT. Anthropic then ran Super Bowl ads implying Claude would never show ads, presenting that as a matter of principle. Altman later criticised Anthropic’s campaign on X, describing it as misrepresenting OpenAI’s plans and labelling the approach as both dishonest and authoritarian.

Both companies used the New Delhi summit to showcase their India strategy. OpenAI announced two new offices in India, a partnership with IT giant TCS, and education-focused tools. Anthropic, meanwhile, opened its own India office and struck a deal with Infosys for internal and client-facing deployments of its AI tools.

3. Why this matters

The non-handshake is not about personal dislike; it’s about a deepening structural conflict in AI.

First, this is a business model clash dressed up as ethics. OpenAI is experimenting with advertising inside conversational AI. Anthropic is positioning itself as the anti-ad model, suggesting that mixing ads with AI assistants inevitably compromises user trust. That’s not just comms—it’s a bid to lock in different coalitions of customers and regulators. Enterprises that already fear data exploitation will be very receptive to Anthropic’s framing.

Second, the India setting is crucial. By announcing offices and major IT partnerships at the same event, both labs are effectively treating India as the next strategic battleground. TCS and Infosys are not just local partners; they are global system integrators that will decide which model is wired into banks, telcos, and governments worldwide. Getting into their default toolkits is worth far more than any Super Bowl commercial.

Third, the optics undermine the narrative of a unified AI community on safety and governance. Governments want to believe that the major labs can at least collaborate on risk mitigation. When the two most prominent US labs literally refuse to join hands during a high-profile diplomatic moment, it reinforces the suspicion that competition will always trump coordination.

Winners in the short term are India’s political leadership and its IT giants, which have successfully turned global lab rivalry into local leverage. The potential losers are users and regulators, who now have to navigate not just technical risks but also increasingly ideological marketing wars about what “responsible AI” even means.

4. The bigger picture

This moment fits into three broader trends.

1. Frontier labs are becoming consumer brands.

For years, OpenAI and Anthropic were infrastructure players known mainly to developers. That changed once ChatGPT and Claude became mainstream products. Super Bowl ads, public jabs on X, and symbolic gestures on stage show that frontier AI is now fighting a classic brand war: safety vs speed, ads vs purity, open vs cautious. The hand gap in New Delhi is the physical manifestation of those brand stories.

2. AI geopolitics is going multipolar.

For a while, the narrative was Sino‑American: US labs vs Chinese tech giants. In reality, the map is more complex. The India AI Impact Summit demonstrates New Delhi’s ambition to be a third pole: not just a market, but a venue where AI’s most powerful actors court political favour and industrial allies. That resembles how Davos turned into a ritual stop for global finance.

Europe, despite hosting leading research groups and enacting the AI Act, has not yet created an equally magnetic AI stage that both OpenAI and Anthropic must attend to announce their big moves.

3. Safety talk is colliding with commercial urgency.

Both OpenAI and Anthropic built their reputations on talking openly about AI risks. But the market’s patience is limited. Enterprises want clear product roadmaps, monetisation plans, and integration paths. Altman’s willingness to experiment with ads signals a readiness to sacrifice some purist ideals for scale. Anthropic’s absolutist “no ads” stance is also a business move: it turns restraint into a premium feature.

We’ve seen versions of this story before in social media, search, and mobile platforms. Ideals of user-first design give way to aggressive growth experiments. The question is whether AI can avoid repeating the surveillance-capitalism playbook—or just repackage it with a chat interface.

5. The European / regional angle

For European readers, the uncomfortable truth is this: the defining symbolic moment of 2026’s AI politics did not happen in Brussels, Berlin, or Paris. It happened in New Delhi, between two US CEOs, on an Indian stage.

Europe has chosen a regulation-first strategy with the AI Act, the Digital Markets Act, and long-established privacy rules like GDPR. That gives the EU leverage over how AI is deployed inside its borders, especially around transparency, data minimisation, and high‑risk use cases.

But governance without gravitational pull is not enough. While India courts both OpenAI and Anthropic with market access and partnerships through TCS and Infosys, Europe has yet to orchestrate an equivalent pan‑European platform that makes it a mandatory stop for AI announcements. Individual events in Paris, London, or Berlin exist, but they do not yet act as a single geopolitical theatre.

For European enterprises, the India deals should be a wake‑up call. The same integrators that now embed OpenAI and Anthropic across Indian clients are active in Europe as well. If European companies do not deliberately push for local or EU‑aligned alternatives—be that Mistral, Aleph Alpha, DeepL, or open‑source stacks—they may find their AI infrastructure quietly standardised around US labs via global service contracts.

6. Looking ahead

Several things are worth watching in the coming 12–24 months.

1. The ad model experiment.

If OpenAI proceeds with ads in ChatGPT, the implementation details will matter more than the headlines. Contextual assistance for shopping or travel inside a chat could be tolerable; opaque, behaviour‑shaping ad optimisation would trigger regulatory backlash, especially in the EU. Anthropic will be under pressure to prove that a no‑ads stance is commercially sustainable rather than a temporary marketing hook.

2. The integrator battleground.

TCS and Infosys are now committed partners of OpenAI and Anthropic respectively in India. Expect similar moves elsewhere: Accenture, Capgemini, Deloitte, and others will be courted to pick a “preferred” frontier model stack, and they may well choose different partners by region. Europe should expect intense lobbying around public-sector and critical‑infrastructure contracts.

3. Safety cooperation vs cold war.

Public spats and awkward stage moments make it harder to believe in sincere safety collaboration between labs. Yet governments increasingly demand joint commitments on model evaluations, red‑teaming, and incident reporting. The open question is whether OpenAI and Anthropic can maintain a thin layer of technical cooperation on safety while escalating commercial confrontation everywhere else.

4. India as a template.

If India successfully uses its market size to extract offices, R&D commitments, and favourable partnerships from both labs, expect other regions—from the Gulf to Latin America—to try similar playbooks. Europe has the advantage of regulatory heft, but has been slower to act as a unified commercial counterpart.

7. The bottom line

The missing handshake in New Delhi is more than an awkward photograph. It crystallises a turning point: frontier AI is leaving its research‑lab phase and becoming a global industrial rivalry, fought through ad models, systems integrators, and geopolitical stages.

If users, companies, and regulators do not actively shape that competition, the default outcome will look uncomfortably like previous tech eras—just with more powerful tools. The open question is whether we want AI’s future to be defined by who refuses to hold whose hand, or by the guardrails we insist on together.
