ServiceNow Bets on Multi‑Model AI: What Its Anthropic Deal Really Means

January 29, 2026
5 min read

ServiceNow’s new AI power play

ServiceNow is quietly becoming one of the most important AI gatekeepers in enterprise software. Within a single week, it has signed public deals with both Anthropic and OpenAI — and in the latest move, Anthropic’s Claude becomes the “preferred” engine for its AI workflows and agent builder. This isn’t just another glossy AI partnership announcement; it’s a clear signal about where value is shifting in the AI stack. In this piece, we’ll unpack what the Anthropic deal changes, why the multi‑model strategy matters, and how it will reshape AI adoption in large organisations.


The news in brief

According to TechCrunch, ServiceNow has signed a multi‑year agreement with Anthropic, just a week after announcing a separate partnership with OpenAI. The deal makes Anthropic’s Claude family the preferred models across ServiceNow’s AI‑driven workflow products.

Claude is now the default engine behind ServiceNow Build Agent, the company’s AI agent builder that lets developers create agentic workflows and applications. Anthropic’s models will also be rolled out internally to ServiceNow’s roughly 29,000 employees, including Claude Code for its engineering teams.

ServiceNow declined to disclose the length or financial terms of the agreement. The company’s president, Amit Zavery, told TechCrunch that ServiceNow is deliberately pursuing a multi‑model approach: OpenAI, Anthropic and others are positioned as complementary options that can be orchestrated under a single AI platform with unified governance, security and auditability.

Anthropic, meanwhile, adds ServiceNow to a growing list of large enterprise partners that already includes Allianz, Accenture, IBM, Deloitte and Snowflake.


Why this matters: the rise of the AI orchestration layer

This deal isn’t mainly about Anthropic “winning” a logo or OpenAI “losing” default status. It’s about where the power sits in the AI value chain.

ServiceNow is positioning itself as the orchestration layer for enterprise AI: the place where companies don’t buy a model, they buy outcomes — automated approvals, resolved tickets, generated knowledge articles, agentic workflows spanning HR, IT and finance.

Who benefits?

  • ServiceNow strengthens its pitch as an AI‑native platform rather than just a workflow database with chat sprinkled on top. If customers come to ServiceNow for AI, they’re less likely to churn to horizontal AI platforms.
  • Anthropic gains deep, sticky integration into a platform already embedded at the heart of many large enterprises’ processes. That’s more defensible than just being another API in a crowded LLM marketplace.
  • Enterprise customers get something they’ve been quietly asking for: model choice without integration pain. Few CIOs want to wire three different LLM vendors into 40 workflows and then explain the governance story to the board.

Who might lose?

  • Point-solution AI startups that only offer a thin wrapper around a single model now face platforms like ServiceNow that can call multiple frontier models and sit closer to system-of-record data.
  • Model vendors without strong enterprise distribution risk becoming commodities if they’re not part of these orchestration hubs.

The immediate implication: AI in the enterprise will be decided less at the model layer and more at the workflow and governance layer. ServiceNow wants to own exactly that.


The bigger picture: from model wars to workflow wars

The Anthropic deal fits into a broader shift: the model wars are ending in a ceasefire, and the workflow wars are beginning.

Over the last two years we’ve seen:

  • Cloud platforms (Microsoft Azure, Google Cloud, AWS) all offering model marketplaces with multiple LLM providers.
  • Data platforms like Databricks and Snowflake integrating not just one, but several commercial and open‑source models.
  • CRM and productivity vendors (Salesforce, HubSpot, Microsoft 365) layering AI across their suites with a similar multi‑model rhetoric.

ServiceNow is now doing the same for enterprise workflows. The message is clear: no single model will dominate every use case, and enterprises don’t want to bet the company on one research lab’s roadmap.

Historically, we’ve seen this pattern before. In the early cloud era, developers chased specific infrastructure features from AWS, Azure, or Google. Over time, the differentiator moved up the stack to platform capabilities (security, integration, governance). Today, AI feels similar: base models are converging in capabilities, and differentiation is shifting to how well those models are embedded in business processes.

Compared to competitors:

  • Microsoft leans on tight OpenAI integration and its own models inside the Microsoft 365 and Dynamics ecosystems.
  • Salesforce uses a “bring your own model” approach under its Einstein umbrella, plus its own models and partnerships.
  • SAP is weaving AI into its ERP and business applications, with a strong focus on regulated industries.

ServiceNow’s twist is to make agent builders and workflows the primary abstraction. You don’t start by choosing a model; you start by defining an outcome and letting the platform orchestrate models behind the scenes. If this works, the AI platform brand (ServiceNow) may matter more to buyers than the underlying model brand in many day‑to‑day decisions.
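To make the "outcome first, model second" abstraction concrete, here is a minimal, hypothetical sketch of what such a routing layer might look like. Everything in it — the `route()` function, the `ROUTING_TABLE`, the outcome names — is invented for illustration; it is not ServiceNow’s or Anthropic’s actual API.

```python
# Toy "outcome-first" orchestration: the caller names a business outcome,
# the platform picks a model and records the rationale for auditors.
# All names here are hypothetical, not a real vendor API.

ROUTING_TABLE = {
    # outcome type          -> (model family, rationale kept for governance)
    "code_generation":      ("claude", "preferred engine for agent building"),
    "ticket_summarisation": ("gpt",    "complementary option, same governance"),
}

def route(outcome_type: str, default: str = "claude") -> dict:
    """Choose a model for a requested outcome and log why it was chosen."""
    model, reason = ROUTING_TABLE.get(outcome_type, (default, "fallback default"))
    return {"outcome": outcome_type, "model": model, "audit_reason": reason}

# The buyer specifies the outcome; the platform decides the model.
decision = route("code_generation")
```

The point of the sketch is the shape of the interface: the model name is an output of the platform’s policy, not an input chosen by the business user.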


The European and regional angle

For European organisations, this kind of partnership lands in a very different regulatory and cultural environment than in the US.

The EU AI Act, now entering implementation, will treat many AI systems used in HR, critical infrastructure, finance or public services as high-risk. That means strict requirements around risk management, transparency, data governance, and human oversight. An orchestration platform that already handles workflow audit trails and approvals — like ServiceNow — can become a practical tool for complying with those rules.

However, questions European CIOs will ask include:

  • Where is the data processed and stored? Anthropic’s and OpenAI’s infrastructure must align with EU data residency expectations and Schrems II–driven cloud concerns.
  • Can we document which model was used for which decision? The AI Act and sector regulators will expect traceability; a multi‑model platform must not become a black box.
  • How does this compare to European alternatives? Players like Mistral AI, Aleph Alpha and various national cloud providers are positioning themselves as more sovereignty‑friendly options.

For European ServiceNow customers, the upside is clear: they can experiment with frontier AI capabilities without building their own integration and governance stack. The risk is over‑reliance on a US‑centric ecosystem at a time when Brussels is explicitly pushing for digital sovereignty.


Looking ahead: what to watch next

There are several signals worth tracking over the next 12–24 months.

  1. Real productivity metrics, not just demos. ServiceNow is rolling Claude and Claude Code out to 29,000 employees. If the company can credibly show internal productivity gains — faster ticket resolution, reduced development time, lower manual workload — those numbers will become powerful sales collateral.

  2. Model routing transparency. In a multi‑model world, enterprises will increasingly ask: Which model handled this request, and why? Expect pressure on ServiceNow to surface routing logic, cost controls, and quality metrics, especially for regulated sectors.

  3. Pricing and lock‑in. Today, model choice sounds empowering. But as workflows become deeply entangled with ServiceNow’s AI platform, will switching models — or platforms — remain realistic? Watch for contractual terms around data export, model portability, and bring‑your‑own‑key options.

  4. Regulatory scrutiny of large AI platforms. As regulators move from abstract foundation‑model debates to concrete deployments, platforms that orchestrate decisions across HR, IT, customer service and finance will draw attention. ServiceNow may find itself not just a vendor, but a regulated actor in its own right.

  5. Anthropic’s enterprise positioning. Deep integrations like this one may push Anthropic further into the role of “model vendor for the cautious enterprise”, emphasising safety and reliability over raw benchmark scores. Whether that translates into sustained revenue growth will depend on how many ServiceNow‑scale deals it can replicate.

In short, this partnership is an early test of whether multi‑model orchestration can make AI adoption boring, predictable — and finally, measurable.
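The traceability question raised in point 2 above — "which model handled this request, and why?" — reduces, in its simplest form, to an append‑only audit log entry per routing decision. The sketch below is a hypothetical illustration of that record shape; the field names and `audit_record()` function are assumptions, not any vendor’s schema.

```python
# Hypothetical audit trail for multi-model routing: each call records the
# model, its version, and the workflow it served, with a UTC timestamp,
# so regulators can later answer "which model made this decision?".
import json
from datetime import datetime, timezone

def audit_record(request_id: str, model: str,
                 model_version: str, workflow: str) -> str:
    """Serialise one routing decision as a JSON audit-log line."""
    entry = {
        "request_id": request_id,
        "model": model,
        "model_version": model_version,
        "workflow": workflow,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry, sort_keys=True)

line = audit_record("req-42", "claude", "2026-01", "hr_approval")
```

However the real systems implement it, something with this information content is what the EU AI Act’s traceability expectations effectively demand from a multi‑model platform.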


The bottom line

ServiceNow’s Anthropic deal, coming on the heels of its OpenAI partnership, is less about picking a winner and more about owning the control plane of enterprise AI. If it succeeds, CIOs will increasingly buy AI as a governed workflow platform, not as a catalogue of models. That’s good for large customers tired of DIY integrations, but it also concentrates power in a handful of orchestration hubs. The real question for enterprises now is simple: do you want your AI strategy to live inside someone else’s platform — and if so, whose?
