New Relic’s AI agents show where observability is really heading

February 24, 2026
5 min read
  1. HEADLINE & INTRO

New Relic’s latest launch is more than another “we added AI” press release. By combining an AI agent platform with first‑class OpenTelemetry (OTel) support, the company is quietly redrawing the observability stack. Instead of humans staring at dashboards, New Relic is betting on fleets of domain‑specific agents that watch telemetry, decide what matters and trigger action — all while sitting on top of open standards.

The move raises big questions: who will own the AI control plane for production systems, how open will that future really be, and what does this mean for enterprises already juggling Datadog, Dynatrace, Prometheus and a half‑dozen homegrown tools?

  2. THE NEWS IN BRIEF

According to TechCrunch, New Relic has introduced two major additions to its platform.

First, a no‑code “New Relic Agentic Platform” that lets enterprises assemble AI agents focused on observability tasks. These agents monitor application and infrastructure data for bugs and issues, can be deployed from pre‑built templates, and can also orchestrate and manage existing bots. The platform supports the Model Context Protocol (MCP), allowing AI agents to connect safely to external data sources and other New Relic tools.
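To make the MCP angle concrete, here is an illustrative sketch of the shape an MCP tool definition takes: per the protocol specification, each tool exposes a name, a description and a JSON Schema for its inputs. The tool name `query_error_rate` and its fields are hypothetical examples, not New Relic's actual API.

```python
# Illustrative only: a tool definition in the shape MCP prescribes
# (name, description, JSON Schema input), expressed as a plain dict.
# The tool itself is a hypothetical observability example.
error_rate_tool = {
    "name": "query_error_rate",
    "description": "Return the error rate for a service over a time window.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "service": {"type": "string"},
            "window_minutes": {"type": "integer", "minimum": 1},
        },
        "required": ["service"],
    },
}

def validate_call(tool: dict, arguments: dict) -> bool:
    """Minimal check that a call supplies the tool's required arguments."""
    required = tool["inputSchema"].get("required", [])
    return all(key in arguments for key in required)

print(validate_call(error_rate_tool, {"service": "checkout"}))  # True
```

The point of the standard schema is exactly what the article describes: any MCP-aware agent can discover this tool and call it safely, without vendor-specific glue code.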

Second, New Relic has upgraded its application performance monitoring (APM) agents with integrated OpenTelemetry (OTel) capabilities. Enterprises can now send OTel data directly into New Relic and manage those streams alongside other telemetry in a single place. The company argues this reduces the operational burden of running separate OTel collectors and addresses fragmentation that has slowed wider OTel adoption.
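In practice, "send OTel data directly" means pointing a standard OpenTelemetry SDK at a vendor OTLP endpoint via the SDK's well-known environment variables. The endpoint and header below follow New Relic's commonly documented OTLP ingest pattern, but verify the exact values against current documentation before relying on them; the service name and key are placeholders.

```shell
# Standard OpenTelemetry SDK environment variables; any OTel-instrumented
# application reads these at startup. No separate collector required.
export OTEL_SERVICE_NAME="checkout-service"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.nr-data.net:4317"
export OTEL_EXPORTER_OTLP_HEADERS="api-key=<YOUR_NEW_RELIC_LICENSE_KEY>"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
```

Because these variables are part of the OTel specification rather than any vendor's SDK, switching backends is, at least in principle, a matter of changing two lines.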

  3. WHY THIS MATTERS

This launch is significant because it attacks two of the biggest pain points in modern operations at once:

  • Too many tools, not enough people
  • Too much data, not enough action

The AI agent platform is New Relic’s answer to the AIOps hype cycle — but with a narrower, more credible focus. Rather than promising a general‑purpose AI that magically fixes everything, New Relic is framing these agents as specialists for observability outcomes: detecting regressions, correlating signals, opening incidents, triggering runbooks.

Beneficiaries:

  • Existing New Relic customers get a path to AI‑driven operations without building their own agent frameworks or wiring LLMs into production systems.
  • Smaller teams without in‑house ML or SRE depth can “rent” intelligent automation instead of hiring scarce experts.

Potential losers:

  • Independent OTel vendors and collector projects that monetise the complexity of running telemetry pipelines; if New Relic makes “just send us your OTel” the default, some of that value disappears.
  • Generic AIOps platforms that sit above monitoring tools; if observability vendors offer credible agent layers themselves, the room for neutral “AI on top of everything” shrinks.

Strategically, this is New Relic doubling down on a classic platform play: welcome open standards at the edge (OTel, MCP) while keeping the high‑value intelligence — the AI agents and correlated data — inside its own ecosystem.

  4. THE BIGGER PICTURE

New Relic’s move sits squarely in a broader shift toward “agentic” infrastructure operations.

Salesforce’s Agentforce (late 2024) and OpenAI’s Frontier tooling earlier this year pushed the idea that production systems will soon be stewarded by swarms of semi‑autonomous agents. Gartner now describes agent platforms as “necessary infrastructure” for enterprise AI roll‑out. Observability is a natural early target: it is data‑rich, process‑heavy and already full of routine, automatable decisions.

We have been here before in weaker form. The 2010s promised “self‑healing infrastructure” via runbooks, rule engines and early AIOps. Those systems were brittle and noisy because they couldn’t really understand context. Today’s LLM‑backed agents, combined with rich telemetry, can interpret logs, metrics and traces in near‑real time, compare against architectural knowledge and propose — or even execute — remediations.
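The "correlate signals, propose remediation" loop described above can be sketched in a few lines. Everything here is an illustrative assumption: the spike threshold, the 30-minute correlation window and the rollback suggestion are invented for the example, not any vendor's actual logic.

```python
# Minimal sketch of an agent triage step: detect an error-rate spike,
# correlate it with recent deploys, and propose (not execute) an action.
# Thresholds and windows are illustrative assumptions.
from datetime import datetime, timedelta

def triage(error_rates: list[float], deploys: list[datetime],
           now: datetime, spike_threshold: float = 0.05) -> str:
    """Return a proposed action if the latest error rate spiked."""
    if not error_rates or error_rates[-1] < spike_threshold:
        return "no action: error rate within normal bounds"
    # Correlate the spike with any deploy in the last 30 minutes.
    recent = [d for d in deploys if now - d <= timedelta(minutes=30)]
    if recent:
        return f"propose rollback of deploy at {recent[-1].isoformat()}"
    return "open incident: error spike with no recent deploy"

now = datetime(2026, 2, 24, 12, 0)
print(triage([0.01, 0.02, 0.12], [datetime(2026, 2, 24, 11, 45)], now))
```

What separates this from 2010s rule engines is not the loop itself but what an LLM-backed agent can put inside it: free-text log interpretation and architectural context rather than brittle hand-written rules.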

Competitively, New Relic is catching up and differentiating at the same time:

  • Datadog has AI assistants and automation workflows;
  • Dynatrace markets its Davis AI as an early AIOps brain;
  • Elastic embeds generative AI into search and observability.

Where New Relic stands out is the explicit embrace of MCP and the tight coupling with OTel. That combination points toward an ecosystem where:

  • OTel standardises how telemetry is collected.
  • MCP standardises how AI agents talk to tools and data sources.
  • Vendors compete on what their agents can actually do with all that.

In other words, the battle is moving from data ingestion to AI‑driven decision‑making.

  5. THE EUROPEAN / REGIONAL ANGLE

For European enterprises, this announcement intersects directly with two realities: stringent regulation and a chronic skills shortage in operations.

EU rules like NIS2 and the Digital Operational Resilience Act (DORA) are pushing critical sectors — finance, energy, healthcare — to prove that they can detect, respond to and report incidents quickly. AI observability agents that can surface anomalies, help generate incident reports and cross‑reference telemetry with compliance runbooks are almost purpose‑built for that world.

At the same time, EU organisations are wary of US cloud vendors because of GDPR, Schrems II and data‑transfer concerns. Telemetry is not always “anonymous”; logs and traces can contain user identifiers, IP addresses and payload fragments. Routing all OTel data into a US‑based SaaS platform may clash with internal policies or local regulators’ expectations.

That tension creates space for European alternatives. Companies like Elastic (with European roots), plus regional monitoring tools and sovereign‑cloud providers, can position themselves as “AI‑powered but data‑sovereign” observability platforms. Expect them to lean heavily on OTel compatibility and on‑prem / EU‑only deployment options.

For European CIOs, the key question will be: can New Relic’s agent platform be deployed and governed in a way that satisfies not just security teams, but also data‑protection officers and upcoming EU AI Act requirements around transparency and human oversight?

  6. LOOKING AHEAD

Over the next 12–24 months, expect three things to happen.

1. Agent platforms will specialise by domain.
We’ll see separate ecosystems for observability agents, security agents, CRM agents and so on. New Relic is clearly betting on the observability vertical: deep rather than broad. Success will depend on whether these agents actually reduce MTTR and toil, or just add another layer of noisy alerts.

2. Governance and safety will move centre‑stage.
Right now, most vendors market “AI copilots” that suggest actions. The real savings arrive when agents can execute changes — scale a cluster, roll back a release, throttle traffic. At that point, regulators and internal risk teams will demand:

  • clear audit trails of AI decisions;
  • robust guardrails (e.g. safe‑mode, change windows);
  • mechanisms to prove that agents behave as designed.
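The controls listed above are straightforward to sketch in code: every agent-proposed action passes through a guardrail that enforces a change window and appends to an audit trail. The window hours, safe-mode default and action format are illustrative assumptions.

```python
# Sketch of agent guardrails: a change-window check plus an audit trail.
# Every decision is recorded, whether or not the action runs.
from datetime import datetime

AUDIT_LOG: list[dict] = []

def execute(action: str, when: datetime, safe_mode: bool = True,
            window: tuple[int, int] = (9, 17)) -> bool:
    """Run an agent-proposed action only inside the change window."""
    allowed = window[0] <= when.hour < window[1] and not safe_mode
    AUDIT_LOG.append({
        "action": action,
        "time": when.isoformat(),
        "executed": allowed,
        "reason": "ok" if allowed else "blocked by guardrail",
    })
    return allowed

# Safe mode on by default: the action is logged but never executed.
execute("scale cluster web-1 to 12 nodes", datetime(2026, 2, 24, 10, 0))
print(AUDIT_LOG[-1]["executed"])  # False
```

The audit trail is the piece regulators care about most: it is evidence, after the fact, that the agent behaved as designed.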

Vendors that can map these controls to EU AI Act obligations will gain an edge in Europe.

3. OpenTelemetry will quietly become non‑optional.
By baking OTel into its APM agents and offering to run the collectors, New Relic is conceding that the standard has won — and turning that into an on‑ramp. Others will follow. Within a few years, building bespoke telemetry pipelines will look as anachronistic as writing your own TCP stack.

The unanswered question: will enterprises let one vendor own both the observability data plane and the AI control plane, or will they insist on a more modular, multi‑vendor architecture?

  7. THE BOTTOM LINE

New Relic’s AI agent and OTel push is a smart, defensive‑and‑offensive move in a crowded market: embrace open standards at the edges, add proprietary intelligence in the middle and hope customers stay for the automation, not just the dashboards.

Enterprises should experiment aggressively — but treat these agents like junior SREs, not infallible oracles: supervise them, restrict their permissions and demand evidence that they really reduce incidents. The real question for readers: do you want your next major outage investigated by a human first, or by an AI that has already rewritten parts of your runbook?
