Why big‑name money is betting on agentic cybersecurity

January 29, 2026
[Illustration: an AI shield scanning the internet for fake accounts and domains]

AI hasn’t just made scams more convincing – it has industrialised them. Brand impersonation, fake apps, spoofed domains and fraudulent ads now move at machine speed, while most corporate defences still move at human speed. That gap is where Outtake wants to live. The company’s fresh funding round matters less for its size and more for who is lining up behind a very specific idea: that autonomous AI agents will run the next generation of cybersecurity. In this piece we’ll look at what Outtake is really selling, why tech royalty is backing it, and what this means for enterprises – especially in Europe – over the next few years.

The news in brief

According to TechCrunch, U.S.-based startup Outtake has raised a $40 million Series B round to scale its AI-driven cybersecurity platform. The round is led by Iconiq Capital partner Murali Joshi – known for bets on Anthropic, Datadog, Drata and 1Password – with an unusually star‑studded list of angel investors, including Microsoft CEO Satya Nadella, Palo Alto Networks CEO Nikesh Arora, hedge fund manager Bill Ackman, Palantir CTO Shyam Sankar, Anduril co‑founder Trae Stephens, former OpenAI VP Bob McGrew, Vercel CEO Guillermo Rauch and former AT&T CEO John Donovan.

Outtake, founded in 2023 by ex‑Palantir engineer Alex Dhillon, focuses on detecting and taking down digital identity fraud at scale: fake corporate accounts, malicious look‑alike domains, rogue mobile apps, fraudulent ads and other forms of online impersonation. TechCrunch reports that customers include OpenAI, Pershing Square, AppLovin and U.S. federal agencies. The company claims sixfold year‑over‑year ARR growth, a more than tenfold increase in customers, and 20 million potential cyberattacks scanned last year.

Why this matters

On the surface, this is yet another AI security startup raise. Underneath, it’s a clear signal that the security stack is shifting from dashboards for humans to swarms of autonomous agents.

Outtake’s core promise is to turn brand and identity protection – historically a labour‑intensive, outsourced service – into a software problem. Instead of analysts hunting for fake accounts and filing tickets with platforms and registrars, AI agents continuously crawl the web, classify suspicious assets and automatically initiate takedowns. If this works reliably at the scale Outtake claims, several things change:

Who benefits?

  • Large consumer brands, financial institutions and platforms get 24/7 defence against increasingly sophisticated impersonation.
  • Security and fraud teams can redeploy humans from low‑value searching and form‑filling to higher‑value investigation and strategy.
  • Cloud and AI platforms (Microsoft, OpenAI, etc.) strengthen their own ecosystems by backing a vendor that protects their customers’ identities.

Who loses?

  • Traditional brand‑protection firms built around manual monitoring and legal workflows.
  • In‑house teams and agencies that still rely on spreadsheets, keyword searches and one‑off reports.
  • Potentially, smaller players who can’t access similar tooling and remain exposed.

The deeper shift is psychological: big enterprises appear increasingly comfortable letting AI not just recommend actions, but trigger enforcement in the wild – taking down domains, ads and accounts without a human in every loop. That raises obvious questions about false positives, due process and appeal, but it also foreshadows how a lot of cybersecurity will look in three to five years.
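
To make the crawl‑classify‑takedown loop concrete, here is a deliberately minimal sketch of how such a pipeline could be wired together. Every name in it – the functions, the data model, the 0.95 threshold – is invented for illustration and says nothing about Outtake’s actual implementation:

```python
# Purely illustrative sketch of a crawl -> classify -> take down loop.
# All names (scan_sources, classify, request_takedown) are hypothetical.
from dataclasses import dataclass

@dataclass
class Asset:
    url: str
    kind: str           # "domain", "ad", "app", "account"
    score: float = 0.0  # model-assigned impersonation likelihood

def scan_sources(brand: str) -> list[Asset]:
    """Crawl public sources (search, ad libraries, app stores) for assets
    that mention or resemble the brand. Stubbed here."""
    return [Asset("https://examp1e-login.com", "domain")]

def classify(asset: Asset) -> Asset:
    """Score the asset with a model; stubbed with a fixed score."""
    asset.score = 0.97
    return asset

def request_takedown(asset: Asset) -> None:
    """File an automated takedown (registrar, platform API, abuse desk)."""
    print(f"takedown requested for {asset.url} (score={asset.score:.2f})")

AUTO_THRESHOLD = 0.95  # below this, route to a human analyst instead

for asset in map(classify, scan_sources("ExampleCorp")):
    if asset.score >= AUTO_THRESHOLD:
        request_takedown(asset)
    else:
        print(f"queued for human review: {asset.url}")
```

The interesting engineering hides in the stubs – crawling coverage and classifier precision – but the loop itself is simple enough that the hard questions become thresholds and accountability, not plumbing.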

The bigger picture

Outtake fits into three overlapping trends.

1. The rise of “agentic” cybersecurity.
For years, vendors have promised AI‑powered detection, but the actual workflows remained human‑centric: alerts, tickets, dashboards. The new wave – Outtake on identity, but also players in email, cloud and endpoint security – embeds AI agents that can observe, decide and act: rotating credentials, isolating devices, blocking sessions or, as here, orchestrating takedowns. This is closer to autonomous operations than traditional SIEM + SOC models.
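
As a toy illustration of that observe‑decide‑act pattern – emphatically not any vendor’s real API – a dispatch layer with a confidence floor might look like this:

```python
# Minimal sketch of an "observe, decide, act" loop for agentic security.
# Action names and the 0.9 floor are illustrative, not any real product.
ACTIONS = {
    "leaked_credential": lambda ctx: print(f"rotating credential {ctx}"),
    "infected_endpoint": lambda ctx: print(f"isolating device {ctx}"),
    "hijacked_session":  lambda ctx: print(f"blocking session {ctx}"),
    "impersonation":     lambda ctx: print(f"orchestrating takedown of {ctx}"),
}

def act(alert_type: str, context: str, confidence: float, floor: float = 0.9):
    """Dispatch an automated response, falling back to a human below the
    confidence floor -- the key difference from alert-only SIEM workflows."""
    if confidence < floor or alert_type not in ACTIONS:
        print(f"escalating to analyst: {alert_type} / {context}")
    else:
        ACTIONS[alert_type](context)

act("infected_endpoint", "laptop-4711", confidence=0.97)   # acts autonomously
act("impersonation", "examp1e-login.com", confidence=0.72) # escalates
```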

2. The industrialisation of impersonation.
Generative models have made it trivial to mass‑produce convincing landing pages, deepfake profiles and ad creatives. Phishing‑as‑a‑service now bundles infrastructure, content and targeting. The attack surface is no longer just corporate IT; it’s the entire public internet footprint of a brand. Manual brand‑protection workflows simply don’t scale to millions of dynamic assets.
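
To see why this is a job for machines, consider a deliberately simple look‑alike‑domain check: normalise common homoglyphs, then measure string similarity against the brand’s real domains. Production systems add punycode decoding, TLD permutations and ML scoring, but the shape is the same:

```python
# Toy check for look-alike domains: normalise common homoglyphs, then
# compare string similarity to the brand's real domains (hypothetical list).
import difflib

HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s", "@": "a"})
BRAND_DOMAINS = {"example.com", "example.co.uk"}  # invented for illustration

def looks_like_brand(candidate: str, cutoff: float = 0.85) -> bool:
    normalised = candidate.lower().translate(HOMOGLYPHS)
    return any(
        difflib.SequenceMatcher(None, normalised, real).ratio() >= cutoff
        for real in BRAND_DOMAINS
    )

for domain in ["examp1e.com", "exarnple.com", "unrelated.net"]:
    print(domain, "->", looks_like_brand(domain))
```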

3. Infrastructure investors moving down the stack.
Iconiq and several of the angels typically back infrastructure, defence and foundational AI plays – Anthropic, Palantir, Anduril. Their interest here suggests they see identity protection as a strategic control point in an AI‑mediated internet. Whoever owns the data and feedback loops about what is “legitimate” versus “fraudulent” online gains a powerful position, not unlike what spam filters became for email.

It’s also notable that OpenAI is both a customer and a public case study. Foundation model providers are under pressure to demonstrate “agentic” real‑world applications that justify their compute burn. Startups like Outtake showcase one of the more defensible, enterprise‑friendly uses of reasoning models: scaling something that used to be overwhelmingly manual.

The European / regional angle

For European companies, this is not just a question of better tooling – it intersects directly with regulation.

Under the Digital Services Act (DSA), large platforms operating in the EU must tackle illegal content and scams more aggressively, including fraudulent ads and impersonation. Tools like Outtake’s can become evidence that brands and platforms are taking “appropriate measures” to detect and flag abuse. Expect to see contracts where DSA compliance is a core justification for deploying such systems.

The NIS2 Directive, which EU member states were required to transpose into national law by October 2024, raises security obligations for essential and important entities (finance, energy, health, digital infrastructure, etc.). Digital identity abuse targeting customers or partners can fall squarely into the “security of network and information systems” these organisations must protect. Automated external threat monitoring will move from nice‑to‑have to a standard audit question.

Then there’s GDPR and the EU AI Act, whose obligations are phasing in through 2026 and 2027. A U.S. startup scanning massive amounts of publicly available data on European users and companies will need to think carefully about data minimisation, legal bases for processing and cross‑border transfers post‑Schrems II. If Outtake or its competitors deploy models that profile individuals, they may fall into high‑risk AI categories, pulling in requirements for transparency, human oversight and robustness documentation.

European alternatives will likely lean into this: building regionally hosted, GDPR‑native offerings, potentially connected with upcoming EU digital identity wallet initiatives. Berlin, Tallinn and the Nordics already have startups exploring continuous external attack surface management with strong privacy guarantees. The opportunity is obvious: pair EU‑grade compliance with agentic automation, and you get a compelling answer for CISOs in Frankfurt or Paris who are wary of shipping half the internet into yet another U.S. data lake.

Looking ahead

If Outtake executes, we should expect several developments over the next 12–24 months.

  1. Deeper integration with platforms. The real power of an agentic takedown engine emerges when it plugs directly into ad networks, app stores, domain registrars, email providers and social platforms. Instead of raising tickets, agents will hit APIs that instantly disable assets (see the sketch after this list). This will require trust, SLAs and auditability – and it will give integrated vendors tremendous leverage.

  2. Codification into compliance checklists. Once a few high‑profile cases show that automated brand and identity protection reduces fraud and regulatory risk, boards and auditors will start asking, “What are we doing here?” Expect Gartner Magic Quadrants, procurement frameworks and – inevitably – checkbox‑driven deployments.

  3. An arms race of agents. Attackers will not sit still. The same models that power defence can power offensive tools that automatically spawn new domains, rotate content and probe which takedown paths are slowest. Think bot vs. bot, where speed and data advantage decide who wins.

  4. Tough questions on overreach. If an AI agent misclassifies a critical activist campaign or parody site as fraud and takes it down, who is accountable – the brand, the vendor, the platform? European regulators in particular are unlikely to accept “the AI did it” as an excuse. Expect pressure for explainability, appeal mechanisms and logging.
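
On point 1, the mechanics are worth sketching. Below is a hypothetical Python example of what a direct takedown call with an idempotency key and an audit trail could look like – the endpoint, payload fields and authentication are all invented for illustration:

```python
# Hypothetical: an agent calls a platform's takedown API directly instead of
# filing a ticket. The endpoint, fields and token handling are all invented.
import json, uuid, urllib.request
from datetime import datetime, timezone

def takedown(asset_url: str, evidence: dict, api_base: str, token: str) -> str:
    request_id = str(uuid.uuid4())  # idempotency key, so retries are safe
    payload = {
        "request_id": request_id,
        "asset": asset_url,
        "evidence": evidence,  # what the classifier saw, kept for appeals
        "requested_at": datetime.now(timezone.utc).isoformat(),
    }
    req = urllib.request.Request(
        f"{api_base}/v1/takedowns",  # invented endpoint
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    # Log before and after the call: auditability is the price of autonomy.
    print("AUDIT submit:", json.dumps(payload))
    with urllib.request.urlopen(req) as resp:
        print("AUDIT result:", resp.status)
    return request_id
```

The idempotency key and evidence payload are the point: if platforms are going to let third‑party agents disable assets directly, every request has to be replay‑safe, attributable and appealable – which loops straight back to point 4.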

On the business side, a $40 million Series B at this stage is enough to scale go‑to‑market and harden the product, but not enough to dominate the category outright. Competitors will emerge, especially in Europe and Israel. M&A is plausible: large security vendors, clouds or ad platforms may prefer to buy rather than build their own agentic identity‑protection layer.

The bottom line

Outtake’s funding round isn’t about the number; it’s about the narrative. Some of the most powerful figures in tech are betting that AI agents will become the default way we defend digital identity in an internet flooded with synthetic content. That is both promising and uncomfortable: promising because humans can’t keep up with machine‑scale fraud, uncomfortable because we’re delegating yet another layer of judgement to opaque systems and concentrated vendors.

As your organisation leans into AI, a hard question looms: will you trust an external, largely black‑box agent to police how your brand and users appear across the web – or will you insist on keeping more of that capability, and accountability, in‑house?
