Perplexity’s Incognito Problem: When AI Hype Crashes Into Old-School Ad Tracking

April 2, 2026
5 min read
Illustration of an AI chat interface with incognito icon overlaid by ad tracking logos

1. Introduction

AI search was supposed to feel safer than typing your fears into a public search box. Instead, Perplexity now finds itself at the center of a lawsuit that claims its Incognito Mode is little more than branding wrapped around old-fashioned surveillance. If the allegations hold, this is not just a Perplexity problem – it is a warning shot for the entire AI search industry, which quietly leans on the same ad infrastructure as Web 2.0.

In this analysis, we will unpack what the lawsuit actually says, why Incognito-style marketing has become a legal time bomb, how this collides with mounting privacy regulation, and what it signals for AI tools in both the US and Europe.

2. The news in brief

According to reporting by Ars Technica, an anonymous Perplexity user has filed a proposed US class-action lawsuit against Perplexity, Google, and Meta. The complaint alleges that Perplexity’s AI search product systematically shares user chats with Meta and Google via tracking technologies such as Meta Pixel, Google Ads, and DoubleClick.

The suit claims this sharing affects both logged-in and anonymous visitors, including people who explicitly enabled Perplexity’s Incognito Mode. It further alleges that personally identifiable information – such as email addresses – and highly sensitive content (financial, legal, health-related conversations) were transmitted alongside chat transcripts.

The proposed class would cover certain US users whose chats were allegedly shared from December 7, 2022, to February 4, 2026, with an additional subclass for California residents. The plaintiff argues that this conduct violates state and federal privacy and wiretapping laws and seeks injunctive relief, statutory and punitive damages, and disgorgement of profits.

3. Why this matters

If even part of these allegations is accurate, this case strikes at the core promise of AI assistants: that you can ask anything, however intimate, in a semi-private space. Users treat AI chatbots like a cross between a therapist, a lawyer, and a search engine. That trust is not just emotional; it is commercial infrastructure. Break it, and the whole category takes a reputational hit.

The winners, at least in the short term, are competing AI providers that can plausibly claim stricter data separation – particularly players positioning themselves as privacy-first or subscription-based rather than ad-driven. The losers are not only Perplexity and its investors, but also Google and Meta, who once again look like invisible data plumbing behind almost everything online.

The alleged behavior also undercuts the meaning of privacy controls. An Incognito Mode that does not prevent third‑party tracking is more than a UX flaw; it is a legal liability. We have already seen this movie with the litigation around Chrome’s Incognito mode, where courts scrutinized what a “private” label reasonably leads users to expect. If Perplexity really shared chats plus identifiers while marketing a mode that suggests anonymity, it invites accusations of deceptive design.

More broadly, the case highlights a structural tension: AI companies want rich, real-world data to improve models and monetise usage, but they are building on top of a web advertising stack that was never designed for deeply sensitive conversational logs. The result is a Frankenstein mix of cutting-edge AI and decade-old tracking tricks.

4. The bigger picture

This lawsuit slots into a wider pattern: AI products shipping fast, with privacy architecture bolted on later – if at all.

We have already seen:

  • ChatGPT conversations surfacing in legal discovery and being exposed via search and analytics tools.
  • Multiple healthcare and telemedicine providers sued in the US for embedding Meta Pixel or Google trackers on patient portals.
  • Ongoing debates about whether data sent to analytics platforms constitutes a “sale” or “sharing” of personal data under various privacy laws.

The conduct alleged against Perplexity is in some ways worse than simple web tracking, because conversational AI invites users to upload documents, lab results, contracts, or detailed life stories. The informational density of one AI chat can exceed dozens of standard web page views.

Competitively, the industry is split. On one side, ad-funded giants like Google and Meta are weaving AI into their existing tracking ecosystems. On the other, subscription-focused tools (from productivity assistants to self-hosted models) are pitching data minimisation as a selling point. If courts treat AI chat logs as especially sensitive – closer to medical or legal records than random browsing – that second camp gains a powerful tailwind.

Historically, tech companies have tried to resolve privacy disputes with narrow settlements and minor UI tweaks. But AI may be different, because misuse can feel viscerally harmful: a mistargeted health ad based on an AI conversation is far more unsettling than a generic cookie-based banner.

5. The European / regional angle

For European users, the Perplexity lawsuit is a loud reminder that US-based AI tools are still wired into US ad ecosystems – and therefore into business models European regulators have been trying to tame for a decade.

Under the GDPR, sending granular chat transcripts containing health or financial details to third parties for advertising or analytics would almost certainly involve processing special-category data, which requires explicit, informed consent and strict safeguards. The allegation that Perplexity hides its privacy policy link and never obtains explicit agreement would be toxic in an EU context.

Supervisory authorities across the EU have already shown they are willing to go after AI services. Italy briefly blocked ChatGPT in 2023 over transparency and legal-basis issues. Several data protection authorities have questioned whether US analytics tools can be used at all without breaching cross-border data transfer rules.

Add to this the Digital Services Act and the upcoming enforcement phases of the EU AI Act, which push for transparency, risk management, and restrictions around profiling. An AI search engine quietly piping rich, sensitive content to ad networks is exactly the kind of pattern that European regulators are gearing up to challenge.

For European companies building AI products, the message is clear: copying Silicon Valley’s “instrument everything and ask forgiveness later” approach is no longer an option. Data minimisation, on-device processing where possible, and truly optional analytics will become competitive advantages, not just compliance chores.

6. Looking ahead

Legally, the first milestone will be whether the US court allows the proposed class to proceed. Discovery could expose internal discussions about Incognito Mode, tracking choices, and what executives believed users understood. That evidence will matter far beyond this one case, because it will help define what counts as reasonable user expectations in AI interfaces.

Expect several possible trajectories:

  • A negotiated settlement with design changes: clearer disclosures, a genuine tracking-free mode, and tighter restrictions on what is sent to Google and Meta.
  • Copycat lawsuits against other AI providers using similar trackers, especially where products invite uploads of medical, financial, or legal documents.
  • More aggressive guidance from regulators (in the US, FTC; in Europe, DPAs and the European Data Protection Board) on how existing privacy law applies to AI chat logs.

For users, the short-term advice is uncomfortable but simple: assume that anything you type into a cloud-based AI tool may be logged, analysed, and potentially shared. That does not mean never using these tools – but it does mean thinking twice before pasting your tax return, diagnosis, or client list into a random chatbot.

In the medium term, we should expect a market split: high-friction but truly private assistants (self-hosted, enterprise, or paid) and ultra-convenient but heavily instrumented free tools. Where Perplexity lands after this case will be a test of which path the mainstream is willing to pay for.

7. The bottom line

The Perplexity Incognito lawsuit is not just about one company misusing a label; it is about whether AI assistants inherit the worst habits of the ad-tech era or force the industry to grow up. If courts and regulators treat conversational data as uniquely sensitive, AI firms will have to rebuild their tracking stacks from first principles. Until then, users should treat “private” modes in AI products as marketing claims, not guarantees – and ask themselves whether convenience is worth the trail they leave behind.
