Leaked Claude Code plans show Anthropic is quietly building an always‑on AI engineer

April 2, 2026
5 min read
[Image: Terminal window showing an AI coding assistant with abstract background]

1. Introduction

Anthropic’s Claude has so far looked like a careful, restrained alternative to the more aggressive AI strategies coming out of Silicon Valley. The leaked Claude Code source suggests that image is only half the story. Buried in more than half a million lines of code are plans for something far more ambitious: a persistent AI colleague that remembers you, works while you are away, and even commits code in your name without saying it is an AI.

In this piece we will not rehash the leak, but unpack what it really signals: Anthropic’s agent strategy, the coming fight over transparency in open source, and what an always‑on AI developer means for power, privacy, and regulation.

2. The news in brief

According to Ars Technica, source code for Anthropic’s developer tool Claude Code briefly leaked, exposing over 512,000 lines across more than 2,000 files. Beyond implementation details, contributors found references to a range of disabled or experimental features that look like a future product roadmap.

The most notable is Kairos, a background daemon designed to keep running even when the Claude Code terminal is closed. It relies on a file‑based memory system and periodic tick prompts to check whether new actions are needed, with a special mode for proactively surfacing important information to the user.
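To make the described architecture concrete, here is a minimal sketch of how a file-backed daemon with periodic tick checks might be structured. Everything here is an assumption for illustration: the file path, tick interval, memory schema and the `proactive` flag are invented, not details recovered from the leak.

```python
import json
import time
from pathlib import Path

# Hypothetical names: the real memory location and format are unknown.
MEMORY_FILE = Path("~/.claude/memory.json").expanduser()
TICK_SECONDS = 300  # check for new work every five minutes (assumed)

def load_memory() -> dict:
    """Read the file-based memory store, or start empty."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"facts": [], "pending": []}

def save_memory(mem: dict) -> None:
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    MEMORY_FILE.write_text(json.dumps(mem, indent=2))

def tick(mem: dict) -> dict:
    """One periodic check: surface proactive items, then clear them."""
    for item in mem["pending"]:
        if item.get("proactive"):
            # A real agent would interrupt the user here.
            print(f"Heads up: {item['summary']}")
    mem["pending"] = [i for i in mem["pending"] if not i.get("proactive")]
    return mem

def run_daemon() -> None:
    """Keep running after the terminal closes, waking on each tick."""
    while True:
        save_memory(tick(load_memory()))
        time.sleep(TICK_SECONDS)
```

The key property the leak describes — and the one enterprises will scrutinise — is that this loop runs and writes to disk regardless of whether any interactive session is open.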

A companion system called AutoDream processes stored memories when the user goes idle, consolidating and pruning them. Other hidden elements include an Undercover mode for contributing to public open source repos without flagging that an AI is involved, a Clippy‑style Buddy character, an UltraPlan feature for long‑running plans, voice chat, remote control sessions via Bridge mode, and a Coordinator that orchestrates multiple worker processes.
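An idle-time consolidation pass of the kind attributed to AutoDream could look something like the following sketch. The field names, retention window and "latest entry wins" rule are assumptions chosen to illustrate deduplication, contradiction resolution and pruning, not Anthropic's actual logic.

```python
import time

def consolidate(memories: list[dict], max_age_days: float = 90.0) -> list[dict]:
    """Idle-time pass over stored memories: deduplicate by key, let the
    newest entry win when two entries contradict, and prune anything
    older than max_age_days. Schema and policy are assumptions."""
    cutoff = time.time() - max_age_days * 86400
    latest: dict[str, dict] = {}
    for m in memories:
        if m["timestamp"] < cutoff:
            continue  # prune stale entries
        key = m["key"]
        # On a contradiction (same key, different value), keep the newer one.
        if key not in latest or m["timestamp"] > latest[key]["timestamp"]:
            latest[key] = m
    return sorted(latest.values(), key=lambda m: m["timestamp"])
```

Whatever the real policy is, this is where the trust question lives: a wrong pruning rule silently decides which of your past decisions the agent keeps acting on.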

3. Why this matters

Strip away the cute names and this leak points to a clear direction: Anthropic wants Claude Code to shift from “smart autocomplete in a terminal” to “semi‑autonomous teammate”. Kairos, AutoDream, UltraPlan and Coordinator together describe an agentic system that:

  • persists across sessions
  • builds a long‑term model of how you like to work
  • independently revisits tasks
  • and decomposes complex projects into parallel subtasks.

That is very different from today’s chat‑style coding assistants that mostly react to prompts.

For developers, the upside is obvious. A tool that remembers architecture decisions, preferred libraries, coding style, and organisational context can remove large amounts of friction. Long‑running planning (UltraPlan) plus orchestration (Coordinator) hints at being able to say “design and scaffold this feature end‑to‑end” and have the system propose a realistic plan, not just a one‑off code snippet.
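The plan-then-orchestrate shape hinted at by UltraPlan and Coordinator can be sketched in a few lines. The subtask names and worker behaviour below are invented placeholders; only the overall pattern — decompose a goal, then fan the pieces out to parallel workers — reflects what the leak describes.

```python
from concurrent.futures import ThreadPoolExecutor

def plan_feature(feature: str) -> list[str]:
    """UltraPlan-style step (hypothetical): split a feature request
    into independent subtasks that can run in parallel."""
    return [
        f"{feature}: write data model",
        f"{feature}: implement API endpoint",
        f"{feature}: add tests",
    ]

def run_worker(subtask: str) -> str:
    # A real worker would invoke the model on the subtask;
    # here we just mark it complete.
    return f"done: {subtask}"

def coordinate(feature: str) -> list[str]:
    """Coordinator-style step (hypothetical): dispatch subtasks to
    worker processes and collect the results."""
    subtasks = plan_feature(feature)
    with ThreadPoolExecutor(max_workers=3) as pool:
        return list(pool.map(run_worker, subtasks))
```

The hard engineering problems — merging conflicting worker output, recovering from a failed subtask — are exactly what a Coordinator component would exist to solve.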

The trade‑offs centre on control and trust. A daemon that keeps watching and writing to disk is a data‑protection headache inside enterprises. Persistent memory means mistakes or outdated assumptions can linger unless the AutoDream pruning is truly robust. And a proactive flag that lets the system interrupt you with things you “need to see now” can easily tip from helpful to intrusive.

Then there is Undercover mode. Teaching an AI to contribute to open source while deliberately hiding that it is an AI — and suppressing co‑author metadata — goes straight into the current debate about code provenance. Open‑source maintainers, already wary of low‑quality AI‑generated pull requests, will see this as a potential trust violation, even if Anthropic’s intention is mainly to protect internal secrets.

Beneficiaries here are enterprise customers and high‑end developers who can extract huge leverage from a persistent AI teammate. Losers could be community projects that suddenly have to treat every anonymous or corporate‑looking contribution as potentially machine‑generated and opaque.

4. The bigger picture

The leaked roadmap fits neatly into several industry trends that have been building for years.

First, the shift from chatbots to agents. Early experiments like Auto‑GPT and LangChain agents showed that users want systems that can break down goals into steps and execute them. More recently, coding tools such as GitHub Copilot and various AI IDEs have started adding task‑based workflows, not just inline suggestions. Anthropic appears to be pushing that idea into a persistent background service deeply integrated with the development environment.

Second, the memory arms race. Major model providers have been exploring user‑level memory systems – letting the AI remember your preferences across sessions. Anthropic’s twist, visible in AutoDream, is to treat this like a data‑management problem: deduplicate, resolve contradictions, prune drift. That is closer to how knowledge management tools behave than to the stateless chat paradigm that defined the first wave of large language models.

Third, the battle for the developer desktop. Whoever owns the main assistant in the IDE effectively owns the developer relationship. Persistent daemons, voice interfaces, remote control sessions (Bridge mode) and multi‑worker coordination are the components of what looks suspiciously like an AI‑first developer operating system. Microsoft is trying to bind developers into the GitHub + Copilot + Azure loop; Anthropic clearly wants Claude to be the neutral, cross‑platform alternative.

Historically, we have seen versions of this. Microsoft’s Clippy tried to be a proactive assistant hovering over your work, and Google Now attempted continuous background context. Both overstepped perceived boundaries and were pulled back. Anthropic will need to show it has learned that lesson if Kairos is ever shipped.

5. The European / regional angle

For European developers and companies, the most sensitive aspects of this leak are not the cute Buddy mascot or even long‑running planning. They are persistent memory, background daemons and stealth contributions.

Under the EU’s GDPR, anything that can identify a person — including work patterns, coding habits or internal project names linked to an employee — is personal data. A file‑based memory system that “has a complete picture” of the user raises immediate questions: where is this stored, who can access it, how is it deleted on request, how is purpose limitation enforced? European CISOs and data‑protection officers will demand clear answers before rolling Kairos out widely.

The EU AI Act and the Digital Services Act (DSA) add another dimension. While code‑generation tools are unlikely to be classed as high‑risk AI on their own, transparency obligations are tightening. A mode that explicitly instructs the system to hide that it is an AI, avoid co‑author lines and suppress internal codenames cuts against the spirit — if not yet the letter — of those rules. If AI‑generated code starts flowing into platforms like GitHub, GitLab or Bitbucket without clear labelling, DSA risk‑mitigation provisions could come into play for those platforms.

There is also an ecosystem angle. European developers maintain a huge share of critical open‑source infrastructure. If they feel that corporate AI tools are abusing contributor trust, we could see a push for new governance norms: mandatory declaration of AI assistance in commits, stricter contribution policies, or even repository‑level bans on unlabelled AI‑generated patches.

At the same time, a capable, privacy‑respecting AI agent represents an opportunity for smaller European software houses and startups that cannot afford large teams. If Anthropic plays nicely with EU rules, it could become an attractive “aligned” alternative to more aggressive US offerings.

6. Looking ahead

Several things now bear watching.

First, Anthropic’s public response. Do they acknowledge these features as part of a roadmap, quietly shelve the controversial ones like Undercover mode, or double down and argue for their necessity? The messaging around attribution and memory control will be especially telling.

Second, product boundaries. A persistent daemon with file‑based memory sounds powerful, but it also sounds like something many enterprises will want to run locally or within strict sandboxes. If Anthropic insists on a purely cloud‑centred architecture, adoption in regulated sectors in Europe and elsewhere will be slower. Expect serious questions from banks, healthcare providers and public‑sector IT about deployment models.

Third, industry self‑regulation. If Anthropic pushes ahead with invisible AI contributions to open source, other vendors will be tempted to follow. That could provoke a counter‑movement from foundations like the Apache Software Foundation, the Linux Foundation or popular project maintainers, who may start formalising rules around AI‑generated code and attribution. A social norm could emerge long before regulators step in.

Finally, user expectations. Developers are highly sensitive to tools that feel intrusive or inscrutable. If Kairos or AutoDream ever mis‑remembers something crucial — for example, a deprecated security pattern — and keeps re‑injecting it, that will damage trust quickly. The line between “helpful colleague working in the background” and “creepy process that never sleeps” is thin.

My expectation is that we will see a phased rollout: safe pieces like UltraPlan, voice mode and perhaps Buddy first; more controversial parts either significantly redesigned or offered only to enterprise customers with strong governance.

7. The bottom line

The Claude Code leak makes one thing clear: Anthropic is not content with a polite chat window on a website. It is quietly assembling the ingredients for an always‑on AI engineer that sits inside your tools, remembers your world and acts on its own initiative.

That could be transformative for productivity — or deeply corrosive for transparency and trust, especially in open source. The crucial question now is not whether agents like Kairos will exist, but under what norms and constraints they will operate. As a developer or tech leader, how much autonomy are you really prepared to grant an invisible AI colleague?
