Uber’s ‘Dara AI’ Shows What Happens When the Boss Becomes a Model
Uber engineers have quietly built an internal chatbot that imitates their own CEO, Dara Khosrowshahi. On the surface it’s a fun anecdote from a podcast. In reality, it’s an early glimpse of something much bigger: AI not only writing code and emails, but actively shaping how decisions are framed, escalated and justified inside companies. When the boss is turned into a model, power doesn’t disappear — it gets encoded. In this piece, we’ll unpack what Uber’s experiment really signals for engineering culture, management, and especially for heavily regulated European markets watching algorithmic management with growing unease.
The news in brief
According to TechCrunch, Uber CEO Dara Khosrowshahi recently revealed on The Diary of a CEO podcast that some internal engineering teams have created an AI version of him, informally known as “Dara AI.”
Engineers reportedly use this chatbot to rehearse presentations and proposals before meeting Khosrowshahi in person. The idea: if your deck can survive questioning from the simulated Dara, you’re better prepared for the real one.
Khosrowshahi also said that about 90% of Uber’s software engineers are now using AI tools in their work, and roughly 30% are heavy or “power” users who are rethinking system architecture with AI assistance. He described the productivity gains from these tools as unlike anything he has seen before in his career.
The "Dara AI" detail had previously surfaced in Business Insider; TechCrunch picked it up as part of a broader discussion of AI adoption inside Uber.
Why this matters
Turning the CEO into a chatbot is more than a quirky engineering side project; it’s a window into how AI is beginning to rewrite internal power dynamics.
First, it changes how people prepare for leadership. Instead of guessing "What will Dara ask?" or relying on a friendly VP for coaching, teams can stress‑test their ideas against a digital stand‑in tuned to mimic his style, questions and priorities. That makes leadership more "scalable": one executive's mental model can influence hundreds of decisions without that executive ever being in the room.
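To make the idea concrete, here is a minimal, hypothetical sketch of how a "leadership persona" might be assembled. Uber has not published how "Dara AI" actually works; this simply shows the common pattern of rendering a leader's documented priorities and recurring questions into a system prompt that a chat model would then be steered by. The traits below are invented for illustration.

```python
# Hypothetical sketch only: Uber has not disclosed how "Dara AI" is built.
# The persona is rendered as a system prompt from documented traits; a real
# deployment would pass this prompt to an LLM chat API.

from dataclasses import dataclass, field


@dataclass
class LeaderPersona:
    name: str
    priorities: list = field(default_factory=list)      # recurring themes
    stock_questions: list = field(default_factory=list)  # questions they always ask

    def system_prompt(self) -> str:
        """Render the persona as a system prompt for a chat model."""
        lines = [
            f"You are a rehearsal stand-in for {self.name}.",
            "Challenge the proposal the way this leader would.",
            "Priorities: " + "; ".join(self.priorities),
            "Typical questions: " + " | ".join(self.stock_questions),
        ]
        return "\n".join(lines)


# Invented example traits -- not Khosrowshahi's actual views.
dara = LeaderPersona(
    name="Dara (simulated)",
    priorities=["unit economics at scale", "platform safety"],
    stock_questions=["What does this cost at scale?", "Who owns the risk?"],
)

prompt = dara.system_prompt()
print(prompt)
```

The interesting design choice is that everything culture‑specific lives in data, not code: swap in a different leader's traits and the same scaffolding produces a different reviewer, which is exactly why this pattern generalizes so quickly inside an organization.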
The winners are obvious: ambitious teams that learn to speak the model’s language will move faster and land more approvals. Junior staff get a safe environment to practice high‑stakes conversations. And for the real Dara, meetings may become sharper and more focused because the groundwork has already been done.
But there are hidden costs. If the AI is trained to reflect the current CEO’s preferences, you risk hard‑coding today’s biases and blind spots into tomorrow’s decisions. People stop challenging assumptions and instead optimize for “what the Dara model likes.” Diversity of thought can quietly erode.
It also raises a question about middle management. If teams increasingly prepare, refine and even partially negotiate decisions with AI agents, some of the traditional coaching and filtering work that managers do is displaced. The same company that pioneered algorithmic management for drivers may now be experimenting with a softer, internal version for white‑collar staff.
The bigger picture
Uber’s “Dara AI” fits into a broader shift: AI is moving from being a generic assistant (autocomplete for code, emails, documents) to becoming organization‑specific infrastructure.
We already see this trend elsewhere. GitHub Copilot changed how developers write code; Microsoft is wiring Copilot into Office 365 so it can summarize meetings and draft strategy documents based on internal data. Big tech firms are building internal LLMs fine‑tuned on company knowledge to answer policy questions or generate product ideas.
What’s different here is the personalization: instead of just “Uber AI,” it’s “Dara AI.” Not a neutral assistant, but a model of a specific leader. That’s a step toward “digital twins” of executives, product owners or even regulators, used to test scenarios before humans get involved.
We’ve seen crude versions of this before. Consultants build Excel models of “how the CFO thinks” to predict which projects will get funded. Sales teams have always coached each other on how to pitch to a particular manager. The novelty is that this intuition is now becoming a persistent, queryable system anyone can ping 24/7.
Competitively, this could become table stakes. If one company can encode its leadership logic in AI and propagate it instantly, it can align decision‑making faster than rivals who rely on slow, human‑only communication chains. On the other hand, companies that over‑optimize around their internal models may react slowly to external change because their AI keeps reinforcing yesterday’s worldview.
The long‑term industry direction seems clear: every large organization will have internal models tuned not just on documents, but on culture, tone and political reality. The open question is whether those models will empower people to challenge the status quo — or quietly enforce it.
The European / regional angle
For Europe, this story immediately touches on regulation and culture.
Under the EU AI Act, now being phased in, AI systems used to manage workers or influence their career prospects can fall into the "high‑risk" category, triggering strict requirements on transparency, risk assessment and human oversight. Today, Uber's "Dara AI" is described as a prep tool, not a formal evaluation mechanism. But the line can blur quickly: if proposals pre‑screened by the AI systematically get better outcomes, employees could reasonably see it as a de facto filter for advancement.
In GDPR‑heavy Europe, any such system tuned on internal communications has to reckon with data protection. What data about employees, performance or past conflicts is the model implicitly absorbing? Can workers ask what the “Dara AI” knows about them, or correct it? These are not theoretical questions for European works councils, especially in countries like Germany where co‑determination is strong and algorithmic decision‑making in HR is already under scrutiny.
There’s also a market angle. European enterprises — from Berlin scale‑ups to industrial giants and banks — are building their own internal copilots on top of open‑source or commercial models hosted in the EU. A “CEO twin” is the kind of feature that might appeal to fast‑moving startups in London or Tallinn, but it will meet a more skeptical audience in Frankfurt or Paris, where management culture is more formal and legal departments are conservative.
For European tech companies, the opportunity is clear: build tools that deliver similar preparation and coaching benefits without turning into opaque, quasi‑authoritative boss simulations.
Looking ahead
If engineers can build “Dara AI” as a side project, imagine what happens when companies fund this idea properly.
In the next two to three years, expect more “executive clones”: AI agents trained on a leader’s emails, town hall recordings, strategy docs and Q&A sessions. Official or not, employees will start to use them to sanity‑check proposals: “What would our COO say about this?”
Boards may like this: the leadership style of a successful CEO becomes, in theory, immortal. Even after an executive leaves, their model could be used to compare new strategies against the old guard’s philosophy. That’s powerful — and faintly dystopian.
Key questions to watch:
- Does “Dara AI” (or future variants) ever get wired into actual decision workflows — for example, scoring PR risk, product bets or budget requests?
- Will HR and legal approve models tuned on internal interpersonal dynamics, or push back hard in jurisdictions like the EU?
- Do employees start building their own AI replicas — of themselves — to automate their communication, creating a strange world where AI versions of people negotiate with AI versions of their bosses?
The biggest risk is subtle: if these models become the default lens through which ideas are judged, organizations may narrow their thinking without noticing. The biggest opportunity is the opposite — using these tools to expose blind spots and train people to challenge authority more effectively.
The bottom line
Uber’s “Dara AI” is not just a party trick; it’s a prototype of AI‑mediated management. Encoding the CEO’s mind into a model can sharpen preparation and accelerate alignment, but it also risks freezing a single worldview into the company’s operating system. As more firms build their own leadership clones, especially in heavily regulated Europe, the real test will be whether these systems amplify human judgment or quietly replace it. If your company could build an AI copy of your boss — or of you — would that make your work better, or just more predictable?