1. Headline & intro
Artificial General Intelligence is usually discussed as a sci‑fi event: one day it “arrives” and everything changes. Databricks co‑founder Matei Zaharia has just thrown a grenade into that narrative by saying: AGI is here already — just not in a human shape. Coming from the inventor of Apache Spark and newly minted ACM Prize in Computing laureate, this isn’t a casual hot take. It’s a signal of where the builders of AI infrastructure think we really are on the curve. In this piece, we’ll unpack what his claim actually means, why it matters for enterprises and researchers, and how Europe should read this moment.
2. The news in brief
According to TechCrunch, Databricks co‑founder and CTO Matei Zaharia has been awarded the 2026 ACM Prize in Computing, one of the most prestigious recognitions in computer science. The Association for Computing Machinery is honouring him primarily for Apache Spark, the open‑source big data engine he initiated during his PhD at UC Berkeley in 2009, which later became the core of Databricks.
Databricks has since grown into a major data and AI platform, raising over $20 billion in funding and reaching a reported $134 billion valuation with $5.4 billion in revenue. Zaharia, who still teaches as an associate professor at Berkeley, is donating the $250,000 cash component of the prize to charity.
Speaking to TechCrunch, he argued that AGI – artificial general intelligence – already exists, but not in a way that matches human abilities or intuitions. He warned against treating AI systems as if they were people, highlighting security risks around agentic tools such as the popular assistant OpenClaw. He also emphasised his excitement about AI as a research and engineering tool, from scientific simulations to automated information synthesis.
3. Why this matters
Zaharia’s award is not just a personal milestone; it is a symbolic passing of the torch from the “big data” decade to the “AI+agents” decade.
The winners:
- Infrastructure platforms like Databricks gain legitimacy as the substrate on which the next wave of AI will run. Spark did for data what CUDA did for GPUs: turned raw capability into a usable ecosystem. The ACM prize effectively says that layer is now as historically important as operating systems or relational databases.
- Enterprises with messy data win indirectly. Zaharia’s vision of AI that truly understands and connects internal information assumes you have a coherent data backbone. Vendors that can offer that plus AI-native tooling will have enormous pricing power.
- Researchers and engineers stand to benefit if his forecast of “AI for research” pans out. Automated literature reviews, experimental design, multi‑modal simulations – these can compress years of work into weeks, especially in fields like biology and materials science.
The losers:
- Security‑last AI product design. Zaharia’s critique of agents like OpenClaw as a “security nightmare” should be read as a warning to startups racing to ship browser‑level or OS‑level agents without robust controls. The more we treat agents like trusted human assistants, the more attractive they become as attack surfaces.
- Narratives that place AGI on a distant horizon. If senior systems researchers declare AGI to be “already here” in a non‑anthropocentric sense, it undercuts both the doomer rhetoric (“the singularity is far but catastrophic”) and the complacent one (“we have plenty of time”).
The immediate implication: the frontier debate is shifting from "will we reach AGI?" to "how do we productise, govern and secure the general‑purpose systems we already have?" And that change benefits exactly the kind of company Zaharia helped build: platforms that sit between raw models and real‑world workflows.
4. The bigger picture
Zaharia’s comments tap into three deeper trends.
1. From model‑centric to system‑centric AI.
The first wave of excitement around GPT‑4 and its peers was model‑obsessed: bigger context windows, more parameters, new benchmarks. The current wave is about orchestration – agents that browse, code, transact, and operate tools. OpenClaw is just one emblem of this shift. In that sense, calling AGI “already here” reflects the fact that, when you wrap today’s models in tools and data, they behave generally enough to automate wide swathes of cognitive work.
2. The return of boring infrastructure.
Spark was the poster child of the last hype cycle: big data. Today, Databricks, Snowflake, and the hyperscalers are quietly waging a war to become the default data+AI substrate. The ACM prize reminds us that revolutions are enabled by unsexy engineering: scheduling, storage formats, fault tolerance, lineage. If AI becomes the new operating system for knowledge work, companies like Databricks will be its file systems and process schedulers.
3. Automation of research itself.
Zaharia’s excitement around AI‑powered research – from molecular simulations to interpreting sensor data beyond human perception (radio, microwaves, etc.) – echoes a wider pattern. Over the past few years, we’ve seen language models draft code, propose experiments, and mine scientific literature. The frontier is no longer just chatbots; it is “self‑driving R&D”, where models design, run and interpret experiments with humans in the loop.
Compared to consumer giants like OpenAI, Anthropic, or Google, Databricks sits lower in the stack. But that may prove durable. Models come and go; data platforms, once embedded, linger for decades. Zaharia’s prize and his AGI framing both push the narrative away from the latest model release and toward the long‑term power of whoever owns the data and orchestration layer.
5. The European / regional angle
For Europe, the story here is less about one US founder and more about the kind of AI universe his vision implies.
Zaharia’s claim that AGI already exists – as a diffuse capability stitched from models, tools and data – fits surprisingly well with the EU AI Act’s systems‑oriented approach. The law doesn’t wait for a sci‑fi AGI to emerge; it regulates based on risk, data governance, and use cases. If “general” systems are already quietly embedded in office suites, CRM platforms and research tools, then European regulators are right to focus on context and deployment rather than a mythical AGI threshold.
At the same time, Zaharia’s warning about agents that act like human assistants is a direct challenge to European enterprises, which are both privacy‑sensitive and procurement‑conservative. A browser agent that can read your intranet, access your bank, and click around your systems is a GDPR headache wrapped in a social‑engineering dream.
This opens an opportunity for European vendors – from cloud providers like OVHcloud and Deutsche Telekom to AI players like Mistral AI or Aleph Alpha – to differentiate on controllability and auditability. European CIOs increasingly want three things: data residency, explainable policies, and strong identity controls for agents. Platforms inspired by Zaharia’s data‑first design but built around EU norms could be powerful.
And for Europe’s research institutions – from CERN to national labs and universities – Zaharia’s “AI for research” vision is a wake‑up call: either build or adopt such tooling aggressively, or watch the centre of gravity of scientific discovery drift even further toward US‑centric stacks.
6. Looking ahead
If we take Zaharia seriously, the AGI debate is about to become much less theatrical and much more operational.
Over the next 2–3 years, expect to see:
- Agent stacks professionalise. Today’s browser‑automation assistants are largely consumer toys. The next generation will integrate with corporate identity, policy engines, and audit logs – or they won’t be allowed anywhere near serious data.
- Regulators grapple with “boring AGI”. The most powerful systems may be invisible layers inside office suites, CRM tools, IDEs and lab equipment, not sci‑fi robots. Lawmakers will need to decide when a mesh of tools and models counts as a high‑risk AI system.
- Research workflows rewritten. PhD students and engineers will increasingly start from “ask the lab AI assistant” before touching PubMed, arXiv or Stack Overflow. The real competition won’t be human vs AI, but researcher+AI vs researcher alone.
There are open questions:
- If AGI is “already here” in some sense, who is accountable when it makes general‑purpose mistakes across domains?
- Will enterprises double down on US‑centric platforms like Databricks, or will regulatory pressure and sovereignty concerns push them to European stacks?
- Can we design agent UX that makes it impossible to confuse an AI with a trusted human assistant, without losing usability?
For readers, the practical takeaway is simple: stop thinking about AGI as a calendar date. Start evaluating the systems you already use in terms of how generally they can act on your behalf – and how well they’re constrained.
7. The bottom line
Matei Zaharia’s ACM prize is a well‑deserved nod to the quiet engineering that made the AI boom possible. His provocation that “AGI is here already” isn’t marketing; it’s a reminder that today’s systems are far more general – and more embedded in critical workflows – than our human‑centric vocabulary admits. The question for Europe and the wider tech world is no longer if AGI will arrive, but who will own, regulate, and secure the very general systems that are already running under the hood of our daily work.