AI language is quietly rewriting who holds power
Terms like “hallucinations,” “agents” and “tokens” sound harmless, even cute. But they now decide billion‑euro budgets, shape regulation and influence whether people trust or fear AI. When the vocabulary is opaque, power tilts toward the handful of companies and insiders who understand it — and can spin it.
TechCrunch’s new glossary of AI terms is more than a reader service; it’s a small intervention in this power imbalance. The interesting question isn’t what each term means (they cover that) but why these words exist, who picks them, and how they quietly steer public debate. That’s what we’ll unpack here.
The news in brief
According to TechCrunch, a group of its AI and startup reporters has published an online glossary explaining many of the most used — and most confusing — concepts in modern AI. The explainer, updated regularly, covers everything from high‑level ideas like artificial general intelligence (AGI) and deep learning to plumbing‑level notions such as tokens, weights, inference and memory caching.
The glossary also tackles industry buzzwords that have leaked into mainstream conversation, including “hallucination” for model errors, “AI agents” for systems that act on users’ behalf, and “RAMageddon” for the current memory‑chip crunch fueled by AI data centers. Each entry gives a concise, layperson‑oriented definition and situates the term within how current AI systems are built and deployed.
TechCrunch frames the project as an evergreen reference that will expand as new techniques and risks emerge, reflecting how quickly AI research and commercial products are evolving.
Why this matters: language is infrastructure
This kind of glossary looks modest, but it touches something fundamental: if you don’t control the words, you don’t control the conversation.
Who benefits?
- General users and workers finally get a map of the terrain. Understanding that tokens are a billing unit as much as a technical unit, or that “fine‑tuning” is basically retraining on narrower data, helps people ask better questions before buying or deploying AI (see the cost sketch after this list).
- Journalists and policymakers gain a shared baseline. When a vendor pitches an “agentic platform” or “reasoning model,” being able to decode whether that means genuine autonomy or just a scripted workflow is crucial.
- Smaller startups and open‑source projects benefit indirectly. The more buyers understand the basics, the harder it is for incumbents to hide behind mystique and overcharge for relatively standard capabilities.
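To make the token point concrete, here is a minimal back‑of‑the‑envelope sketch of how token counts turn into a bill. The per‑token prices and usage figures are invented placeholders for illustration, not any vendor’s actual rates.

```python
# A minimal sketch of why tokens double as a billing unit. The prices below are
# invented placeholders (not real vendor rates), just to show the arithmetic.

ILLUSTRATIVE_INPUT_PRICE_PER_1K = 0.002   # euros per 1,000 input tokens (assumed)
ILLUSTRATIVE_OUTPUT_PRICE_PER_1K = 0.006  # euros per 1,000 output tokens (assumed)

def estimate_call_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough cost of a single model call, given how many tokens go in and out."""
    return (
        input_tokens / 1_000 * ILLUSTRATIVE_INPUT_PRICE_PER_1K
        + output_tokens / 1_000 * ILLUSTRATIVE_OUTPUT_PRICE_PER_1K
    )

# Example: a support bot handling 10,000 conversations a day,
# each with roughly 1,500 input tokens and 300 output tokens.
per_call = estimate_call_cost(1_500, 300)
print(f"per call: €{per_call:.4f}")
print(f"per day:  €{per_call * 10_000:.2f}")
```

Small numbers per call, but they compound quickly at scale, which is exactly why procurement teams need to understand what a token is.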
Who loses?
- Vendors who rely on marketing fog. Once you see that an “AI agent” is often a workflow of API calls plus a language model (a simplified version is sketched after this list), it’s harder to justify sky‑high valuations.
- Anyone trying to downplay risks. A neutral definition of “hallucination” reveals that the problem is structural — not a minor bug but a consequence of how current large language models are trained.
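To illustrate the “agent” point, here is a deliberately simplified sketch of the orchestration loop that often sits behind the label: a model call that picks the next step, ordinary functions exposed as “tools,” and a loop wiring them together. Every name here is hypothetical, and the model call is a stub rather than a real API.

```python
# A deliberately simplified sketch of what often hides behind the label "AI agent":
# a model call that picks the next step, plain functions exposed as "tools", and a
# bounded loop tying them together. Everything here is hypothetical;
# call_language_model is a stub, not a real API.

def call_language_model(prompt: str) -> dict:
    """Stand-in for a model API call that returns a structured 'next action'."""
    if "[tool result]" in prompt:  # the model already has what it needs
        return {"done": True, "action": None, "args": {}}
    return {"done": False, "action": "lookup_order", "args": {"order_id": "A-123"}}

def lookup_order(order_id: str) -> str:
    """An ordinary function the orchestrator exposes to the model as a 'tool'."""
    return f"Order {order_id}: shipped yesterday."

TOOLS = {"lookup_order": lookup_order}

def run_agent(user_request: str, max_steps: int = 3) -> str:
    """The 'agent': a bounded loop of model calls and tool calls, not an autonomous mind."""
    context = user_request
    for _ in range(max_steps):
        decision = call_language_model(context)
        if decision["done"]:
            break
        result = TOOLS[decision["action"]](**decision["args"])
        context += f"\n[tool result] {result}"
    return context

print(run_agent("Where is my order A-123?"))
```

Useful, certainly, but closer to workflow automation than to an autonomous employee; the word “agent” does a lot of marketing work in that gap.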
The immediate implication is a small but real shift in power from insiders to outsiders. AI has reached the point where it affects contracts, jobs and safety‑critical systems. At that stage, jargon stops being innocent and becomes a kind of soft gatekeeping. A public glossary chips away at that.
There’s also a subtler effect: by curating which concepts matter — AGI, agents, RAMageddon — TechCrunch is helping define which stories about AI feel important. That interpretive layer is as influential as the definitions themselves.
The bigger picture: a battle of narratives, not just models
TechCrunch’s glossary sits inside a broader trend: the politics of AI framing.
Over the past two years, we’ve seen:
- National bodies like NIST in the U.S. and the OECD publish their own AI glossaries and risk frameworks.
- Model providers (OpenAI, Anthropic, Google, Meta, Mistral) push carefully crafted terms in their blogs and model cards — “alignment,” “safety levels,” “frontier models.”
- Advocacy groups deliberately avoid some of that language, preferring plainer words like “deception,” “surveillance,” or “labor automation.”
Glossaries sound boring, but they lock in assumptions. Take a few examples from the TechCrunch selection:
- “Hallucination”: a metaphor that suggests a quirky human‑like episode, not a systematic fabrication problem in probabilistic text generators. Many researchers now argue we should say “fabrication” or “confabulation” instead.
- “AGI”: TechCrunch highlights that even leading labs disagree on the definition. That single admission undercuts a powerful hype driver: the idea that we are racing toward a clearly defined destination.
- “Agents”: Framing complex orchestration systems as “agents” nudges us to see them as semi‑independent actors, which in turn supports certain product visions (automated employees) and regulatory concerns (autonomous decision‑making).
Historically, we’ve been here before. Early Internet discourse revolved around mystical ideas of “cyberspace”; early crypto spoke of “coins” and “mining,” importing physical metaphors that were anything but neutral. Those framings shaped which business models and regulations felt natural.
AI is now going through its vocabulary‑solidification phase. Media‑driven glossaries like TechCrunch’s will sit alongside government and corporate ones. Over the next few years, expect clashes: is a system “assistive” or “surveillance‑enabling”? Is it a “co‑pilot” or a “management tool”? The answer often dictates whether workers or executives get the upper hand.
The European angle: literacy as a precondition for regulation
For European readers, this isn’t just about understanding Silicon Valley jargon; it’s about being able to read your own laws.
The EU AI Act, the Digital Services Act (DSA) and the Digital Markets Act (DMA) are thick with terminology: “general‑purpose AI systems,” “high‑risk AI,” “deployers,” “foundation models.” None of these map one‑to‑one onto industry buzzwords. If citizens, SMEs and even local regulators don’t grasp the basics of things like training, inference, fine‑tuning or transfer learning, enforcement will skew in favor of the best‑resourced players.
Most glossaries, including TechCrunch’s, are written in English and strongly shaped by U.S. industry practice. That creates two challenges in Europe:
- Language gap: many key AI terms do not yet have stable, intuitive equivalents in smaller European languages. That vacuum invites poor translations or direct imports from English, which can obscure meaning.
- Cultural gap: Europe’s policy debate is far more privacy‑ and worker‑centric than that of the U.S. A neutral‑sounding term like “AI agent” can imply very different things in a German works council discussion than in a San Francisco startup pitch.
European institutions are starting to respond with their own glossaries and guidelines, but media efforts still matter. When an outlet like TechCrunch normalizes precise definitions of “tokens,” “compute” or “RAMageddon,” it indirectly helps European CIOs and policymakers evaluate vendor claims — especially when those claims collide with GDPR obligations, the AI Act’s transparency rules or sector‑specific laws in health, finance and transport.
Ultimately, AI literacy is becoming a civic skill in Europe, not just a tech hobby. Glossaries are the grammar books of that new language.
Looking ahead: glossary wars and shifting meanings
What happens next is not just more terms, but more contested terms.
Expect three dynamics:
Standardisation vs. spin
Standards bodies, regulators and academic consortia will push toward formal definitions — for example, what legally counts as a “general‑purpose AI model” or a “high‑risk system.” At the same time, vendors will keep coining softer labels for products that might otherwise fall under stricter rules.
Renaming to manage risk perception
Today we say “hallucinations”; tomorrow, under regulatory pressure, documentation may switch to “unverified outputs” or “residual model error.” Don’t be surprised if some companies quietly update docs without changing architectures. Tracking how TechCrunch and others revise their glossary over time will be a useful tell.
Deeper technical concepts entering mainstream debate
Terms that feel niche today — “distillation,” “KV cache,” “reasoning tokens” — are economically important. They influence cloud bills, latency, and who can afford to run models at the edge in Europe. As enterprises scale AI deployments, expect CFOs and procurement teams to start speaking this language.
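To see why a plumbing‑level term like “KV cache” ends up influencing cloud bills, here is a rough arithmetic sketch. The model dimensions are illustrative assumptions for a mid‑sized transformer, not figures for any specific product.

```python
# A back-of-the-envelope sketch of why "KV cache" reaches cloud bills: during text
# generation, each token's key/value vectors are kept in accelerator memory for
# every layer. The model dimensions below are illustrative assumptions only.

N_LAYERS = 32        # assumed transformer depth
N_KV_HEADS = 32      # assumed (no grouped-query attention)
HEAD_DIM = 128       # assumed per-head dimension
BYTES_PER_VALUE = 2  # 16-bit floats

def kv_cache_bytes(context_tokens: int, batch_size: int = 1) -> int:
    """Memory needed to cache keys and values at a given context length."""
    per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE  # keys + values
    return per_token * context_tokens * batch_size

for ctx in (4_000, 32_000, 128_000):
    gib = kv_cache_bytes(ctx) / 1024**3
    print(f"{ctx:>7} tokens of context ≈ {gib:5.1f} GiB of KV cache per sequence")
```

Under these assumptions, cache memory grows linearly with context length, which is one reason long‑context features are priced and provisioned differently and why such “niche” terms increasingly belong in budget conversations.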
For readers, the opportunity is clear: spend a little time with a good glossary now and you’ll be able to cut through a lot of nonsense later — in sales meetings, public consultations, or internal AI strategy discussions.
The open questions are more political than technical: Who will become the de facto authority on AI vocabulary — big tech, regulators, standards bodies, or an ecosystem of independent media and civil‑society projects? And how do we ensure that low‑resource European languages don’t get left behind in this conceptual race?
The bottom line
A regularly updated AI glossary from a major tech publication is a small but meaningful step toward rebalancing power in the AI debate. It won’t solve hallucinations or hardware shortages, but it does arm non‑experts with the conceptual tools to ask harder questions. The next time a company pitches you an “agentic AGI‑ready copilot,” will you nod politely — or will you have the vocabulary to push back?



