1. Headline & intro
Corporate language has always been full of buzzwords, but generative AI is turning those buzzwords into a global copy‑paste exercise. A single sentence pattern, the now infamous "not just X — Y" construction, is suddenly everywhere in earnings calls, press releases and strategy memos. On its own, that might be a harmless stylistic fad. Combined with the quiet rollout of AI tools into every communications workflow, it's a warning sign: investors, regulators and employees are increasingly reading words that no human actually wrote. In this piece, we'll look at what this linguistic tic reveals about AI dependence, trust and the future of corporate voice.
2. The news in brief
TechCrunch, citing an analysis first reported by Barron's, reports that a specific rhetorical pattern favoured by large language models has exploded in corporate documents. Barron's drew on data from the market‑intelligence platform AlphaSense, scanning earnings reports, news releases and regulatory filings for constructions along the lines of "it's not only A — it's B."
The count: roughly 50 uses in 2023 versus more than 200 in 2025 in the dataset examined, an increase of more than fourfold in just two years. The phrasing now appears in materials from major technology and consulting firms, among others.
TechCrunch notes that this construction, along with heavy use of em dashes, has become a recognizable fingerprint of AI‑generated text, because these systems were trained on vast quantities of web writing where the pattern is common. While it’s impossible to prove each example was machine‑written, the spike strongly suggests growing reliance on generative AI in corporate communications.
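Barron's and AlphaSense have not published their exact query, but a corpus scan of this kind is straightforward to approximate. The sketch below is purely illustrative (an assumed pattern, not their actual method): a regular expression flags "not just/only X — it's Y" style constructions and counts them in a document.

```python
import re

# Illustrative only: Barron's/AlphaSense have not published their actual query.
# This regex approximates the "not just X — it's Y" construction, allowing
# "only"/"merely" variants and a dash or comma between the two halves.
PATTERN = re.compile(
    r"\b(?:it[’']?s\s+)?not\s+(?:just|only|merely)\s+"  # "(it's) not just"
    r"[^.;—-]{1,60}?"                                   # the "X" part
    r"\s*(?:—|–|--|,)\s*"                               # dash or comma
    r"(?:it[’']?s|but)\b",                              # "it's Y" / "but Y"
    re.IGNORECASE,
)

def count_pattern(text: str) -> int:
    """Count occurrences of the construction in one document."""
    return len(PATTERN.findall(text))

sample = (
    "This is not just a product launch — it's a platform shift. "
    "Results were strong. It's not only growth, but durable growth."
)
print(count_pattern(sample))  # → 2
```

Run per filing and grouped by year, a count like this is all a trend chart of the kind Barron's describes requires; the hard part is the corpus access, not the pattern matching.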
3. Why this matters
The sudden spread of one clumsy sentence pattern would be a curiosity if it weren’t attached to something much larger: the quiet replacement of human corporate voice with machine‑averaged prose.
Who benefits?
Companies and communications teams gain speed and volume. Drafting a CEO blog post, a product announcement, or talking points for an earnings call now takes minutes instead of days. Junior PR staff can ship “good enough” text without deep domain knowledge. For small and mid‑sized firms, AI levels the playing field with the slick comms operations of global giants.
Who loses?
Shareholders, regulators and employees lose clarity about what’s authentic. If the results section of an annual report is massaged by an AI trained on marketing copy, risk language tends to soften, nuance disappears and everything starts to sound like a LinkedIn post. Over time, this erodes trust: readers sense the tone is synthetic even when the facts are accurate.
There is also a governance problem. Financial disclosures and regulatory filings are not lifestyle blogs; they sit at the heart of market integrity. If companies are using generative AI there without policies, audits and clear human accountability, they risk introducing subtle errors, over‑optimistic framing or culturally biased language into legally binding documents.
Finally, there is a brand problem. True corporate differentiation comes from distinctive voice and framing. When dozens of companies rely on the same AI tools trained on the same corpora, their language converges. The "not just X — Y" epidemic is just the most visible symptom of that convergence.
4. The bigger picture
This isn’t happening in isolation; it fits into several broader shifts in how AI is reshaping language.
First, we are seeing style collapse. Large language models, by design, average across countless examples. They’re excellent at producing smooth, polite, anodyne text – exactly what legal and PR teams love – but they tend to push everything toward the same global median voice. The more organizations depend on them, the more public language flattens.
Second, there is an arms race between detection and disguise. Researchers and startups are building tools to spot machine‑generated text, often using telltale patterns similar to the one highlighted by Barron’s. At the same time, AI vendors are adding controls to vary tone, mimic specific authors or explicitly “sound less like AI.” Ironically, that usually means the tools imitate human clichés even harder.
Third, this development is part of a longer history of templated corporate writing. Investor‑relations software, canned press‑release templates and ghostwritten op‑eds have been common for decades. The difference now is scale and opacity. What used to be a manual, expensive process executed by professionals is becoming a cheap, invisible background service embedded in office suites and email clients.
Compared with previous automation waves – such as spell‑check, grammar tools or email auto‑complete – generative AI doesn’t just help polish human sentences; it proposes the underlying argument and framing. That changes the power balance inside organizations: whoever controls the prompt and the default templates has disproportionate influence over how a company presents reality.
5. The European / regional angle
For European companies, this trend intersects directly with a tightening regulatory landscape.
Under the EU AI Act, providers and deployers of certain AI systems will face transparency and risk‑management obligations. Corporate reporting is unlikely to be classified as "high‑risk" by default, but once AI tools permeate governance, risk and compliance workflows, boards will be expected to understand and control how those tools are used.
Combine that with existing frameworks like GDPR and the Market Abuse Regulation, and a clear message emerges: if AI‑generated wording leads to misleading statements, sloppy disclosure of personal data, or selective communication to investors, regulators such as ESMA and national authorities will not accept “the AI wrote it” as an excuse.
There is also a linguistic dimension that is especially acute in Europe. Most frontier models are trained primarily on English. When European firms use them to draft German, Spanish, Slovenian or Croatian disclosures, they often translate AI‑generated English marketing language almost verbatim. This amplifies the sameness problem and can result in tone‑deaf phrasing that clashes with local business culture.
For European communications teams, the challenge is to use AI as a drafting assistant while preserving local nuance, legal precision and cultural expectations around modesty, accountability and formality.
6. Looking ahead
Over the next two to three years, several developments are likely.
- Policy hardening inside companies. Most large firms will move from experimentation to formal rules: where AI can be used, mandatory human review steps, logging of prompts and outputs for sensitive documents, and sign‑off from legal or compliance.
- New tools focused on distinctiveness, not just fluency. We’ll see models fine‑tuned on a company’s historical communications to preserve style, plus “anti‑cliché” checkers that flag overused patterns – including the now notorious construction at the center of this story.
- Regulatory guidance. Financial and market regulators in the U.S. and EU will almost certainly issue opinions on AI use in disclosures, even if they stop short of explicit bans. Expect soft‑law documents, best‑practice papers and, in some sectors, disclosure of AI assistance.
- Reader skepticism as a competitive factor. As audiences become better at spotting AI prose, authenticity could become a selling point. Companies that can credibly say “a real person wrote this” – and make that obvious in the style – may stand out.
The open questions are uncomfortable: Will boards demand to know which parts of the annual report were machine‑drafted? Will institutional investors start treating AI‑polished language as a red flag? Or will convenience win and everyone quietly accept synthetic corporate speech as the new normal?
7. The bottom line
The sudden rise of a single AI‑favoured sentence pattern in corporate filings is more than a meme; it’s a visible crack in the façade of authenticity around how companies communicate. Generative tools are already ghost‑writing the language that moves markets, shapes policy debates and frames layoffs as “opportunities.” The technology itself isn’t the villain – uncritical, opaque adoption is. The real question for executives and regulators is simple: if even your prose is outsourced to a statistical model, how much of your message still belongs to you?