Grammarly’s new “Expert Review” feature looks, at first glance, like a dream: feedback on your writing as if it came from great authors and top journalists. Look a bit closer and it turns out there are no experts, no reviews and no permission from the people whose reputations are being borrowed. This isn’t just an embarrassing UX choice. It’s a case study in how generative AI products are quietly weaponising trust — and why regulators, publishers and users should start paying much more attention.
In this piece, we’ll unpack what Grammarly actually launched, why it matters far beyond one writing app, and what it signals about the next regulatory battles around AI in Europe and beyond.
The news in brief
According to TechCrunch, Grammarly expanded its AI-powered capabilities in August 2025 with a feature called Expert Review. Inside the sidebar of its main writing assistant, users can request revision suggestions “from the perspective” of named subject‑matter experts.
As reported by Wired and The Verge, those “experts” include famous authors, public intellectuals and, more controversially, living tech journalists from outlets like The Verge, Wired, Bloomberg and The New York Times. The interface can frame the feedback as if it’s coming from these people directly.
None of the named individuals appear to be involved with the product, nor have they publicly endorsed Grammarly’s use of their names. A Grammarly executive told The Verge that the references are justified because the experts’ published work is public and widely cited. The company’s documentation adds a disclaimer stating that the named experts are not affiliated with Grammarly and do not endorse it.
A historian quoted by Wired argues that, given no experts are actually participating, calling this an “expert review” is essentially misleading.
Why this matters
On paper, Expert Review is just another UX flourish on top of a large language model. In practice, it’s a powerful trust-laundering machine.
Most users don’t read product manuals or footnote-sized disclaimers. What they see is a button labelled “Expert Review” and an interface that talks as if Kara Swisher, Timnit Gebru or a celebrated novelist has personally weighed in on their text. That framing implicitly borrows decades of hard‑earned reputation to make a probabilistic model feel authoritative.
The losers are obvious:
- The experts themselves, whose names become generic style presets, detached from consent, compensation or quality control.
- Newsrooms and publishers, whose brands depend on a clear boundary between their own journalism and external products that trade on their star reporters’ personas.
- Users, who may overestimate the reliability and ethics of the feedback because they think it channels recognised human judgement.
This is not the same as a neutral “rewrite this in the style of Hemingway” prompt. Grammarly is not just mimicking style; it is presenting feedback through a named persona and labelling the whole thing an expert review. That moves it closer to impersonation as a design pattern.
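Grammarly hasn’t published how Expert Review works under the hood, but features like this are almost always thin prompt scaffolding around a generic chat model. The Python sketch below is an assumption, not Grammarly’s code, and every name in it is hypothetical; it shows how a “persona” typically amounts to a string interpolated into a system prompt.

```python
# Hypothetical sketch of a persona-review wrapper. This is NOT Grammarly's
# (unpublished) implementation; it illustrates the common design pattern.

def build_persona_prompt(expert_name: str, draft: str) -> list[dict]:
    """Frame generic model output as feedback 'from' a named expert.

    The expert contributes nothing; their name is simply interpolated
    into the system prompt of an ordinary chat-completion request.
    """
    system = (
        f"You are a writing reviewer. Critique the user's draft in the voice "
        f"and editorial priorities of {expert_name}, as inferred from their "
        f"publicly available writing. Suggest concrete revisions."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": draft},
    ]

# The named person never sees the draft; only their name enters the pipeline.
messages = build_persona_prompt("a well-known tech columnist", "My opening paragraph...")
```

If the implementation really is this thin, the “expert” is interchangeable with any string, which is exactly the problem: the name does the persuasive work, not the model.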
At a time when media literacy is already under strain, normalising AI systems that casually cosplay as real people is risky. It blurs the line between advice and endorsement, between inspiration and appropriation. And it shows how willing mainstream AI products are to spend other people’s reputational capital to increase engagement.
The bigger picture
Grammarly’s move sits in a wider industry trend: AI as character, not just tool.
Over the last two years, we’ve seen:
- Voice-cloning services that let you speak “as” a celebrity or even your boss.
- Chatbots with pre‑packaged “personas” — from therapists to historical figures — that speak in the first person.
- Generative models marketed as capturing the “voice” of specific journalists, academics or influencers.
Major platforms have been experimenting with this too. Social and messaging apps add AI “friends” with names and avatars. Foundation model providers pitch domain‑specific agents designed to sound like consultants, lawyers or doctors — even when they are none of those things.
From a product‑manager’s perspective, this makes sense: humans relate to humans, so wrap the model in a human‑shaped interface. From a societal perspective, it’s hazardous. The more natural these personas feel, the harder it is for users to remember that there is no duty of care, no professional licence, no accountability behind the curtain.
We’ve been here before in a softer way. Old‑school Clippy tried to be a friendly paperclip; modern AI wants to be your favourite columnist. The leap from mascot to pseudo‑expert is where the ethical stakes rise.
Grammarly’s Expert Review simply pushes this logic to an uncomfortable extreme: it doesn’t just emulate generic expertise, it borrows the identity markers of real people. Doing so without consent is, at the very least, tone‑deaf in 2026, after years of artists, authors and actors protesting “in the style of X” training and deepfake uses of their likeness.
The incident is a warning shot: as AI products race to differentiate themselves, bolting famous names onto generic models will be a tempting shortcut. Expect many more clashes between product growth teams and the people whose reputations they treat as UX assets.
The European / regional angle
For European users and companies, Grammarly’s Expert Review is more than a curiosity from Silicon Valley; it’s a preview of the compliance headaches to come.
Under the EU AI Act, systems that interact with humans must be transparent about their artificial nature and avoid practices that manipulate users or obscure who is responsible. Presenting feedback “from” a real person who has never touched the product is exactly the sort of design choice regulators will scrutinise.
Then there’s GDPR. Writing style can, in some contexts, be personal data. If an AI system is explicitly built to approximate or profile the style of identifiable individuals, questions arise about lawful basis, purpose limitation and data subject rights — especially when those individuals never agreed to have their “voice” turned into a product feature.
The Digital Services Act and EU consumer‑protection rules also matter. Labelling something as an “expert review” when no expert is involved could be read as a misleading commercial practice, particularly if the UI nudges users to trust the output more than they otherwise would.
This opens a door for European alternatives. German tools like DeepL Write and LanguageTool, or smaller regional startups, can differentiate by aligning tightly with EU norms: explicit consent for any named personas, clear separation between generic AI suggestions and real human editors, and perhaps collective licensing deals with publishers.
For European media houses — from London to Ljubljana and Zagreb — this is a wake‑up call. If you don’t define how your journalists’ names and styles may be used in AI products, someone else will.
Looking ahead
What happens next is fairly predictable.
On the product side, Grammarly will likely dial back the riskier aspects of Expert Review. Expect softer branding (“perspectives inspired by…”), genericised personas (“the investigative journalist”, “the historian”) or, in the best case, opt‑in programmes where real experts are paid and given veto rights over how their names are used.
On the regulatory side, the timing is bad for this kind of experiment. As the AI Act transitions from text to enforcement, national authorities will be looking for test cases. A high‑profile consumer app that markets AI‑generated feedback as if it came from unaffiliated humans is an inviting target for guidance, if not formal action.
Publishers and creator organisations are also likely to react. We’ve already seen lawsuits around training data (Getty vs. Stability AI, the New York Times vs. OpenAI). The next wave may focus less on what models were trained on and more on how models are packaged: names, voices, styles and implied endorsements.
Users, meanwhile, should expect the interface of AI tools to become more legally conservative: more labels, clearer disclaimers, and perhaps a retreat from using real individuals as front‑end decoration.
The open questions are uncomfortable:
- Do individuals have enforceable rights over their “style” or “persona” when copied by a machine?
- How much anthropomorphism is acceptable before an AI system becomes unreasonably deceptive?
- And who gets to decide what counts as an “expert” in the first place?
The bottom line
Grammarly’s Expert Review isn’t a catastrophic scandal, but it is a revealing one. It shows how quickly generative AI products will spend other people’s reputation to make their models feel smarter and safer than they really are. If “expert” no longer means a real person taking real responsibility, the word becomes just another UI gimmick.
The real test for the industry — and for European regulators — is simple: will we insist that AI systems earn trust through performance and transparency, or let them borrow it from people who never agreed to the deal?