1. Headline & intro
Grammarly did not just ship a bad feature. It crossed a line that every writer, academic, and expert should care about: it treated their hard‑won reputations as just another free input for AI.
By turning real journalists and thinkers into synthetic editors without consent, Grammarly and its parent company Superhuman have walked straight into a legal and ethical minefield. The lawsuit filed by Julia Angwin is about money and rights, yes. But it is also about something tech companies have been quietly assuming for years: that your name, style and authority are just as harvestable as your data. In this piece, we unpack what this fight means for AI, for the creator economy and for Europe.
2. The news in brief
According to TechCrunch, Grammarly recently launched a subscription feature called Expert Review, priced at 144 dollars per year, that lets users get AI feedback styled as if it came from well‑known writers and experts. The tool invoked names such as novelist Stephen King, the late scientist Carl Sagan, and tech journalists including Kara Swisher and Julia Angwin.
TechCrunch reports that these people were never asked for permission. Angwin has now filed a class‑action lawsuit against Superhuman, which owns Grammarly, alleging violations of privacy and publicity rights for herself and the other individuals whose identities were simulated.
Critics who tested the feature, including Platformer founder Casey Newton, found the AI feedback bland and generic. After public backlash, Superhuman CEO Shishir Mehrotra said the feature had been disabled and issued an apology on LinkedIn, while still defending the underlying concept.
3. Why this matters
The instinct might be to laugh at yet another tone‑deaf AI feature that overpromises and underdelivers. But this dispute goes much deeper than a product misfire.
First, Grammarly tried to monetise something incredibly personal: the professional authority attached to a real human name. Unlike training on public text, which is already controversial, this is an explicit promise to users that they are getting feedback in the spirit of a particular person. That veers into the legal territory of endorsement and misappropriation of likeness, even if no photograph is shown.
Who benefits? In the short term, Grammarly hoped to move upmarket: 144 dollars a year for seemingly elite, personalised critique instead of generic grammar checks. Who loses? The writers and experts whose work built that perceived value, now turned into unpaid brand assets. If Angwin’s class action succeeds, damages could be significant, but the bigger cost for AI vendors is precedent: courts may finally draw a line around how far AI products can go when borrowing identity.
It also erodes trust. If an app can impersonate your favourite journalist or professor with a one‑click toggle, how do you know whether any digital interaction with a named expert is real? The more AI companies normalise these synthetic stand‑ins, the more they cheapen human expertise and blur accountability when advice is bad.
For the wider industry, this is a warning shot: the era of frictionless appropriation of persona is ending.
4. The bigger picture
Grammarly’s misstep is part of a clear pattern. Over the last two years, we have seen:
- Generative AI firms sued for training on books, news archives and code without licences.
- Deepfake celebrity ads and AI‑generated influencers popping up on social platforms.
- Voice assistants whose tone and cadence closely resemble well‑known actors, prompting public complaints.
Meta has been rolling out chatbots with celebrity‑inspired personas. OpenAI faced criticism over a chatbot voice that sounded uncomfortably similar to Scarlett Johansson. In all cases, companies have pushed the idea that users want parasocial intimacy with recognisable figures, and that AI can deliver it cheaply at scale.
Expert Review is that same strategy applied to editorial judgement. It says: why pay for a real editor or critic when you can have an algorithm that feels a bit like them, everywhere, all the time? From a Silicon Valley perspective, this looks efficient. From a labour and rights perspective, it looks like automated identity laundering.
There is also a historical echo. Advertising law in the United States has long punished unauthorised celebrity endorsements and even sound‑alike singers or look‑alike game show hosts. Those cases were about 30‑second TV spots; today we are talking about continuous interactive systems that can mimic style, tone and editorial posture across millions of documents.
If courts take those old principles seriously in an AI context, Grammarly will not be the last company to face legal heat for personality‑driven features.
5. The European and regional angle
From a European standpoint, this incident is a case study in what Brussels is trying to prevent.
Under the GDPR, a person’s name and professional identity are personal data. Simulating an identifiable person as an AI persona, and monetising that simulation, would likely be seen by many regulators as processing personal data without a valid legal basis. Add the EU AI Act, whose transparency and risk‑management obligations are now phasing in, and a Grammarly‑style feature starts to look radioactive on the continent.
Many EU states also have strong personality and image rights. In Germany or France, for example, using someone’s likeness or identity for commercial gain without consent can quickly turn into an expensive legal problem. An AI that tells users they are getting advice in the style of a specific journalist or scientist feels very close to that line.
For European media organisations, there is another layer. Outlets that already fear AI cannibalising their traffic now see their star writers repackaged as generic bots, further devaluing their brands. Expect publishers and unions in cities like Berlin, Madrid and Paris to watch this case closely and push for collective licensing or outright bans on unauthorised expert personas.
For European AI startups, however, there is opportunity: building tools that are explicitly rights‑respecting and consent‑based could become a differentiator, especially in a regulatory climate that is increasingly hostile to Silicon Valley’s move‑fast ethos.
6. Looking ahead
What happens next will hinge on how aggressively courts and regulators are willing to interpret identity rights in the age of AI.
In the United States, Angwin’s class action will likely focus on whether Grammarly’s use of real names and implied editorial styles constitutes a false endorsement and a commercial exploitation of persona without consent. Discovery could reveal how the company selected experts, what internal warnings were raised, and whether any legal review signed off on the approach.
Regardless of the outcome, expect copycat lawsuits. If you are a notable writer, professor or podcaster, why would you wait to discover that you, too, have been turned into an AI ghost employee? Insurers, already nervous about AI‑related liability, may start demanding stricter governance around any use of real identities in generative products.
On the product side, the most likely evolution is the rise of licensed, opt‑in expert personas. Think marketplaces where journalists, lawyers or coaches explicitly rent out their editorial style to AI tools for a fee, with clear labelling and contractual limits. That will not solve every ethical issue, but it at least reintroduces consent and compensation.
Meanwhile, regulators in the EU and UK will test how far existing data‑protection and consumer‑protection rules can stretch to cover AI impersonation before new legislation is needed. The next 18 to 24 months will be decisive in drawing boundaries: what counts as homage, what as parody, and what as theft.
7. The bottom line
Grammarly’s Expert Review was not just a clumsy feature; it was a glimpse of an AI future where human reputation is treated as a free resource to be skinned and resold. If courts and regulators do not push back, we will see many more synthetic experts standing in for the real ones. The open question for readers and creators alike is simple: how much of your identity are you willing to let software companies productise without ever asking you first?