weird-tech
3/5/2026

Grammarly’s New “Expert” AI Reviews Claim Feedback From Famous Authors—Even the Dead

Grammarly has rolled out an AI feature that emulates feedback from marquee authors—living and deceased—via a tool from a company now called Superhuman. It’s a clever product hook with thorny implications: consent, right of publicity, false endorsement, and the cultural cost of bottling literary voices into chatty bots.

Background

For more than a decade, Grammarly has sat in the browser margins of millions of writers—nudging commas, taming run-on sentences, and suggesting cleaner structures. As generative AI heated up, Grammarly pitched beyond spelling and grammar toward full-on drafting, rewriting, tone-shifting, and research assistance. The product’s evolution reflects a broader shift: writing tools are no longer passive. They opine, revise, and sometimes invent.

The next escalation is style. If large language models can imitate Hemingway’s brevity, Didion’s coolness, or Baldwin’s moral clarity, then why not invite those “voices” to critique your draft? It’s a seductive idea: personalized workshop notes from the greats. It’s also a legal and ethical briar patch.

In that context, Grammarly is now marketing an AI feature that delivers “expert” reviews styled after renowned authors—some alive, many not—using a tool built by a company that recently rebranded under the name Superhuman. The pitch: get higher-quality feedback, not just generic grammar nudges, with comments riffing in the spirit of a marquee writer.

That premise triggers urgent questions:

  • Did the authors (or their estates) consent to the use of their names or stylistic signatures?
  • Is this protected fair use, a violation of publicity rights, or false endorsement under trademark law?
  • What does it mean culturally to automate the authority of the dead?

What happened

Grammarly began offering an AI review experience that lets users seek critique “from” well-known writers via an engine developed by an external company now called Superhuman. Rather than simply proposing neutral edits, the feature frames comments and suggestions as if they originate from specific literary figures—both living authors and those long deceased. According to reporting, neither the living authors nor the estates of the deceased were asked for permission.

Functionally, this looks like a familiar LLM pattern: a base model is prompted and/or lightly tuned to respond in the style of a named person. On the surface, it’s a UX flourish. Under the hood, it is a bundle of contentious claims:

  • Using a real person’s name as a selector and a marketing asset
  • Evoking a person’s unique voice and critical posture as a product feature
  • Providing editorial judgments under that persona-like framing
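The persona-selection pattern described above can be sketched in a few lines. This is an illustrative reconstruction, not Grammarly's or Superhuman's actual implementation: the persona names, prompt text, and `build_review_request` helper are all hypothetical, and a real product would pass these messages to a chat-completion API.

```python
# Hedged sketch of the "respond in the style of X" pattern: a base model
# is steered by a system prompt keyed off a persona selector. All persona
# names and prompt strings below are invented for illustration.

PERSONAS = {
    "minimalist": (
        "You are a ruthless line editor. Prefer short declarative "
        "sentences; cut adverbs and filler."
    ),
    "lyrical": (
        "You are a critic attentive to rhythm and imagery. Flag flat "
        "phrasing and suggest more evocative alternatives."
    ),
}

def build_review_request(persona_key: str, draft: str) -> list[dict]:
    """Assemble chat messages framing feedback in a chosen style.

    Note: only style archetypes are used here -- no real author's name.
    """
    return [
        {"role": "system", "content": PERSONAS[persona_key]},
        {"role": "user", "content": f"Review this draft:\n\n{draft}"},
    ]

messages = build_review_request("minimalist", "It was a very truly dark night.")
```

The controversy is not this mechanic, which is standard prompting; it is swapping a real, named author into the selector and the marketing around it.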

The company behind the engine has recently adopted the name Superhuman (not to be confused with the long-standing email client). However the back-end is arranged, the frontline experience places Grammarly at the center of the controversy because it’s the distribution channel—and the brand millions of writers encounter daily.

Predictably, this triggered debate across creative and legal circles. Even before formal lawsuits (if any) emerge, the pressure points are clear: consent, compensation, reputation, and consumer confusion.

Why it’s controversial

1) Consent and right of publicity

In many U.S. states, individuals have a “right of publicity” that covers their name, likeness, and sometimes voice. Several states, including California and New York, extend these rights in different ways after death (California’s post-mortem right of publicity lasts 70 years). While literary “style” is murkier than a face or a voiceprint, advertising a feature that offers feedback “from” a real, named author looks less like abstract style study and more like commercial association.

  • For living authors, the feature risks implying endorsement, collaboration, or approval they never gave.
  • For deceased authors, estates may argue that the product trades on legacy and brand value without license—especially if the name is used prominently in marketing copy or UI selectors.

2) False endorsement and trademark confusion

Even if a company claims fair use for training, separate legal questions arise when it uses a person’s name in commerce. Under the Lanham Act, brands can get in trouble if marketing causes confusion about sponsorship or endorsement. An interface that lets you “get feedback from [Famous Author]” can be read as an endorsement—particularly when shown in proximity to payment prompts.

3) Copyright and the line between influence and imitation

Copyright doesn’t protect ideas, themes, or generic style. But it does protect original expression. When a model responds in a way that closely mirrors signature phrasings or critical dicta from specific works, there’s a risk of derivative copying—even if subtle. The ongoing court fights over training data (The New York Times v. OpenAI, multiple Authors Guild-led actions, and suits by individual writers and comedians) underscore that courts are still clarifying boundaries.

4) Moral rights and cultural stewardship

Outside the U.S., especially in parts of Europe, authors enjoy stronger “moral rights,” including the right to protect the integrity of their work and guard against derogatory treatment. Even where not legally enforceable, there’s a cultural expectation that an author’s name shouldn’t be slapped on outputs they never sanctioned.

There’s also a philosophical discomfort: we elevate the dead into algorithmic puppets, reanimating their “voice” to comment on modern topics they never confronted. That can easily distort legacies.

5) Consumer trust and the “zombie authority” problem

A key UX risk is the halo effect. Feedback that appears under a revered name will carry extra weight, no matter how generic the underlying model. Students, journalists, and marketers may over-index on that perceived authority, smoothing out dissenting voices and homogenizing style. Worse, users could cite “Hemingway liked my draft” in professional or academic contexts, undermining the meaning of feedback and peer review.

What’s different this time

We’ve seen style imitation utilities before: “write like X,” lyric generators channeling pop stars, and voice-cloning tools that parrot celebrities. What makes this case stand out is the packaging:

  • It’s framed as expert critique, not just stylistic pastiche.
  • It’s routed through a widely adopted productivity app, not a niche novelty website.
  • It explicitly trades on the cachet of real, named authors—no euphemisms, no “classics mode,” no anonymized archetypes.

That combination increases the chances of legal challenge and reputational blowback. It also sharpens the policy conversation: if this is allowed, where do we draw the line for teachers, journalists, scientists, or public intellectuals?

The business calculus

Why ship something so provocative?

  • Differentiation in a crowded market: With Microsoft, Google, Notion, and countless startups bundling AI writing, “get notes from your literary hero” is a memorable hook.
  • Perceived value: Users may tolerate monthly fees if the product delivers something that feels bespoke and elite.
  • Engagement loop: Persona-driven comments are sticky. People enjoy seeing how “Hemingway” or “[Beloved Living Writer]” would mark up their prose; it’s a gamified workshop.

But the risks are real:

  • Legal exposure across multiple doctrines (publicity, trademark, unfair competition, consumer protection, and, in some jurisdictions, moral rights)
  • Backlash from authors, unions, and publishers; calls for boycotts or opt-out registries
  • Regulatory scrutiny under emerging AI rules that require transparency about synthetic personas

How this intersects with current law and policy

  • Fair use for training data: U.S. courts haven’t settled whether scraping books to train LLMs is lawful at scale. Even if training were deemed fair use, that doesn’t automatically bless using a person’s name to sell outputs.
  • Deepfake and voice laws: States are moving fast on AI impersonation. Tennessee’s ELVIS Act targets unauthorized voice cloning; other states are drafting similar bills. While textual style isn’t voice, statutes are broadening, and legislators could expand coverage to written personas.
  • EU AI Act: The EU's AI regime requires clear disclosure when content is generated by AI and imposes stricter rules on systems that manipulate or impersonate individuals. A feature that suggests feedback “from” a real person will face tougher transparency demands in Europe.
  • Platform policies: App stores, ad networks, and payment providers increasingly police impersonation and misleading claims. If the UI materially confuses users about endorsement, distribution partners may step in even before courts do.

Ethical alternatives companies could adopt

  • License or collaborate with living authors. If you want a named persona, get permission and pay.
  • Use archetypes, not names. “The Minimalist Stylist,” “The Precision Editor,” or “The Lyrical Critic” can capture feedback patterns without appropriating identity.
  • Offer opt-outs for estates and public figures, with a public registry.
  • Make disclaimers unmissable in-product, not buried in footers. Clarify that no living author reviewed or endorses the output.
  • Provide provenance. Show when the model is using generalized patterns vs. quoting or paraphrasing identifiable works.
  • Support creators with revenue shares or funds tied to usage of particular personas or styles.
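The opt-out and licensing alternatives above imply a simple gating rule: a persona activates only if it is licensed and not on an opt-out registry. A minimal sketch, with an in-memory registry standing in for whatever consent database a company would actually maintain (all identifiers hypothetical):

```python
# Illustrative consent gate for persona activation. The registry entries
# and persona IDs are invented; a production system would query a real
# licensing/opt-out store.

from dataclasses import dataclass

@dataclass
class PersonaRecord:
    name: str
    licensed: bool   # author/estate signed a license
    opted_out: bool  # listed on a public opt-out registry

REGISTRY = {
    "archetype:minimalist": PersonaRecord("The Minimalist Stylist", True, False),
    "author:jane-doe": PersonaRecord("Jane Doe", False, True),
}

def can_activate(persona_id: str) -> bool:
    rec = REGISTRY.get(persona_id)
    if rec is None:
        return False  # unknown personas stay off by default
    return rec.licensed and not rec.opted_out
```

The design choice worth noting is the default: unknown or unlicensed personas are off unless explicitly cleared, rather than on unless someone objects.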

What this means for writers and educators

  • Treat persona-stamped comments as flavored heuristics, not gospel. Ask whether the advice serves your purpose and audience.
  • Keep process notes. If publishing professionally or academically, disclose if you used AI feedback, especially when framed as coming from a real person.
  • Beware homogenization. Leaning too hard on canonical “voices” can flatten originality. Use these tools as prompts, not templates.
  • Educators should update policy language. Clarify whether AI feedback is allowed and how to cite it. Consider teaching assignments that require reflective commentary on AI-generated critiques.

Key takeaways

  • Grammarly has introduced an AI review experience, powered by a tool from a company now called Superhuman, that frames feedback as if it’s coming from famous authors, both living and deceased.
  • The feature appears to use author names and stylistic signatures without permission, raising issues around right of publicity, false endorsement, copyright, and moral rights.
  • Packaging imitation as “expert” critique and distributing it through a mainstream writing assistant raises the stakes far beyond novelty style generators.
  • There are viable alternatives—licensing, archetypes, clear disclosures, and creator revenue-sharing—that could retain the educational value without appropriating identities.
  • Expect legal challenges, policy scrutiny, and louder demands for consent frameworks in generative AI.

What to watch next

  • Legal actions or demand letters from living authors or estates challenging the use of their names and personas.
  • Policy updates from Grammarly or Superhuman: opt-outs, renamed personas, or licensing announcements.
  • Regulatory movement: state-level impersonation statutes expanding beyond voice and likeness; EU guidance on AI-generated impersonation and disclosure.
  • Platform enforcement: app stores or enterprise security teams pushing back on features that could be construed as misleading endorsement.
  • Industry response: competitors choosing safer archetype-based UX or striking paid deals with specific authors.

FAQ

Does this mean AI was trained on full texts by these authors?

Possibly, but the exact training mix isn’t disclosed. Large models are typically trained on vast corpora that likely include public-domain works and, in some cases, copyrighted texts scraped from the web or licensed sources. Emulation can also be achieved through prompting without direct fine-tuning on a single author.

Is it legal to imitate a writing style?

Imitating style isn’t per se illegal. The problems arise when a product markets that imitation under a real person’s name, which may trigger right-of-publicity or false-endorsement claims, and when outputs veer into derivative copying of protected expression.

If the authors are dead, is that safer?

Not necessarily. Several jurisdictions recognize post-mortem publicity rights for decades. Estates manage licensing for commercial uses of names and personas, and they may challenge unauthorized uses—especially in marketing.

Would a disclaimer fix this?

Clear disclosures help but may not cure all issues. If the overall presentation causes consumers to believe a living author endorses the product or participated in making the feedback, disclaimers might be insufficient under false-endorsement rules.

Could this be educationally valuable?

Yes—framed carefully. Archetypal feedback modes inspired by schools of writing can teach technique without appropriating identities. Pairing AI critiques with citations, counterpoints, and human oversight can improve learning while preserving integrity.

What should companies building similar features do now?

  • Seek legal counsel across publicity, trademark, consumer protection, and copyright.
  • Pilot with archetypes; license named personas selectively and transparently.
  • Build consent and opt-out mechanisms, publish usage logs, and share revenue with creators where feasible.
  • Ship conspicuous disclosures and provenance signals within the UI.

Bottom line

Turning literary luminaries into chatty AI reviewers is the kind of product idea that dazzles in a demo and detonates in deployment. The convenience is real: crisp, actionable line edits that speak in a memorable voice. But names aren’t just labels; they’re livelihoods, legacies, and legal rights. Until the industry builds sturdier consent frameworks—and courts clarify the limits—treating real authors, living or dead, as plug-in personas is a shortcut laden with risk.

Source & original reading: https://www.wired.com/story/grammarly-is-offering-expert-ai-reviews-from-your-favorite-authors-dead-or-alive/