Grammarly’s AI “Expert Review” Faces a Class Action: What Persona-Based Advice Means for AI, Endorsements, and the Law
A new lawsuit targets Grammarly’s short‑lived “Expert Review” feature, which framed writing feedback as if it came from real-world authors and academics. The case could set important guardrails for AI personas, endorsements, and the right to publicity.
Background
Generative AI has created a new business language: assistants, copilots, and experts—synthetic voices that promise judgment, taste, and authority. In the race to make AI feel trustworthy, some companies have leaned on a powerful cue: the illusion of human endorsement. If a tool looks or reads like a known expert is reviewing your work, you’re more likely to believe its feedback.
That trust-building tactic is now colliding with hard questions about consent, attribution, and consumer protection. At the center of the latest dispute is Grammarly, a writing assistant widely used in classrooms, offices, and publishing workflows. The company recently piloted an AI feature called "Expert Review" that framed suggestions as though they came from established authors and academics. After public criticism and legal scrutiny, Grammarly shut the feature down. Now, the company faces a putative class action alleging that the product misused people's names and reputations without permission and misled users about who—if anyone—was actually reviewing their text.
The case is about more than one product misstep. It highlights a broader fault line in AI: When does an “AI persona” become a false endorsement? And how should tools disclose the difference between inspiration, simulation, and a real human’s sign-off?
What happened
- Grammarly rolled out an “Expert Review” capability that presented feedback in a voice framed as belonging to well-known writers and scholars. To the user, critiques and suggestions appeared as if they were personally authored or approved by those named experts.
- According to the complaint, the individuals whose names and personas were invoked did not consent. The suit claims this presentation implied a real-world relationship or endorsement that didn’t exist.
- As questions mounted, Grammarly shut the feature down and said it was reassessing the approach. While product pauses are common in fast-moving AI launches, the legal action shifts the debate from optics to liability.
The result is a test case for where the legal lines fall when AI borrows the aura of authority.
Why this is legally risky
Persona-driven AI touches multiple areas of law. Even without using a person’s face or voice, a product can trigger legal exposure if it trades on their identity or reputation.
- Right of publicity and name/likeness laws
- In many US states (notably California and New York), using someone’s name, image, likeness, or other distinctive persona for commercial gain without permission can violate statutory or common-law publicity rights.
- Courts have found liability not just for literal images, but also for evocative stand-ins—a lookalike, soundalike, or unmistakable persona that signals a specific individual.
- Recent legislative trends strengthen these protections. Tennessee’s 2024 ELVIS Act, for example, expressly protects a person’s voice from unauthorized AI cloning. While voice is one vector, the broader principle is the same: you can’t commercially leverage identity cues without consent.
- False endorsement and false association (Lanham Act)
- Presenting AI feedback as if it came from named experts can be read as an implied endorsement or affiliation. Under Section 43(a) of the Lanham Act, that can be actionable if it’s likely to cause consumer confusion.
- Disclaimers matter, but they have to be clear and conspicuous. If the main UI strongly signals “this person reviewed your work,” a faint footnote that “this is AI” won’t cure confusion.
- Deceptive or unfair trade practices
- State consumer-protection statutes generally prohibit misleading representations that are material to a consumer’s decision to use or pay for a service.
- The US Federal Trade Commission’s Endorsement Guides require that endorsements be genuine and that any material connections or simulations be obvious to typical users—especially where “authority bias” could sway decisions.
- Defamation and false light (edge cases)
- When AI attributes strong judgments to real people, it risks reputational harm: “X said Y about your writing.” If Y is uncharacteristic or damaging, the person may claim they were portrayed in a misleading way—even if the underlying comments are not defamatory on their own.
- Contractual and enterprise risk
- Grammarly is widely used by companies and schools. Institutions often require accurate provenance, especially for anything resembling expert review or certification. Mislabeling could breach procurement standards and erode trust among risk-averse buyers.
The bigger context: AI personas keep getting companies in trouble
Grammarly’s controversy slots into a pattern:
- Celebrity voices and AI clones: Voice actors and artists have objected to tools that mimic their voices without authorization. Some platforms now geofence or block celebrity name prompts to avoid impersonation.
- AI characters with licensed faces: When tech firms create AI “celeb” avatars, they increasingly strike explicit licensing deals and layer on conspicuous disclosures that “this is AI, not the real person.” That’s the compliant route—and it isn’t cheap.
- Deepfake endorsements in scams: Public figures are routinely “endorsing” products they’ve never seen via synthetic ads. Regulators have signaled they will treat this as both consumer deception and a publicity-rights violation.
The lesson is simple: proximity to real reputations is a liability magnet. The closer an AI feels to “the actual person,” the more rigorous the consent, compensation, and disclosure must be.
What makes “Expert Review” uniquely sensitive
- Authority bias: Users treat feedback differently when they believe it comes from a respected critic or professor. That can materially impact user behavior—whether they adopt edits, pay to unlock more feedback, or trust the product’s overall quality.
- Academic integrity: In classrooms and journals, the provenance of critique matters. Framing synthetic commentary as human expert review can distort peer review norms and mislead supervisors or editors.
- The endorsement premium: Companies know that star power increases conversion. If a product presents advice as though a named luminary co-signed it, that confers a marketing benefit. That benefit is what publicity and endorsement laws regulate.
Responsible alternatives that avoid legal traps
AI products can provide high-quality critique without impersonation. Safer design patterns include:
- Build real expert networks: Partner with actual reviewers under contract, compensate them, and label their contributions clearly. If they use AI to assist, disclose the workflow.
- Use role-based, not person-based personas: “A senior copy editor” or “a sociology professor” communicates expertise without pointing to a specific person.
- Default to neutral voice: Offer robust critique in the product’s own branded tone, and attribute advice to the system—not a person.
- Cite sources, not celebrities: For style or grammar advice, link to style guides or corpora. For domain critique, cite textbooks, papers, or institutional resources.
- Strong disclosures: If any simulation remains, display unmistakable, front-and-center disclosures (“Generated by AI. Not reviewed by [real person].”). Avoid footnote-level fine print.
- Guardrails in generation: Hard-block prompts that ask for a named writer’s style or persona. Enforce no-impersonation policies, and audit outputs for leakage of names and signatures.
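To make the last item concrete, here is a rough sense of what "hard-block" and "audit for leakage" can look like in practice. This is an illustrative sketch only: the protected-name entries and regex patterns are hypothetical placeholders, and a production system would need a maintained registry of names, fuzzy matching, aliases, and multilingual coverage rather than simple substring and regex checks.

```python
import re

# Hypothetical blocklist of protected names. In practice this would be a
# maintained registry with aliases, transliterations, and fuzzy matching.
PROTECTED_NAMES = {"jane example", "dr. sample scholar"}

# Hypothetical phrasings that signal a persona or style request.
IMPERSONATION_PATTERNS = [
    r"\bin the style of\b",
    r"\bas if (?:you were|written by)\b",
    r"\bpretend (?:to be|you are)\b",
]

def names_mentioned(text: str) -> set:
    """Return protected names that appear in the text (case-insensitive)."""
    lowered = text.lower()
    return {name for name in PROTECTED_NAMES if name in lowered}

def should_block_prompt(prompt: str) -> bool:
    """Block prompts that pair a persona/style request with a protected name."""
    asks_for_persona = any(
        re.search(pattern, prompt, re.IGNORECASE)
        for pattern in IMPERSONATION_PATTERNS
    )
    return asks_for_persona and bool(names_mentioned(prompt))

def audit_output(output: str) -> list:
    """Flag generated text that leaks a protected name (e.g. in a signature
    line) so it can be reviewed or stripped before display."""
    return sorted(names_mentioned(output))

if __name__ == "__main__":
    print(should_block_prompt("Rewrite my essay in the style of Jane Example"))  # True
    print(audit_output("Best regards, Dr. Sample Scholar"))  # ['dr. sample scholar']
```

Checking at both ends of the pipeline is the point: the prompt filter stops deliberate impersonation requests, while the output audit catches names the model volunteers on its own.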
Key takeaways
- Persona ≠ permission: Styling AI feedback as if it came from a specific individual likely triggers publicity and endorsement laws unless there’s explicit consent and licensing.
- Disclosures must be unmissable: If the UI suggests a real person did the work, small-print caveats usually won’t save it.
- Enterprise trust is at stake: Academic and corporate buyers are especially sensitive to provenance claims. Ambiguity can cost deals and reputations.
- The regulatory tide is turning: From the FTC’s focus on endorsements to state laws protecting voices and likeness, the bar for persona-based AI is rising.
- Safer patterns exist: Neutral voices, role-based personas, licensed experts, and transparent labeling can deliver value without borrowing someone else’s brand.
What to watch next
- Motions to dismiss and class certification: Early court rulings will clarify whether framing AI output as human expert review is plausibly deceptive or a publicity-rights issue on its face.
- Regulator interest: Expect questions from consumer-protection agencies about endorsements in AI UX. Updated guidance may push for standardized “AI-generated” labels in contexts like reviews and advice.
- Platform policies: App stores, ad networks, and enterprise marketplaces may tighten rules around synthetic endorsements and personas, requiring attestation or technical safeguards.
- Licensing markets for expertise: If courts signal that synthetic expert personas are high risk, more companies will pay to license real experts’ names—and display far bolder disclosures.
- Global compliance: The EU’s emerging AI rules emphasize transparency for synthetic content, and some countries are drafting deepfake endorsement restrictions. Tools with global reach will need region-specific UX and disclosures.
FAQ
What did Grammarly's "Expert Review" do?
It presented writing advice as if it came from specific real-world authors or academics. The lawsuit claims those individuals didn't consent and that users were misled into believing a human expert had reviewed their text.
Why is that a legal problem?
Using a person's identity to promote or sell a service without permission can violate the right of publicity. Framing advice as if it's endorsed by a named expert can also constitute false endorsement under trademark law and deceptive practice under consumer-protection statutes.
Is it ever okay to use a famous person's style in AI?
Emulating a general "style" is a grey area, but attributing output to a named person, or creating a persona that the average user would recognize as that person, is where legal risk spikes. Consent and conspicuous labeling are the safest routes.
Would a disclaimer fix this?
Only if it's clear, prominent, and consistent with the overall experience. If the primary impression is "this person reviewed your work," small disclaimers probably won't cure confusion.
What should companies building expert-like AI do instead?
Use neutral or role-based voices, cite sources rather than people, partner with real experts under contract when needed, and adopt bright-line "no impersonation" policies.
What can users and institutions do right now?
Ask vendors how expert feedback is produced and attributed. Require proof of consent for any named endorsements, and mandate prominent AI-generation disclosures in procurement and classroom policies.
Source & original reading: https://www.wired.com/story/grammarly-is-facing-a-class-action-lawsuit-over-its-ai-expert-review-feature/