Business · Tuesday, March 24, 2026 · 2 min read

Superhuman Pulls Controversial AI 'Expert Review' After Impersonation—CEO Apologizes

Source: The Verge AI

TL;DR

Superhuman (the company behind Grammarly) has disabled its AI "Expert Review" feature after reporters discovered it was impersonating named journalists without permission. CEO Shishir Mehrotra apologized, and the company first offered an opt-out before removing the feature entirely — a concrete example of accountability driving safer product behavior.

Key Takeaways

  • Superhuman's AI "Expert Review" used journalists' names without consent and provoked public outcry.
  • CEO Shishir Mehrotra apologized, and the company first offered an opt-out, then fully disabled the feature.
  • Reporting by outlets and a class-action lawsuit led to rapid product change and increased scrutiny.
  • The episode establishes a positive precedent: real-world accountability can steer AI products toward safer, more ethical design.

Accountability in action: Superhuman responds to misuse of AI

Superhuman, the company behind the Grammarly product, moved quickly after reporters discovered its AI "Expert Review" feature was offering suggestions attributed to named journalists without their permission. The revelations sparked widespread criticism — and even a class-action lawsuit — prompting the company and its CEO, Shishir Mehrotra, to acknowledge the mistake and take steps to remedy it.

Initially, Superhuman offered an email-based opt-out for affected names, but after continued pressure the company disabled the feature entirely and issued an apology. That sequence — reporting, public scrutiny, leadership apology, and removal of the problematic capability — shows how oversight and persistent journalism can produce concrete product changes that protect individuals and restore trust.

Why this matters: the incident highlights both the risks and the responsiveness of current AI deployments. While the misuse was a clear misstep, the company's decision to pull the feature demonstrates that organizations can learn and course-correct when ethical boundaries are crossed. The outcome reinforces emerging norms around consent, attribution, and transparency for generative AI features.

  • Companies shipping AI should build consent and attribution safeguards into design from the start.
  • Active reporting and legal accountability can accelerate safer, more respectful AI behavior.
  • The episode sets a positive precedent for industry-wide expectations about impersonation and personhood in AI outputs.

Looking ahead, this corrective moment is an opportunity for Superhuman and other AI product teams to double down on clear policies, user controls, and external review — turning a high-profile mistake into progress for safer, more ethical AI experiences.
