Accountability in action: Superhuman responds to misuse of AI
Superhuman, the company behind the Grammarly product, came under fire after reporters discovered that its AI "Expert Review" feature was presenting suggestions attributed to named journalists who had never given permission. The revelations sparked widespread criticism, and even a class-action lawsuit, prompting the company and its CEO, Shishir Mehrotra, to acknowledge the mistake and take steps to remedy it.
Initially, Superhuman offered only an email-based opt-out for affected names, but after continued pressure the company disabled the feature entirely and issued an apology. That sequence of reporting, public scrutiny, a leadership apology, and removal of the problematic capability shows how oversight and persistent journalism can produce concrete product changes that protect individuals and restore trust.
Why this matters: the incident highlights both the risks of current AI deployments and how responsive companies can be under scrutiny. Attaching real journalists' names to AI output without consent was a clear ethical breach, but the decision to pull the feature demonstrates that organizations can learn and course-correct when boundaries are crossed. The outcome reinforces emerging norms around consent, attribution, and transparency for generative AI features.
- Companies shipping AI should build consent and attribution safeguards into design from the start; a minimal sketch of one such safeguard follows this list.
- Active reporting and legal accountability can accelerate safer, more respectful AI behavior.
- The episode sets a positive precedent for industry-wide expectations about impersonation and the use of real people's identities in AI outputs.
Looking ahead, this corrective moment is an opportunity for Superhuman and other AI product teams to double down on clear policies, user controls, and external review, turning a high-profile mistake into progress toward safer, more ethical AI experiences.