
In this tense and revealing interview, Nilay Patel confronts Shishir Mehrotra, CEO of Superhuman (formerly Grammarly), regarding a controversial feature that used the names and "expertise" of real journalists without permission. They debate the ethical boundaries of AI impersonation, the legal "merit" of attribution versus likeness, and whether the future of the creator economy is a bright expansion or an extractive dead end.
The conversation begins with the elephant in the room: a recently shuttered Grammarly feature called Expert Review. This tool claimed to provide writing suggestions "inspired by" famous experts, including Nilay Patel himself and other prominent journalists like Julia Angwin. These names appeared with checkmarks, implying an official endorsement that never existed. Nilay wasted no time addressing the lack of permission and the "hallucinated" nature of the advice provided under his name.
Shishir Mehrotra offered a public apology, admitting that Expert Review was a "bad feature" that didn't align with the company's long-term strategy. He explained that a small team consisting of a product manager and engineers built it to satisfy a user desire: the wish to have an idol or mentor "sitting next to them" while they work. However, the execution fell short of both user and expert expectations.
"It deeply pained me to feel that we under-delivered for them and I'd really like to apologize for that. That was not our intention."
A major point of friction in the interview is the definition of impersonation. Shishir argues that the feature provided "standard attribution" because it linked to the experts' work and stated the advice was "inspired by" them. Nilay, however, counters that putting a name on a generated paragraph that the person never wrote—and would never agree with—is fundamentally deceptive.
Shishir maintains that as long as the AI is "referring" to public work and attributing it, it follows the "human contract" of the internet. Nilay argues that this "attribution" is a hollow defense when the AI makes up content from scratch.
"We should not be able to impersonate you. Period. We did not. If we use your work, if any LLM product or any product at all use your work, they should attribute you."
"This wasn't an attribution. You just made something up and put my name on it... It's not something I ever would say."
The tension escalates when discussing the class-action lawsuit filed by Julia Angwin. Shishir claims the lawsuit is "without merit" because the feature included disclosures, while Nilay points out that state laws in New York and California specifically bar using a person's identity for commercial purposes without consent.
Shishir brings his experience as the former Chief Product Officer at YouTube to the table, drawing parallels between the early legal battles YouTube faced (like the Viacom case) and the current AI landscape. He argues that while the law provides a "minimal standard," the real goal is to build a platform where creators choose to participate because it offers a sustainable business model.
He envisions a future where experts like Nilay can build their own AI agents on platforms like Superhuman Go, using a 70/30 revenue split model similar to the App Store or YouTube. In this vision, instead of AI "stealing" work, it becomes a new distribution channel for an expert's "style" and "judgment."
"Our main goal is to build a platform a lot like YouTube. You should choose to be on our platform... and when you choose your business model, you should get paid for your contributions to it."
Nilay cites a 2026 poll in which the public rated AI less favorably than ICE (Immigration and Customs Enforcement). He argues this is because AI feels purely extractive: it ingests everyone's work in order to eventually replace their jobs, without compensation. Shishir rejects the "extractive" label, suggesting that people are primarily afraid of job displacement, a fear he believes misreads the technology's potential to "expand" human work.
The two discuss the "bits to atoms" pivot, where creators like MrBeast or the Paul brothers have to sell physical goods (water, chocolate) because the value of digital content (bits) has been devalued to zero. Nilay fears AI is the final nail in the coffin for the value of creative labor.
"AI can't replace human creativity, empathy, or emotion... but in the AI era, taste and judgment are more valuable than ever."
As the interview winds down, they touch upon the "SaaS (Software as a Service) Apocalypse." With AI making it easier to "vibe code" (creating software through natural language prompts), Nilay asks why anyone would pay for a subscription like Grammarly when they could just build their own tool using raw AI tokens.
Shishir argues that software is about more than just the code; it's about network effects and doing a specific job well for a group of users. He believes that even if building software becomes a commodity, the platforms that integrate those tools into a seamless workflow (like Superhuman) will remain essential.
The interview concludes with a lingering disagreement on whether AI can truly replicate "taste." While Shishir is optimistic that creators can train agents to represent their methodology and build deeper connections with fans, Nilay remains skeptical that an LLM (Large Language Model) can capture the nuance of a specific creative voice.
Ultimately, the conversation highlights a massive divide between the tech leaders building AI platforms and the creators whose data fuels them. While Superhuman has retreated from its "Expert Review" feature, the battle over who owns an expert's name, likeness, and "vibe" in an AI-driven world is only just beginning.