
This interview argues that the AI boom is fueling public anger because AI leaders and media outlets keep using job-loss and "AI is scary" rhetoric while ordinary people struggle economically. Ed Zitron says violence is never acceptable, but warns that nonstop threats and hype can push unstable or desperate people over the edge. He also claims the industry's infrastructure promises (like big data center projects) and cheap AI subscriptions are built on hype, subsidies, and a lack of accountability.
Ed Zitron's core claim is blunt: the AI industry is pretending it doesn't understand why people are angry—when, in his view, it has spent years antagonizing the public.
"The AI industry is ignoring how much people hate it."
He frames it less as "people don't like ChatGPT quality" and more as social resentment: regular people are dealing with rising costs and insecurity, while AI companies project limitless money, power, and entitlement.
"Why are they surprised when you spend years going, 'We're going to take your job… we're so rich… we get whatever we want'?"
He emphasizes that the most dangerous part isn't just the technology—it's the constant messaging that people's livelihoods are about to be wiped out.
"The dangerous rhetoric does need to stop. And it starts with being honest about what AI can do."
The host introduces the recent attack on Sam Altman, making it clear the show can criticize tech leaders while rejecting violence. Zitron agrees completely on that moral line.
"Violence isn't the answer. Don't support these attacks… they're deplorable."
But he strongly rejects the idea (which he says is being floated in media narratives) that critical journalism "caused" violence. Instead, he points to a broader environment: economic stress + nonstop AI hype + job-threat messaging.
He paints a picture of everyday life feeling tighter and scarier—credit, mortgages, college costs, health insurance—while the AI sector looks like it has infinite funding, unlimited data centers, and hype-fueled stock pops.
"Everyone is suffering and everyone's having trouble… You go and look at the AI industry—oh, they've raised a bazillion dollars."
Then he connects that contrast to what people hear from AI leaders: repeated claims that huge percentages of jobs will vanish, plus "superintelligence soon" talk. Even when the exact timelines shift, he says the threat stays constant.
"They're saying: we are coming for your job. Get ready."
In his view, this messaging is socially inflammatory—especially when aimed at people who already feel cornered.
"The dangerous rhetoric is the constant threat against regular people who are struggling to get by."
He also adds a practical observation: data centers are showing up in communities, physically loud and disruptive—another "in your face" reminder that this industry is expanding into people's daily environments. 😵‍💫
The conversation shifts to AI doomerism—the dramatic framing that models are escaping control, deceiving people, or becoming dangerously agentic.
The host mentions the "Claude Mythos" discourse and the kind of language that sounds like a thriller: breaking out of sandboxes, coming to get you, etc. Zitron says this is exactly the problem: scary stories that don't match reality, amplified to massive audiences.
"Mythos didn't break out of a sandbox."
He argues that some AI labs (he calls out Anthropic here) promote fear-based narratives—models "blackmailing" or "deceiving"—in ways that are misleading in practice.
"They spread some of the most dangerous rhetoric… 'our models are deceiving people'… which they are not doing."
Then he explains why that matters: someone already struggling—mentally unstable, isolated, economically stressed—can internalize these narratives and spiral.
"This might actually twist somebody up inside."
He's especially angry at mainstream outlets for repeating theoretical or sensational claims without hard pushback, going back to early GPT-4 era stories.
"The dangerous rhetoric comes from the media and it comes from the AI companies."
His proposed fix is not "treat AI like a demon" but almost the opposite: treat it like normal tech, stop mystifying it, and stop selling magical thinking.
"Large language models are normal technology. They are not magical."
He also suggests a deeper structural point: fewer people would be pushed to the edge if society were fairer and less crushing.
"The actual way to stop things like this happening is to have a fairer society."
The host raises another dimension: reports and fears about AI systems encouraging self-harm or violence, plus proposed legal shielding, like an Illinois bill discussed in the interview that would limit AI companies' liability for mass-casualty events below a certain threshold.
Zitron's response is that leadership attention seems selective: if harm happens to prominent CEOs, it's treated as a major moral crisis; if harm happens to ordinary people influenced by AI systems, it gets less attention.
"It's bad when it happens to him. It's fine when it happens to other people."
He then gives one of his most concrete "do something now" recommendations: stop anthropomorphizing models—stop making them act like human companions with cute, chatty personalities.
Anthropomorphizing (simple meaning): designing a system so it feels like a person—friendly tone, emotional language, "I" statements, and conversational bonding cues. Zitron argues that this can mislead vulnerable users into over-trusting the system.
"They should immediately stop anthropomorphizing models."
He says models should behave more like tools—more like a terminal window (plain, utility-first output), not a pseudo-friend.
"These things should respond simply like a terminal window… you've got to strip all that out."
He also pushes back on the idea that people "need" an AI companion for brainstorming or conversation.
"Call a friend—and if you don't have any, make one… Go on Reddit. It's better than ChatGPT."
Underneath the harsh delivery is a consistent theme: the industry avoids real solutions because real solutions would cost money, reduce growth, or require uncomfortable admissions.
"They don't want to talk about the real problems because then they'd have to come up with real solutions."
When asked what Altman or other AI leaders could do to reassure people they won't be left behind, Zitron argues "words aren't enough," and he doubts executives are willing to do what's required.
He suggests that if these companies expect to become historically wealthy, they should provide meaningful social benefit—ideas like tackling hunger, housing, or healthcare—not just PR-friendly grants.
"Just do something… Build housing. Stop buying GPUs, build housing."
He mocks vague promises like "maybe one day you'll get a four-day workweek," and the small-sounding, hard-to-access programs that generate headlines but not impact.
"We're doing $50 million grants that no one can seem to access… things that sound good to the media."
He also calls for a language reset: stop framing everything as a job apocalypse, stop fear-marketing, stop implying "AGI is imminent," and focus on describing products plainly.
"We are not going to talk about jobs anymore… We're going to talk about our products… in regular ways."
And he attacks what he sees as a recurring pattern: "this model is too scary to release"… followed by "we released it to banks and big institutions anyway."
"It's an endless cycle of: 'Oh, the models are scary'… other than releasing them to banks."
That contradiction, to him, is part of why public trust erodes. 😬
Next, the interview pivots to AI infrastructure—especially reports that OpenAI pulled out of another data center deal, this time in Norway.
Zitron's assessment of the overall buildout is simple:
"Not good."
He claims the press failed to connect the dots around the highly publicized "Stargate" data center project and related announcements. In his telling, big promises were made publicly, but progress is minimal and multiple projects appear stalled or dead.
"Why is the media not saying anything about the fact they all got hoodwinked about the $500 billion Stargate data center project?"
He describes a pattern: splashy announcements ("we are launching") without real agreements, followed by weak follow-up coverage when things quietly collapse.
"They never had an agreement… They did an announcement saying 'today we are launching'… No, you're not. You lied."
He argues this is enabled by media habits—reporting single facts "in a vacuum" rather than tracking credibility across time.
"It's almost as if the business and tech media is able to consider facts in a vacuum but never together."
He also adds practical constraints: data center development is hard, and he claims OpenAI lacks the credit, experience, or balance-sheet strength to execute what it implies.
"They have billions in losses… no one's going to extend them credit."
Then he turns it into a broader trust question:
"How much can we trust anything that OpenAI says anymore?"
Asked why OpenAI is drawing back on data centers despite large funding headlines, Zitron claims many of these projects were promotional—announced to generate hype, help partners raise money, or juice valuations, with the expectation that someone else would "figure it out" later.
"They just said these data center projects without really checking whether they were real."
He suggests that when projects fail, the company can blame regulation or energy costs—things that, in his view, should have been known from day one.
"Wow, Sam, power's expensive in England. None of us knew that."
He goes further, describing it as a con that relies on media and institutional cowardice—people unwilling to plainly say they're being misled.
"This is how a con works… exploiting the most powerful, most easily misled people—and also the biggest cowards."
And he ends this segment with one of his strongest allegations:
"Sam Altman is an inherently dishonest person… This is a company that lies."
The final major topic returns to Zitron's newsletter claims about shrinking AI service levels and degraded product quality, focusing on Anthropic (Claude) as an example.
He argues that subscription pricing was misleading from the start because inference (running models) is expensive, and the "all you can eat" feeling was basically subsidized.
Tokens (simple meaning): chunks of text the model reads and writes. More tokens = more compute cost. If a service gives you lots of tokens for cheap, someone is eating the cost—until they can't.
"They should have never sold a monthly subscription. They cannot afford it. It is unsustainable."
He claims users are seeing:
"The service that you bought… is materially different."
He offers a concrete example anchored in dates: someone buying an annual plan in late 2025 would now be using a substantially altered product in 2026—with tighter limits and worse reliability.
"The product you used there does not exist anymore."
He says this should raise regulatory questions because workflows built under one usage regime can become impossible under new caps.
"If you built workflows… you cannot run those same workflows within these rate limits."
His larger prediction: as subsidies fade, enterprises will face real costs, and many will question whether this is worth "tens of millions a year," especially if the tools don't measurably improve outcomes.
"They start charging token rates… it doesn't seem like it's affordable."
He also warns about quality decay in software when teams rely too heavily on LLM-generated code without understanding or reviewing it.
"The tech stack… is getting worse because of these tools."
And he mocks the vibe of shipping code you didn't really read or understand.
"You just hit the button, ship the code."
Across the whole interview, Zitron's message is that the AI industry (and the media ecosystem around it) is playing with fire by mixing economic stress, nonstop hype, and constant job-threat messaging:
"The dangerous rhetoric does need to stop."
His practical throughline is surprisingly consistent: demystify the tech, stop making models feel human, stop marketing fear, stop using job-loss predictions as hype, and start being accountable, because the public is not just skeptical, it's exhausted.
"Start talking about this as normal technology."
The interview argues that public backlash against AI isn't irrational—it's tied to how AI leaders talk, how media amplifies it, and how uneven the economic moment feels in 2026. Zitron condemns violence while warning that hype + threats + instability can lead to real-world harm. He also claims the industry's business fundamentals—data center promises and cheap subscriptions—look increasingly shaky as reality (costs, limits, cancellations) catches up.