Harvest
AI Summarized Content

THE PEOPLE DO NOT YEARN FOR AUTOMATION | Decoder

Nilay Patel argues that a lot of today's AI hype comes from a mindset he calls "software brain"—the habit of seeing the world as databases + algorithms + loops that can be controlled if you just structure the data and write the right code. He explains why that mindset is incredibly powerful in business and computing, but breaks down when it hits real life—because people aren't neat datasets and society isn't a predictable machine. His main conclusion is blunt: the backlash to AI isn't a "marketing problem"—it's a human problem created by trying to force life to become legible to software.


1. The "Software Brain" idea—and why AI backlash is growing

Nilay opens with a concept he's been turning over for weeks while covering AI: "software brain." It's not "software" itself—it's a way of seeing everything through a software lens.

"I've been calling it software brain… a particular way of seeing the world that fits everything into algorithms and databases and loops."

He introduces the show and frames the point: software thinking basically built the modern world. He references Marc Andreessen's famous 2011 line as the archetype of this worldview:

"Software brain is powerful stuff… a way of thinking that basically created our modern world."

"Marc Andreessen… called it in 2011… 'Software is eating the world.'"

But AI has "turbocharged" software brain—and Nilay thinks it explains the widening gap between tech industry excitement and public dislike.

"Software thinking has been turbocharged by AI… [explaining] the enormous gap between how excited the tech industry is… and how much regular people are growing to dislike it."

Then he brings in data showing that dislike isn't subtle—it's escalating, especially among Gen Z (who also use AI heavily). He cites multiple polls: an NBC News poll where AI's favorability is shockingly low, and a Quinnipiac poll showing most Americans expect more harm than good, with high levels of concern and low excitement.

"It's fair to say that a lot of people hate AI—and that Gen Z in particular seems to hate AI more and more the more they encounter it."

"Poll after poll shows that Gen Z uses AI the most and has the most negative feelings about it."

He also highlights a Gallup result: Gen Z hopefulness dropping year-over-year, while anger rises.

"Only 18% of Gen Z was hopeful about AI… anger is growing."


2. "Social permission," real-world backlash, and a clear line against violence

Nilay says tech leaders know AI isn't popular, and he plays a clip of Microsoft CEO Satya Nadella describing the need for the industry to "earn social permission," especially around energy use for data centers.

"This industry… needs to earn the social permission to consume energy because we're doing good in the world."

Nilay immediately counters: the industry hasn't earned that permission yet, and he points to the political reality—local opposition to data centers, politicians losing elections over it, and even violent incidents targeting people connected to AI and data center expansion.

He makes a strong, repeated point: violence is unacceptable, and it ultimately undermines meaningful opposition.

"This violence is unacceptable."

"If you want to meaningfully oppose AI… speak loudly with your dollars… your attention… and even more loudly with your votes."

He adds a deeper critique: when people feel they have no agency—when the system makes them feel helpless—it can create nihilism, and powerful institutions have contributed to that mood.

"The political process… [should] make people feel empowered, not helpless."

"The violence is a result of that helplessness and nihilism."

Then he ties the emotional temperature back to what some AI leaders are saying out loud: that AI may erase huge categories of work. He plays a clip of Anthropic CEO Dario Amodei warning about entry-level white-collar jobs being replaced and an employment pipeline drying up.

"Entry-level… white collar work… may indeed… be replaced by AI systems."

"We may… have a serious employment crisis on our hands."

Nilay's takeaway is that this is where the "true gap" shows up: tech folks see an adoption problem, while regular people hear threats to livelihoods and stability.

"What I see… is the true gap between the tech industry and regular people… the limit of software brain."


3. The tech industry's blind spot: "AI doesn't have a marketing problem"

Nilay says many tech leaders misdiagnose public dislike as a marketing issue. He points to OpenAI spending big money on promotion and Sam Altman explicitly saying better marketing could fix AI's popularity.

"It feels like someone needs to say this clearly… AI does not have a marketing problem."

His argument is simple: people aren't reacting to ads—they're reacting to what they see every day: ChatGPT everywhere, AI Overviews in search, and what he calls "slop" flooding feeds.

"People experience these tools every single day."

"You can't advertise people out of reacting to their own experiences."

That, he says, is the core disconnect:

"This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives."


4. What "Software Brain" means (and why it breaks in the real world)

Nilay defines software brain as seeing the entire world as databases you can control using structured language (code).

"The simplest definition… [is] when you see the entire world as a series of databases that can be controlled with structured language—software code."

To make it concrete, he lists familiar companies as "databases with interfaces":

  • Zillow = database of houses
  • Uber = database of cars + riders
  • YouTube = database of videos
  • (Even his own site) The Verge = database of stories

"Once you start seeing the world as a bunch of databases, it's a small jump to feeling like you can control the world if you can just control those databases."

Then comes the warning: databases don't perfectly match reality. He uses a political example—Elon Musk and "DOGE" going into government, trying to take control of databases first, then colliding with the messiness of reality.

"They ran into the fact that the databases didn't really reflect reality… This is the limit of software brain."

His big line is the one that keeps coming back:

"The government isn't just a bunch of databases… People aren't computers."

And he adds a detail that will feel familiar to anyone who has worked with data systems: when reality doesn't match the database, organizations often "fix" the database instead of the world.

"At some point, the database stops matching reality. And at that point, we usually end up tweaking the database, not the world."

From there, he claims the AI industry has "lost sight" of this and is making a dangerous request: instead of software adapting to people, people must conform to the database so AI can work better.

"The ask is for more and more of us to conform our lives to the database, not the other way around."


5. Law vs. code: why the similarity is tempting (and misleading)

Nilay shifts to an example he thinks about constantly: the claim that AI is coming for lawyers.

"The AI industry loves to talk about not needing lawyers anymore…"

He says he understands why people believe it—because "software brain" and "lawyer brain" overlap in a really seductive way. (He used to be a lawyer; his wife is a lawyer; many friends are lawyers.)

He explains the overlap:

  • Code is structured language that makes things happen in the real world (through software systems).
  • Law is structured language (statutes, citations) that makes things happen in the real world (through institutions).

"The overlap between software brain and lawyer brain is very deep—alluringly deep."

He also compares precedent in both fields:

  • Law relies on case law used repeatedly.
  • Engineering relies on code libraries reused repeatedly.

"Both lawyers and engineers do their best to use formal structured language to guide the behavior of complicated systems in predictable and potentially profitable ways."

He references Lawrence Lessig's 1999 book Code and Other Laws of Cyberspace, saying it remains relevant.

Then he explains how the similarity "trips people up": people try to issue commands to society like society is a computer. His favorite example is the classic viral Facebook post claiming Mark Zuckerberg has no right to copy your photos if you paste a legal-sounding disclaimer.

"People are constantly trying to issue commands to society at large like it's a computer that will obey instructions."

And then the key correction:

"The law isn't actually code—and society and courts aren't computers."

He explains this in plain terms: the legal system is not fully predictable because it isn't fully deterministic. You can't just plug facts into rules and reliably predict outcomes—because law needs interpretation.

"The law is not deterministic all the time."

"Our legal system actually requires ambiguity."

He emphasizes why that matters: ambiguity is why lawyers exist, why people hate lawyers, and why there's almost always room to argue another side.

"Ambiguity is what makes lawyers lawyers."

"It's always possible to find the gray area in the law."

This is the collision point: something that looks computable… isn't.

"This thing that looks like a computer isn't anything at all like a computer."

He then discusses a reform-ish vision: making law more like code—verifiable, consistent, automated. He mentions Bridget McCormack (former chief justice of the Michigan Supreme Court) pitching an AI arbitration system on Decoder, arguing people might accept even a worse outcome if the process feels fair—and AI can "listen" endlessly.

"People… will accept a worse outcome from an automated system as more fair if they feel heard."

"If there's one thing AI can do, it's sit there and listen all day and all night."

Nilay doesn't fully endorse it—but he uses it as an example of pure software brain: forcing the world to behave like a computer, then letting AI issue the instructions.

"The idea that we can force the real world to act like a computer—and then have AI issue that computer instructions."


6. Why AI fits business so well (and why that worries people)

Nilay zooms out to the enterprise world, where AI actually does fit neatly with software brain. He gives a cynical-but-real example: companies don't always hire consultants to truly study and improve operations—they hire them to produce slide decks that justify layoffs to boards and shareholders.

"You hire them to generate slide decks that justify layoffs…"

His point: this kind of work is exactly what AI is good at—repeatable, template-driven, language-and-data heavy. So consulting firms will automate it, and layoffs are already happening.

"Any repetitive business process that looks like code talking to a database is up for grabs."

That's why Anthropic has focused on enterprise, and why OpenAI is pushing business use: modern business is already a loop of collecting data, analyzing it, and acting—over and over.

"So much of modern business is already software collecting data… taking action on it over and over again in a loop."

He adds an important distinction: businesses can centralize and standardize their data. They can force their systems to talk to each other.

"Businesses also control their data—and they can demand that all of their databases work together."

And he lands on a sharp observation about where "cutting edge" marketing is heading:

"The absolute cutting edge of advertising and marketing is automation with AI. It's not being creative."

Then he draws the boundary line:

"But not everything is business. Not everything is a loop."

"The entire human experience cannot be captured in a database."


7. "The people do not yearn for automation" 🧠➡️👤

Now Nilay hits the title idea directly. Regular people don't automatically see "more code" as "more opportunity." They often see it as extra complexity, surveillance, or loss of control.

"Regular people don't see the opportunity to write code as an opportunity at all."

"The people do not yearn for automation."

He uses his own life as a contrast: he loves smart home automation (lights, shades, climate control—automated in tons of ways). Yet even with giant companies pushing smart homes for over a decade, most people still don't care.

"I'm a full-on smart home sicko…"

"Huge companies… have struggled… to make regular people care… And they just don't."

And he's blunt:

"AI isn't going to fix that."

Then he explains a hugely practical reason: most people's data is fragmented, and they prefer it that way. Email sits in Gmail, messages in iMessage, schedules in Outlook, workouts in Peloton—systems don't connect, and there's often no benefit (and plenty of creepiness) in connecting them.

"Most people aren't collecting data about every single thing they do."

"Those systems don't talk to each other… and they maybe never will because there's no reason for them to."

He notes that even thinking about how much of life is in databases makes people uneasy—because it points straight at surveillance and power for tech companies.

"No one wants to be surveilled constantly—especially not in a way that makes tech companies even more powerful."

But he says that database-izing everything is exactly what the AI industry is obsessed with: meeting apps stuffing in AI note-takers, tools like Canva connecting deeper into corporate systems, etc.

"Getting everything in a database so that software can see it is a preoccupation of the AI industry."


8. "Make yourself legible to AI" is a doomed strategy

Nilay brings in Ezra Klein's reporting from Silicon Valley, describing a culture where AI leaders are racing to integrate AI into everything—not just by using AI, but by making their lives fully accessible to it.

"They are racing one another to fully integrate AI into their lives…"

"That doesn't just mean using AI. It means making themselves legible to the AI."

Klein's point (as Nilay presents it) is that the AI becomes more valuable the more you open up—files, email, calendars, messages.

"The more of your life you open to AI, the more valuable the AI becomes."

Nilay then stakes out a product-design principle from years of reviewing tech: it's usually a failure when the product asks humans to contort themselves to fit the machine.

"It is a failure when you ask people to adapt to computers. Computers should adapt to people."

And he delivers the core verdict:

"Asking people to make themselves more legible to software… to turn themselves into a database, is a doomed idea."

"It's an ask so big I can't imagine a reward that would make it worth it for anyone."

He piles on why this "deal" sounds horrible to normal people: the same industry asking for total access is also warning about job elimination, rewriting the social contract, and even catastrophic cybersecurity risks.

"Does this sound like a good deal to you?"

"Can you market your way out of this?"

His answer: this bargain only makes sense if you already have software brain—if your instinct is to flatten everything into controllable datasets.

"This only makes sense if you have software brain."


9. Who AI feels exciting for—and why everyone else experiences it as a threat 😬

Nilay describes the kind of people currently getting the most joy from AI: those who naturally see repeatable tasks everywhere, and want to build automations, agents, and systems (the subtitles mention "swarms of … agents," meaning lots of automated AI workers running tasks in parallel).

"They're people who look at the world and see opportunities for automation… to collect data and build software."

For them, AI is great—and may permanently change our relationship to computers.

"AI is great for them… [and] will probably change our relationship to computers forever."

But for everyone else, AI doesn't feel like liberation—it feels like a hungry system demanding more data, more attention, and more dependence.

"For everyone else, AI is just a demanding slop monster. It's a threat."

He adds nuance: he's not claiming regular people never use spreadsheets or organizing tools, or that AI won't become useful over time. Some people like tracking data (he references his own wearable).

"I'm not saying regular people don't use Excel… or that AI won't be useful…"

"A lot of people enjoy data and tracking different parts of their lives."

The point is proportion and humanity: not everything should be measured, automated, and optimized—and not everything can be.

"These things aren't everything."

"Not everything about our lives can be measured and automated and optimized. It shouldn't be."

He ends with a sweeping critique: the tech industry is rushing to put AI everywhere at enormous cost—energy, emissions, manufacturing capacity, RAM supply—while staying trapped in software brain. And in doing so, they're implicitly asking people to become "less human."

"The tech industry is rushing forward to put AI everywhere at enormous cost…"

"Without realizing they are also asking people to be fundamentally less human."

And then the closing punchline—this won't be solved by superficial PR moves.

"And then they're sitting around wondering why everyone hates them. I don't think a couple haircuts are going to fix it."


10. Outro: how to contact the show

Nilay closes by thanking listeners and inviting feedback via email, and on social platforms (Threads / Bluesky). He asks people to share and subscribe, and notes Decoder is a Verge production.

"If you'd like to let us know what you thought… drop us a line."

"If you like Decoder, please share it with your friends and subscribe…"


Final Thoughts

Nilay's through-line is that AI backlash is rational when people experience AI as flattening life into data, demanding access, and threatening jobs—rather than adapting to human needs. Software brain built a lot of the modern world, but it hits a hard limit when it tries to turn society, law, and daily life into neat, automatable loops. His warning is simple: if the future of AI requires people to "become a database," most people will refuse—and no marketing campaign can change that.

Summary completed: 5/8/2026, 10:27:28 AM
