
This documentary series follows VICE reporter Ben Makuch as he investigates how hacking, surveillance, and malware have become tools of geopolitics—used by governments, activists, criminals, and militias. Across multiple case studies (from Stuxnet to the Sony hack, Anonymous, the NSA's elite hackers, spyware mercenaries, Syria, and China's cyber-espionage), the show argues we've entered an era where digital actions can cause real-world damage and where blaming the right actor is often painfully hard. The big takeaway: cyber conflict is now a permanent layer of modern life—and "there's no putting this back."
The series opens with a simple but unsettling idea: cyber conflict is happening constantly, even when nobody hears explosions. Ben frames hackers as major players in modern power struggles—sometimes working for states, sometimes for extremist groups, and sometimes just because they believe they're doing the right thing.
"There are conflicts being waged all around us—ones we can't see."
"Hackers are poised to dominate the 21st century, reshaping geopolitical landscapes."
Ben explains that one particular computer virus made the world realize how far cyber conflict had evolved: Stuxnet (first publicly discovered in June 2010). To understand why it mattered, the story rewinds to the early 2000s, when the U.S. feared Iran was secretly building nuclear weapons. The UN imposed sanctions, the U.S. and Israel threatened war, and then—rather than bombs—something new appeared: a piece of malware that looked like sabotage.
Ben visits Symantec (the subtitles repeatedly say "Semantic," but the context is clearly Symantec) to meet researcher Eric Chien, one of the people who dissected Stuxnet. Eric sets the tone by describing how unusually complex it was: most malware takes minutes to understand; this took months.
"The average threat… can take us 5 to 20 minutes… and Stuxnet took us months—more than three months."
Eric explains that Stuxnet exploited multiple zero-day vulnerabilities—security holes unknown to the public and the vendor, and therefore unpatched. That made it especially dangerous: you could be compromised even if you did "nothing wrong"—no clicking, no downloading.
"Your computer just has to be on… you don't have to double-click on any files… and so that means you have no way to protect yourself."
And then comes the shocking detail: Stuxnet didn't have just one zero-day.
"This thing had four zero-days inside of it."
That alone hinted at a huge budget and serious engineering. But the biggest clue was that the code referenced SCADA (supervisory control and data acquisition: the industrial control technology used in power plants, factory automation, and robotics).
"We had never seen a threat that mentioned anything to do with SCADA."
"This isn't like two kids in the basement… This thing had a full-on framework… quality assurance behind it."
Eric's team discovered Stuxnet targeted Siemens software (Step7 / WinCC) used to control PLCs (Programmable Logic Controllers)—special-purpose computers that turn digital commands into physical actions (motors, valves, turbines, centrifuges, heating/cooling systems, etc.). Eric admits that at first, even his team didn't fully understand PLCs—so they publicly asked experts for help.
"Contact us… because we didn't even know what a PLC was at that time."
Ben also interviews Sean McGurk from the U.S. Department of Homeland Security's cyber branch at the time. McGurk describes Stuxnet like a kinetic weapon (a physical weapon such as a missile): it has a delivery method to reach the target, and a payload that causes damage.
"If you think of Stuxnet like… a missile… you had the delivery vehicle… and then the payload itself."
He highlights what made it unprecedented:
"Normal malware doesn't go after control systems. And this was specifically focused on control systems."
The investigation eventually tied Stuxnet to Natanz, a uranium enrichment facility Iran had not properly declared, raising fears it was intended for weapons. Nuclear-policy expert James Acton explains why enrichment is "dual-use": the same process can fuel reactors or help make weapons-grade material.
"Any enrichment is inherently sensitive… you can use it for fuel… or… nuclear weapons."
Natanz's scale also looked suspicious to observers.
"It was scaled as though it was right for making enriched uranium for weapons…"
The documentary then connects the geopolitical pressure points: sanctions, Iranian program suspension (2003), restart (2005), and mounting Israeli pressure by 2009.
Stuxnet is explained in an easy-to-visualize way: it records normal operations (about 30 days) so it can replay "normal-looking" data to operators while sabotaging the machinery. In other words, the operators "see" everything is fine while it's breaking inside.
"Like a security camera, the virus records 30 days of normal centrifuge operation… then… plays back the pre-recorded data so operators… can't see the infection."
Eric explains that 30 days wasn't random—it matches how long it takes to fully load a cascade of centrifuges with uranium gas, meaning the sabotage was timed for maximum damage. Centrifuges normally spin at about 1,000 Hz, but the malware forced them to speed up to 1,400 Hz or slow to 2 Hz, causing violent instability.
"They would… shatter… shards of aluminum flying… uranium gas leaking everywhere."
Even the emergency shutdown ("big red button") wasn't safe—because that signal also went through computers, and Stuxnet hijacked it.
"The operators were doomed. The plant was doomed."
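The deception described above can be captured in a toy Python simulation (all names, numbers, and sampling rates are illustrative, not taken from the real malware): a recorder captures normal telemetry, then the operator console replays that recording while the actual hardware is driven to destructive speeds.

```python
# Toy simulation of Stuxnet's record-and-replay deception as the episode
# describes it. Everything here is an invented stand-in for illustration.
import random

NORMAL_HZ = 1000          # nominal centrifuge speed cited in the episode
ATTACK_HZ = [1400, 2]     # destructive overspeed / near-stall values

def normal_telemetry(days=30):
    """Record ~30 days of ordinary readings (one sample per day here)."""
    return [NORMAL_HZ + random.uniform(-2, 2) for _ in range(days)]

def run_plant(days, recording, sabotage=False):
    """Yield (actual_hz, displayed_hz) pairs for each simulated day."""
    for day in range(days):
        actual = ATTACK_HZ[day % 2] if sabotage else NORMAL_HZ
        # During sabotage the console replays the old recording,
        # so operators still see ~1000 Hz on their screens.
        displayed = recording[day % len(recording)] if sabotage else actual
        yield actual, displayed

recording = normal_telemetry()
for actual, shown in run_plant(5, recording, sabotage=True):
    print(f"actual: {actual:6.1f} Hz   operator sees: {shown:6.1f} Hz")
```

Running it prints a small table where the "operator sees" column stays near 1,000 Hz while the "actual" column swings between 1,400 Hz and 2 Hz, which is the whole trick: the humans watching the screens had no reason to intervene.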
Stuxnet becomes the first publicly known cyber weapon to cause physical destruction.
Natanz wasn't connected to the internet—so Ben asks: how did it get infected? He meets an operational security expert ("Darknet J") who demonstrates the likely method: an infected USB stick used by someone who physically entered the facility.
"It jumped the air gap by traveling on a USB stick…"
The demo shows how Windows can load a malicious payload just by viewing a folder (via icon/shortcut handling), leading to full control: writing to disk, stealing credentials, and spreading through local networks.
"It can have complete control of your computer…"
Ben summarizes the implication: someone walked it in, possibly unknowingly.
After Stuxnet's details were published, Natanz shut down. Soon after, Iranian nuclear scientists were attacked—an event that frightened the researchers involved, making them feel they'd stumbled into an intelligence war.
"We would look in our rearview mirrors all the time… you know, I would see a motorcycle…"
Iran blamed Israel (and later accused the U.S. too), but evidence stayed murky—until a major turning point: a New York Times report two years later claiming the U.S. (with Israel) created Stuxnet as part of a covert operation called "Olympic Games." Journalist Kim Zetter argues the U.S. link is extremely likely—pointing to how carefully the malware was constrained to specific configurations.
One memorable line captures that it wasn't just engineers involved—it looked like a project with heavy legal oversight.
"You can see lawyers' fingerprints all over Stuxnet."
Officials still avoided confirmation.
"To this day, the US government will not confirm or deny its role…"
Experts agree Stuxnet likely slowed Iran's program (estimates range from 6 months to 2 years) and may have reduced pressure for immediate airstrikes, buying room for diplomacy. But several interviewees argue it also backfired politically—convincing Iran it was under siege and accelerating investment in cyber capabilities.
"It delayed Iran's program… but… convinced Iran that they were under siege."
"It was a technical response that… slowed the program down but… politically… helped to accelerate the program."
Iran ultimately agreed in 2015 to limit its nuclear program in exchange for sanctions relief, but the documentary argues Stuxnet triggered something else: a cyber arms race.
"This was an act of war… without there being a war."
"The US opened a door that everyone is going to walk through."
DHS veteran Sean McGurk compares Stuxnet to the first nuclear test—an irreversible demonstration that changes the world.
"Stuxnet to me was a Trinity moment."
"There's no putting this back… Pandora's box was now out in the open."
After Stuxnet, the series widens the lens: if malware can break centrifuges, it can potentially disrupt power, water, pipelines, dams, transportation, finance, and healthcare—the "boring" systems that keep society running.
"The industrialized world runs on an infrastructure that we take for granted."
Industrial control engineer Joe Weiss explains the nightmare of diagnosing failures: you can see the lights go out, but you may not know whether cyber caused it.
"What you don't know is: did cyber play a role?"
Ben visits a California power facility and hears allegations that China hacked systems in the early 2000s, targeting the California Independent System Operator (which coordinates grid operations). The point isn't that they definitely flipped switches—but that the access could have enabled massive disruption.
"They could have affected… power to hundreds of thousands of customers."
Protocol expert Meredith Patterson explains that industrial systems often rely on complex communication "languages" (protocols). When multiple vendors implement them differently, the mismatches can create vulnerabilities—like machines "speaking different dialects" and misunderstanding each other.
Her key point is blunt: if an attacker can control the inputs, they can send false or malformed commands and potentially cause dangerous real-world effects.
"It is remarkably easy to just mess with the temperature… and catch the entire plant on fire."
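Patterson's point can be illustrated with a minimal Python sketch (the wire format is invented, not a real industrial protocol): two implementations of the same length-prefixed message format agree on well-formed frames but diverge on malformed ones, and that divergence is where attackers live.

```python
# Two "dialects" of one invented length-prefixed frame format:
# byte 0 declares the body length, the rest is the body.

def parse_strict(frame: bytes):
    """Reject any frame whose declared length doesn't match reality."""
    declared = frame[0]
    body = frame[1:]
    if declared != len(body):
        raise ValueError("length mismatch")
    return body

def parse_lenient(frame: bytes):
    """Trust the declared length and silently ignore trailing bytes."""
    declared = frame[0]
    return frame[1:1 + declared]

# A well-formed frame: both implementations agree.
ok = bytes([3]) + b"ABC"
assert parse_strict(ok) == parse_lenient(ok) == b"ABC"

# A malformed frame: the strict parser rejects it, the lenient parser
# accepts a truncated command. Two devices on the same bus have now
# "understood" different things from identical bytes.
bad = bytes([2]) + b"ABC"
print(parse_lenient(bad))           # b'AB'
try:
    parse_strict(bad)
except ValueError as e:
    print("strict parser:", e)      # strict parser: length mismatch
```

When an attacker controls the inputs, these disagreements let one device's "harmless" interpretation become another device's dangerous command.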
Former DHS Secretary Michael Chertoff warns that as the Internet of Things makes everything "smart" (and connected), attackers may affect physical operations through wireless entry points. He also raises a hard legal question: what counts as an act of war in cyberspace—especially if cyber tools could "blow up a power plant" or cause an airliner crash?
"We've got to begin to think about… the rules of war… if… you wind up with a cyber war."
Security consultant Chris Kubecka shares a dramatic example: Shamoon malware (2012) wiping 35,000 computers at Saudi Aramco. People physically unplugged machines to stop spread. About 85% of IT systems were knocked out—servers, payroll, databases, even VoIP phones. She believes it was intended to hit production systems too.
"Individuals… physically pulled plugs…"
"It appeared that the attack was meant to target the production systems…"
She says critical infrastructure attacks have increased since Stuxnet and Shamoon, partly because attacker curiosity grows and more of these systems are openly connected to the internet.
"As curiosity peaks… attacks are going to get more and more."
U.S. officials later called the risk a "cyber Pearl Harbor."
"The collective result… could be a cyber Pearl Harbor."
Ben learns about Shodan, a search engine that indexes internet-connected devices—not just websites. Its creator John Matherly shows a globe of exposed control systems, many with no authentication.
"These are control systems… exposing the raw protocols… There's no authentication… you just connect and you have full access."
The U.S. appears as a dense cluster simply because it's highly connected.
"America is just a big red blob."
Shodan reveals how easily "private" industrial tech ends up reachable from the public internet, often because engineers wanted convenient remote maintenance.
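The kind of result Shodan surfaces can be mimicked offline. The records below are hypothetical and only loosely modeled on Shodan's JSON banners, but the ports are real industrial protocols (Modbus on 502, Siemens S7comm on 102), neither of which has built-in authentication—so "reachable" effectively means "controllable."

```python
# Hypothetical device records (field names modeled loosely on Shodan's
# banner JSON; the IPs and values are invented for illustration).
SAMPLE_RESULTS = [
    {"ip_str": "198.51.100.7", "port": 502, "product": "Modbus"},
    {"ip_str": "203.0.113.12", "port": 443, "product": "nginx"},
    {"ip_str": "192.0.2.40",   "port": 102, "product": "Siemens S7"},
]

# Well-known ICS ports: Modbus/TCP (502) and S7comm over ISO-TSAP (102).
ICS_PORTS = {502, 102}

# Filter for exposed control systems -- the red dots on Matherly's globe.
exposed = [r for r in SAMPLE_RESULTS if r["port"] in ICS_PORTS]
for r in exposed:
    print(f'{r["ip_str"]}:{r["port"]}  {r["product"]}')
```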
Security entrepreneur Stuart McClure demonstrates PLC hacking using a test rig controlling a pump/compressor to overpressure a bottle until it explodes. With a short script (Python), he overwrites memory addresses to disable safety logic and take control.
"It controls the physical world…"
Ben watches the explosion and reacts like anyone would.
"That actually sounded like a bomb."
The takeaway is structural: many PLCs were designed decades ago to function reliably, not securely.
"They never really considered security from the ground up."
"It's built so foundationally insecure… it makes it incredibly easy for attackers."
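A minimal model of what McClure's script exploits, with invented register addresses: on many legacy controllers, safety logic is just another writable memory location, so anyone who can write registers can disable it.

```python
# Toy PLC with no authentication and no range checks -- like controllers
# designed decades ago for reliability, not security. Register layout is
# invented for illustration.
class ToyPLC:
    def __init__(self):
        # register 0: pump output pressure setpoint (PSI)
        # register 1: safety interlock (1 = trip pump above MAX_SAFE)
        self.registers = {0: 50, 1: 1}
        self.MAX_SAFE = 100

    def write(self, addr, value):
        # Anyone who can reach the device can write any register.
        self.registers[addr] = value

    def scan_cycle(self):
        """One control-loop pass: returns the pressure actually applied."""
        pressure = self.registers[0]
        if self.registers[1] and pressure > self.MAX_SAFE:
            return 0  # interlock trips, pump shut down
        return pressure

plc = ToyPLC()
plc.write(0, 250)        # command a dangerous overpressure
print(plc.scan_cycle())  # 0 -- the interlock still saves the bottle
plc.write(1, 0)          # ...but the interlock is just another register
print(plc.scan_cycle())  # 250 -- safety logic disabled
```

The design flaw is structural, as the episode says: the protocol has no concept of who is allowed to write, so "disable the safety logic" is indistinguishable from a legitimate maintenance command.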
Obama cyber advisor Michael Daniel says the biggest fear is a critical infrastructure attack that triggers unintended consequences, because we don't fully understand interactions in complex systems. He names likely adversaries (including Iran and North Korea) and notes a harsh truth: complete prevention is impossible.
"You cannot prevent all cyber intrusions… Everything is penetrable eventually."
Ben tours a DHS cyber center and meets officials tasked with protecting U.S. infrastructure. They stress that most of what they observe is probing rather than outright attack:
"A lot of what we see is reconnaissance…"
They admit the "act of war" threshold isn't well defined, partly because attribution is hard: attackers route through many countries and hijacked computers.
"They may hop multiple points… enlist computers that they've hijacked…"
Ben concludes this section with a grim policy dilemma: if you can't confidently prove who attacked, deterrence becomes shaky.
"That ambiguity… is one of the obstacles to… a clear deterrent policy."
The story shifts from infrastructure to a corporate and cultural catastrophe: the Sony Pictures hack (starting with an extortion email on Nov 21, 2014). A group calling itself Guardians of Peace appeared on Sony computers; over weeks, huge amounts of data were dumped: films, salaries, corporate files, and most infamously, executives' private emails.
Hollywood Reporter editor Kim Masters describes how the leak terrified the industry—because everyone felt vulnerable, and many realized emails had become a liability.
"There was a sense of instant fear throughout Hollywood…"
Embarrassing exchanges became public, but the harm went beyond gossip: employees' Social Security numbers, medical records, and private data went online.
Ben interviews "Selena," a Sony employee who quit after the hack. She describes receiving a memo warning that anyone who ever worked for Sony might have had data exposed—and then finding her own documents via a simple Google search, listed in plain directory form with file names that clearly identified what they contained.
"I literally went to Google…"
She also describes the surreal operational breakdown inside Sony: computers useless, people joking they were "working analog," and the company falling into party mode because so much normal work stopped.
"We started saying we're working analog."
She recalls a speech where Amy Pascal "challenged the hackers."
"This wasn't going to get us down… we're going to beat you guys."
Pascal later resigned—an example of reputational and leadership fallout.
Selena's emotional bottom line is that workers paid the price for corporate failure.
"They kind of made you feel like it was your fault…"
"It sucks that we're collateral damage… this is basically what it is… it's like a nerd war."
Media attention centered on Sony's film The Interview, a comedy involving the assassination of Kim Jong-un. Threats appeared warning of attacks on theaters; Sony pulled the release. Then President Obama publicly attributed the attack to North Korea, and the U.S. retaliated with sanctions—one of the first times a U.S. president directly blamed a nation-state for a major cyberattack on U.S. soil.
FBI agent Brett Leatherman describes the likely intrusion path, with stolen administrator credentials opening the way to movement through the network:
"Admin credentials are key in going laterally…"
A key feature was the destructive malware wiping data—unusual in corporate hacks.
Even as officials pointed to North Korea, many hackers and experts doubted the evidence. Researcher Marc Rogers argues the attackers' "agenda" changed multiple times (extortion → other messages → movie narrative), suggesting multiple groups piled in—a "hacking party." He also notes that code reuse and opportunism are common.
"That kind of implies multiple different actors to me."
"An opportunist… then… other groups piling in…"
"This isn't an attack that requires nation-state intent… it requires a couple of guys being bored."
Another important lesson emerges: IP addresses in malware aren't definitive proof because attackers can bounce through hacked machines in any country.
"Break into a machine in North Korea… in Russia… in China… These are all things you totally can do."
An attorney representing employees argues Sony didn't take reasonable steps: data should have been encrypted and segregated so it was harder to steal.
"Sony just didn't do what a reasonable company should have done…"
Ben closes the Sony arc with a bleak conclusion: whether the attackers were "North Korea or North Dakota," the deeper issue is vulnerability and the near-impossibility of definitive attribution.
"Ultimately, it doesn't matter whether the hackers came from North Korea or North Dakota…"
"Definitively attributing a cyber attack can be almost impossible."
Ben moves to a different kind of cyber actor: Anonymous, introduced as "a brand, a meme, and a movement," instantly recognizable by the Guy Fawkes mask. Anthropologist Gabriella Coleman describes Anonymous as mysterious, controversial, and radical—but also as a catalyst that gets people energized about activism and whistleblowing.
"They are ghostly… spectral… mysterious…"
"It's a protest movement… about getting people excited for activism…"
Anonymous began in the early 2000s as trolls on 4chan, not as activists.
"Anonymous was never meant to become an activist phenomenon."
A pivotal moment came in 2008 when Anonymous targeted the Church of Scientology after it tried to suppress a leaked Tom Cruise video. Anonymous used prank calls, pizza deliveries, black faxes, and DDoS attacks (flooding traffic to crash a site temporarily).
"A DDoS attack floods a website with so much traffic that it crashes."
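The mechanics behind that quote can be shown with a back-of-the-envelope Python model (all rates invented): once attacker traffic exceeds a server's capacity, legitimate requests get crowded out.

```python
# Crude capacity model of a DDoS: a server answers at most `capacity`
# requests per second, and under overload drops requests
# indiscriminately in proportion to total load.
def served_legit(capacity, legit, attack):
    """Legitimate requests/sec actually served under a flood."""
    total = legit + attack
    if total <= capacity:
        return legit
    return legit * capacity / total

print(served_legit(1000, legit=200, attack=0))        # 200 -> normal day
print(served_legit(1000, legit=200, attack=100_000))  # ~2 -> site "crashes"
```

This is also why DDoS is usually temporary, as the episode notes: when the flood stops, capacity returns and the site comes back.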
During this campaign, Anonymous debated shifting from trolling to earnest protest—and they did, organizing real-world street protests where thousands showed up.
In 2010, after Visa/Mastercard/PayPal cut off donations to WikiLeaks, Anonymous launched Operation Payback, described as one of the largest protest DDoS actions ever—multi-day attacks organized by thousands.
"The largest protest DDoS that the internet has ever seen."
This put Anonymous on the FBI's radar. Former FBI cyber official Shawn Henry emphasizes the investigative challenge: who do you investigate when identity is fluid and globally dispersed?
"Anonymous? I mean, just the very name… Who is it that we're trying to investigate?"
He also draws a legal line: protest exists, but disrupting networks and stealing data is illegal under current law.
"Using the network to disrupt… or to breach data… is not one of those ways."
At a hacker camp, Ben meets individuals tied to Payback and Anonymous culture. One participant frames rights as universal: if you deny rights to someone you dislike, you end up with "privileges," not rights.
"Everybody has them or they're just privileges."
But the series also explains the split: some wanted politics; others wanted "the lulz" (doing things for humor and chaos). That breakaway group was LulzSec. One member gives a warning that becomes a recurring theme in cyberwar:
"There's nothing more dangerous than a bored hacker."
They describe many targets as trivially insecure—sometimes "a three-year-old could do it"—and talk about "justice" logic: if powerful institutions abuse, hacking them feels like payback.
"A three-year-old could do it."
"If they're hacking, then why not let us hack them?"
As law enforcement intensified (2011–2012), members were raided and arrested. Several avoided long prison time, but the movement was shaken by a betrayal: Hector Xavier Monsegur ("Sabu") became an informant after the FBI allegedly pressured him with threats involving his foster children.
"Work for us or we will take your foster kids away…"
Sabu helped lead the FBI to hackers including Jeremy Hammond, who hacked Stratfor (a private intelligence firm), stealing emails and credit card numbers and leaking emails to WikiLeaks. Hammond, speaking from prison, frames Stratfor as mercenaries who monitor and dominate, making them a "natural target." He describes Sabu's cooperation as betrayal.
"It's dishonest. It's betrayal."
He also accuses the FBI of allowing harm to happen to trap people rather than prevent damage.
"They could have prevented all this damage… but… were more interested in trapping people…"
Security expert Robert Hansen argues many governments try to infiltrate Anonymous—and implies it's deeply compromised.
"Most of them."
Another voice insists Anonymous can't die because it's an idea.
"It's an idea. You can't kill or jail it."
A Canadian participant ("Bio") describes DDoS as "poking" a site and argues harsh punishment (especially with expanded surveillance laws) can radicalize rather than deter. For some, this becomes life-defining work.
"This is my life's work… and this ends if they lock me up forever."
The series then pivots from outsiders to the state's most powerful cyber apparatus: the NSA. It explains the NSA's mandate as foreign intelligence—and how, after 9/11, surveillance expanded dramatically, including Americans' communications tied to foreign targets.
Whistleblower Thomas Drake describes NSA's origin as unusually secretive ("signed into existence" rather than created by Congress) and argues that after 9/11 the agency became "unleashed," pursuing mass collection.
"Collect it all so we can know it all."
He frames the post-9/11 mindset as "all means necessary," even at the expense of constitutional rights.
"Who cares about the Constitution?"
Drake's warnings foreshadow Edward Snowden's 2013 leaks (Verizon metadata, PRISM, etc.), and the documentary highlights a lesser-noticed reveal: TAO (Tailored Access Operations)—an elite hacking unit.
Reporter Jörg Schindler (Der Spiegel) describes TAO as the NSA's special hackers—like plumbers who can get into any pipe.
"The highly skilled plumbers of the NSA…"
He uses a memorable analogy:
"Mass surveillance is like… a huge fishing net…"
"TAO… is like using the harpoon…"
Ben meets former NSA executive John Harbaugh (so the subtitles render his name), now running the security firm root9B (a name combining "root access" with "9B," which the episode links to 9/11), reflecting a belief that the next major crisis could be cyber-related.
"The next 9/11 event is most likely going to be cyber related."
He describes leading a tiny team tasked with urgent, high-stakes cyber missions—similar to special operations: short briefings, strict objectives, rapid execution.
"We have a significant national event… I need you guys to do this in the next 12 hours."
ACLU technologist Chris Soghoian explains the NSA competes with Silicon Valley for talent but can't match perks—so it offers something else: lawful power to do things that would otherwise be criminal.
"If you want to hack into systems lawfully, the only game in town is the government."
He critiques the moral framing of wrapping intrusive capability in patriotism.
"Suddenly you get to wrap yourself in the flag…"
The show introduces "implants" (spy devices planted into phones or hardware) and a leaked "ANT catalog" of espionage tools. Security researchers demonstrate recreations, including RAGEMASTER: a tiny chip placed in a video cable that can leak screen information via radar reflections—showing that sometimes cyber operations require physical access, not just remote exploits.
"By measuring that reflection… I can… recover… a screen image…"
A major critique appears: TAO sometimes finds software vulnerabilities (like a Firefox bug used to identify Tor users) and exploits them rather than immediately disclosing them—leaving millions vulnerable until patched.
Citizen Lab–linked researcher Claudio Guarnieri explains how hacking internet backbone infrastructure enables both observation and traffic hijacking—making surveillance systemic.
"Pretend like you're getting a response from Google… while instead you're getting a response from the NSA."
He also notes the NSA spends huge sums (hundreds of millions) on offensive hacking, and warns that the balance between breaking things and protecting things is skewed.
"Once it's broken for one, it's broken for all."
Former Air Force cyber officer Robert M. Lee argues TAO-style operations are more targeted and therefore preferable to mass surveillance—because it requires prioritization.
"Mass surveillance sucks. We need more targeted…"
But journalist Ryan Gallagher pushes back: as encryption spreads, agencies can't passively eavesdrop, so they move toward "active surveillance"—hacking systems. Over time, hacking itself could become a mass technique.
"You're going to see more and more of these hacking attacks…"
And once targeting definitions expand, "legitimate" targets can include friendly leaders (like Mexico's president) or NGOs, raising ethical and political questions.
"They have no respect whatsoever towards foreigners…"
Drake closes this arc with a warning about secret power and history, referencing Orwell's 1984—and insists his whistleblowing was worth it.
"History was at stake."
Next, Ben explores the booming private industry selling surveillance as a product: commercial spyware. The human story starts with Ethiopian journalists in exile working at ESAT, describing themselves as "the voice for the voiceless," and explaining why their government targets media.
"Media is their first enemy… whenever you speak against the government, you are a terrorist."
An ESAT staffer describes accepting a Skype request that looked legitimate (logo similarity), receiving a PDF, opening it, and watching the computer break—classic social engineering.
Citizen Lab researcher Bill Marczak investigates by analyzing the server the spyware communicated with. He explains spyware must "phone home" to send stolen data. A clue appears in an SSL certificate referencing RCS (Remote Control System)—a product linked to the Italian company Hacking Team.
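The fingerprinting step Marczak describes can be sketched as a simple indicator scan. The certificate fields and the indicator list below are invented stand-ins for illustration, not real indicators of compromise.

```python
# Hypothetical indicator table: strings that, if found in a server's
# SSL certificate, suggest a known commercial spyware product.
INDICATORS = {"RCS": "Hacking Team Remote Control System (suspected)"}

def scan_certificate(cert_fields: dict) -> list:
    """Return labels for any known-product strings in the cert text."""
    hits = []
    text = " ".join(str(v) for v in cert_fields.values())
    for needle, label in INDICATORS.items():
        if needle in text:
            hits.append(label)
    return hits

# Hypothetical certificate subject pulled from a suspicious server:
cert = {"commonName": "RCS Server", "organizationName": "Example Srl"}
print(scan_certificate(cert))
```

Real investigations layer many such clues (certificates, IP ranges, response quirks) before naming a vendor, but the episode's point stands: spyware has to phone home, and the infrastructure it phones home to leaves traces.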
The documentary explains that spyware vendors (Hacking Team, FinFisher, Cyberbit, etc.) sell governments turnkey tools that can infect a target's phone or computer and quietly exfiltrate messages, files, and activity.
Citizen Lab traced the infrastructure further and found signals pointing to Ethiopia's INSA (Information Network Security Administration).
"I was like, 'Okay, Google, what is INSA?'… Government of Ethiopia. I was like, 'Okay, this is it.'"
Citizen Lab director Ron Deibert offers a nuanced view: surveillance will always exist—governments inherently surveil—but the key is checks and balances and human rights safeguards.
"Surveillance in and of itself is not a bad thing…"
"The question is: what is that surveillance for?"
Ben interviews Hacking Team's spokesperson Eric Rabe, who argues privacy absolutism is unrealistic, and their tools are necessary because encryption blocks investigators—so they must access data on the device before encryption or after decryption.
"The only way… is by accessing… on the device itself."
He claims they suspended Ethiopia once misuse was discovered and argues they aren't the world's human rights police.
"We're not the principal human rights enforcement agency for the world."
But ESAT staff say colleagues ended up jailed, implying spyware had real consequences.
"Three of our contacts are now in jail."
In July 2015, hacker Phineas Fisher breached Hacking Team and leaked 400GB of internal data—client lists, prices, source code, and sales to questionable regimes. Some celebrated; Hacking Team called it criminal, not Robin Hood.
"No, it's not Robin Hood… It's Al Capone."
Ben negotiates an interview with Phineas Fisher—conducted via text and represented by a puppet. Fisher says the goal wasn't to magically stop the company but to set it back and give targets "breathing room."
"Hopefully it can at least set them back a bit…"
Fisher also argues spyware companies often enable "comic book villain level of evil," targeting journalists and dissidents more than criminals.
"Mostly… investigative journalists, dissidents, political opposition…"
The leak reveals suppliers like Netragard selling zero-days to spyware companies. Ben interviews Adriel Desautels, who admits selling an exploit to Hacking Team and wrestles with responsibility: he believes sellers should avoid supplying actors likely to do harm, but he also argues misuse is primarily the end user's fault—using analogies like Nike shoes or Microsoft Windows.
"When you misuse a zero-day, the end user. Absolutely."
Privacy International's Edin Omanovic compares spyware trade to conventional arms: weapons exports are regulated; surveillance exports often aren't (or weren't), because the tech is new. He estimates the industry around $5B/year—but stresses nobody truly knows because secrecy is the business model.
"Because it's so secretive… nobody actually knows."
EU parliamentarian Marietje Schaake pushes for updating laws to prevent "unintended consequences" and stop the unregulated gray market from growing.
"Our laws are outdated and desperately need to be updated…"
The documentary explains the Wassenaar Arrangement (41 countries; intrusion software added in 2013) and presents criticism: researchers argue it can slow urgent cross-border incident response, potentially helping attackers.
Still, consequences hit Hacking Team: Italy revoked their global export license, forcing per-export approvals outside Europe—hinting the era of unregulated spyware may be tightening.
The section ends with a clear warning: without strong rules, anyone can become a target.
"Anyone can be a target… including me, including you."
Ben turns to Syria, explaining that the Arab Spring was fueled partly by social media—but Syria initially remained quiet under Assad's dictatorship, intense secret police, surveillance, and limited internet access. Then the regime restored access to Facebook and YouTube, and protests surged.
Former U.S. ambassador Robert Ford recalls early absurdity: security forces didn't even understand what Facebook was, asking detainees "Where is the Facebook?" like it was a physical object.
"Where is the Facebook? Where is the Facebook?"
But they learned fast—monitoring social media and computers more effectively over time.
Syrian activist/media worker Dillshad Othman describes the regime hacking opposition accounts. He explains a man-in-the-middle attack in simple terms: the attacker inserts themselves between you and a real service (like Facebook), using a fake SSL certificate so your "secure envelope" goes to them first. They copy your credentials and messages, then forward traffic so you don't notice.
"They pushed… a fake SSL certificate…"
"They replaced it with another one that they have the key for…"
This helped expose activist networks; arrests and torture often followed.
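One defense against this kind of fake-certificate attack is certificate pinning: remember the fingerprint of the genuine server certificate and refuse to proceed when the presented one differs. A minimal Python sketch, with byte strings standing in for DER-encoded certificates:

```python
# Certificate pinning in miniature: compare the SHA-256 fingerprint of
# whatever certificate the "server" presents against a known-good pin.
import hashlib

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

# Pin recorded earlier over a trusted connection (stand-in bytes).
PINNED = fingerprint(b"genuine facebook certificate (stand-in bytes)")

def check_server(cert_der: bytes) -> bool:
    """True only if the presented certificate matches the pinned one."""
    return fingerprint(cert_der) == PINNED

print(check_server(b"genuine facebook certificate (stand-in bytes)"))  # True
print(check_server(b"attacker-issued certificate (stand-in bytes)"))   # False
```

A man-in-the-middle can forge a certificate that browsers accept (if a trusted authority is complicit or compromised), but it cannot forge the original certificate's hash, so a pinned client notices the swap.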
The documentary introduces the Syrian Electronic Army, described as pro-Assad propagandist hackers who defaced sites and hijacked accounts. One famous incident: they took over the Associated Press Twitter and posted a fake White House attack tweet, briefly shaking markets.
Reporter Brian Merchant explains SEA's role as propaganda and intimidation. Ben communicates with "The Pro," who prefers the term "internet soldier" and denies official government affiliation—yet evidence suggests coordination and intelligence-sharing.
"Internet soldier was the preferred term."
Merchant says SEA compromised thousands of accounts and passed information to the regime.
"Absolutely… exchanging information… exchanging targets…"
Later, the alleged leader was indicted by the U.S.
Researcher Eva Galperin describes multiple "Syrian malware teams" targeting opposition members inside Syria and in the diaspora using cheap or free Remote Access Trojans (RATs) like DarkComet. A RAT gives an attacker control like the user: keylogging, screenshots, webcam/mic, and exfiltration.
"Allows them to do anything that you can do on your computer."
She emphasizes the danger: in a war zone, compromise can lead to arrest or death.
"Things got very dangerous… arrested and then killed."
Researcher Nart Villeneuve explains a campaign where attackers posed as attractive women via Skype, built rapport, then sent "pictures" that were actually malware—allowing them to harvest chat histories and documents. Stolen content included annotated maps, unit lists, names, phone numbers, weapon status—tactically valuable intelligence.
"That stuff is extremely valuable in a conflict zone."
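The episode doesn't spell out the delivery mechanism, but a common trick in campaigns like this is the double file extension, where an executable masquerades as an image (e.g., `photo.jpg.exe`). A toy filename check, with illustrative (not exhaustive) extension lists:

```python
IMAGE_DECOYS = {"jpg", "jpeg", "png", "gif"}
EXECUTABLES = {"exe", "scr", "pif", "com", "bat"}

def looks_disguised(filename: str) -> bool:
    """Flag names like 'photo.jpg.exe': an image-looking decoy extension
    sitting immediately before a real executable extension."""
    parts = filename.lower().rsplit(".", 2)
    return (len(parts) == 3
            and parts[1] in IMAGE_DECOYS
            and parts[2] in EXECUTABLES)

print(looks_disguised("me_at_the_beach.jpg.exe"))  # True: executable posing as a photo
print(looks_disguised("me_at_the_beach.jpg"))      # False: an ordinary image name
```

Operating systems that hide known extensions by default make this trick especially effective, since the victim sees only `me_at_the_beach.jpg`.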
Attribution remains difficult, but evidence suggested activity benefiting the Assad regime, possibly linked to Lebanon and Hezbollah support (consistent with regional alliances).
When ISIS rose, it also used online propaganda and cyber tactics to target critics. Activist Rami Abdul Rahman describes being hacked and threatened (including grotesque photo manipulation). Ben explores ISIS's "cyber army," tied to Junaid Hussain (aka "Trick" from Team Poison), a British hacker who radicalized—possibly accelerated by prison—and later became a high-value U.S. target, killed in a 2015 airstrike.
Former associate "MLT" describes the transformation:
"He was just… normal… and then… it seemed to happen overnight."
The arc culminates with activists from "Raqqa is Being Slaughtered Silently" describing ISIS attempts to locate and kill them through phishing and malware.
The broader conclusion is chilling: cyber tools are now widely available, cheap, and effective—meaning digital tactics are no longer only for major states.
"These tools are no longer just in the hands of… governments…"
And in Syria, the biggest danger is often not advanced exploits—it's that ordinary people lack the knowledge and operational security to protect themselves and their networks.
"Security is not about yourself only… it's about the whole network…"
Syria becomes a "terrible window" into future warfare: online and offline battles intertwined, crushing speech and reform.
The final arc focuses on Chinese cyber operations, beginning with the 2015 U.S. Office of Personnel Management (OPM) breach: records of 22 million U.S. federal employees and contractors stolen (including security clearance forms SF-86/SF-85). A U.S. official explains the damage could last decades because the data can be used to target people for recruitment, blackmail, or impersonation.
"Maybe decades."
He says he was personally affected.
"Like a victim."
The documentary distinguishes "classic espionage" (states stealing secrets) from commercial theft (stealing intellectual property to boost domestic companies). President Obama publicly sought agreements with China not to support cyber-enabled theft for commercial advantage.
In January 2010, Google publicly disclosed it had been hacked and blamed China—an operation later called Operation Aurora. Reporter Nicole Perlroth explains why it mattered: not because it was a brand-new technique, but because it was the first major U.S. company willing to publicly accuse China.
"The first time an American company had the courage to stand up… and say, 'We know you did this.'"
She explains that China's hacking aligns with industrial priorities: as China tries to move from manufacturing to innovation, cyberattacks surge toward the sectors China wants to dominate—paint formulas, negotiation strategies, academics, law firms, think tanks, diplomats.
"Every 5 years you see a new industry that China wants to excel at…"
Analyst Dmitri Alperovitch describes the Aurora method: social-engineering targets into clicking malicious links, compromising their machines, and setting up command-and-control infrastructure. Attribution came from tracing the compromised machines and patterns of activity to known Chinese-linked groups (e.g., "Aurora Panda"), which continue to conduct espionage today.
"They want to stay as long as possible because they have a collection priority."
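The attribution logic Alperovitch describes, linking intrusions through shared infrastructure and recurring patterns of activity, can be sketched as a toy grouping of incidents by overlapping command-and-control domains. (All incident data below is invented for illustration.)

```python
# Each incident records the C2 domains its malware beaconed to.
incidents = {
    "intrusion-A": {"update-check.example.net", "cdn-sync.example.org"},
    "intrusion-B": {"cdn-sync.example.org", "mail-relay.example.com"},
    "intrusion-C": {"totally-unrelated.example.io"},
}

def share_infrastructure(a: set, b: set) -> bool:
    """Two incidents are linked if any C2 domain overlaps."""
    return bool(a & b)

def cluster(incidents: dict) -> list:
    """Greedy single-link clustering: add an incident to an existing group
    whenever it shares at least one C2 domain with a member of that group."""
    groups = []
    for name, domains in incidents.items():
        for group in groups:
            if any(share_infrastructure(domains, incidents[m]) for m in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

print(cluster(incidents))  # [['intrusion-A', 'intrusion-B'], ['intrusion-C']]
```

Real attribution layers many more signals (malware code reuse, working hours, targeting) on top of infrastructure overlap, and this greedy pass won't merge two groups a later incident bridges, but the core idea is the same: shared fingerprints tie intrusions to a common actor.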
He frames it as mission-driven military work: attackers persist even after discovery.
"You don't stop because it's hard… you keep going because you were given a mission."
He demonstrates how easy exfiltration can be once you control a victim machine—essentially clicking files and pulling them over.
Mandiant and journalists traced thousands of attacks to a building in Shanghai tied to PLA Unit 61398, making it one of the first widely publicized links between cyberattacks and a specific military unit.
"The first time… pointed… to a very specific unit of the PLA."
Journalist Melissa Chan explains a key cultural/political factor: Chinese companies often have deep party-state ties, even when they appear private—like having a Communist Party presence embedded.
"There's a corner office somewhere where there's a Communist Party representative."
Security expert Ian Amit recalls being openly approached to work with Chinese-linked organizations after speaking at a conference—offered resources and staff at scale. He warns against underestimating Chinese capabilities.
"If you assume any less… you're a fool."
In 2014, the U.S. Department of Justice indicted five PLA officers for hacking U.S. corporations, placing them on the FBI's most wanted list. DOJ official John Carlin describes patterns like a "9-to-5 government job": activity spikes at the start of the workday, pauses for lunch, then resumes.
"A government job."
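The "government job" pattern Carlin describes is, at bottom, a timestamp analysis: convert intrusion times into the suspected operator's local time zone and look for business-hours clustering. A toy version (the timestamps are invented; UTC+8 stands in for China Standard Time):

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

CST = timezone(timedelta(hours=8))  # China Standard Time (UTC+8)

# Invented intrusion timestamps, as recorded in UTC.
events_utc = [
    datetime(2013, 5, 6, 1, 15, tzinfo=timezone.utc),  # 09:15 CST
    datetime(2013, 5, 6, 2, 40, tzinfo=timezone.utc),  # 10:40 CST
    datetime(2013, 5, 6, 6, 5, tzinfo=timezone.utc),   # 14:05 CST
    datetime(2013, 5, 6, 8, 30, tzinfo=timezone.utc),  # 16:30 CST
    datetime(2013, 5, 7, 3, 20, tzinfo=timezone.utc),  # 11:20 CST
]

def local_hour_histogram(events, tz):
    """Bucket events by hour of day in the suspected operator's time zone."""
    return Counter(e.astimezone(tz).hour for e in events)

hist = local_hour_histogram(events_utc, CST)
workday = sum(n for h, n in hist.items() if 9 <= h < 18)
print(f"{workday}/{len(events_utc)} events fall inside a 9-to-6 CST workday")
```

When nearly all activity lands in one time zone's office hours, with a dip at lunch, the schedule itself becomes evidence of salaried, institutional operators.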
He explains the strategy of charging individuals rather than "a country": prove specific actions tied to named people, showing attackers they aren't truly anonymous.
"We were able to figure out the name and the face behind the keyboard."
"They think they're anonymous. And these charges show: you are not."
Ben interviews AMSC president Daniel McGahn, whose company's wind turbine control software was stolen—partly through bribery of an employee who handed code to a state-owned Chinese firm. The fallout was massive: the company's valuation collapsed rapidly, and they continued facing attacks.
"Our stock… completely collapsed."
In a painful twist, a wind turbine sold back to the U.S. used the stolen code—until AMSC replaced the controller/software.
The documentary acknowledges that the U.S. also targets Chinese companies (e.g., NSA operations against Huawei source code), but presents the U.S. position: there is a difference between espionage for national security and theft for commercial profit. Because the theft is economically motivated, the U.S. threatened sanctions to raise its cost, even in cases as absurd as stealing the formula for "the color white" paint.
"They were stealing the formula for the color white."
Nicole Perlroth ends with uncertainty: it may already be too late, and the world may soon see Chinese "carbon copies" of major U.S. brands built on stolen IP.
"Is it too late? Has the IP already left the building?"
Across every episode, one pattern repeats: digital access creates real leverage, whether it's centrifuges shattering, oil companies paralyzed, employees exposed, activists hunted, or economies undermined. The show's most haunting message is that cyber capability spreads fast—and once a technique is revealed, it becomes part of everyone's arsenal.
"The key was turned… and everything in Pandora's box was now out in the open."
If you take one practical lesson from the whole season, it's this: security is never just personal—it's systemic, and the hardest question in cyberwar often isn't "how did they hack it?" but "who did it, and what do we do next?"