
This video explains why Anthropic's new AI model, Mythos, alarmed regulators and bank CEOs: it appears able to find and exploit cybersecurity weaknesses far more effectively than typical tools. Bloomberg reporters describe an urgent, high-level response involving Wall Street, Washington, and even foreign central banks, plus an unusual move by Anthropic to share limited access with competitors through Project Glasswing. The big takeaway: Mythos suggests a "theoretical" AI cyber threat has become practical and immediate, and similar models may appear within 12 months.
The video opens with a stark framing: for years, people have warned that advanced AI could supercharge hacking, but it often sounded hypothetical. Mythos, the speakers say, changed that—by demonstrating that the risk is no longer just an idea.
"We have been hearing about this theoretical threat that AI poses to cybersecurity, and what Mythos has proven is that that threat is a reality."
Right away, the focus lands on "systemically important institutions"—basically, organizations so crucial (like major banks) that if they're hit, the damage can spread across the whole economy.
"There's a huge concern that these systemically important institutions are at the highest risk of being attacked."
In other words: it's not only about one company getting breached—it's about financial stability and national security.
Next, the story moves into Washington, where the response escalates quickly. The video says top banking CEOs were called into an urgent meeting in DC by leading banking regulators—explicitly including the US Treasury Secretary, Scott Bessent.
"Some of the top banking CEOs were invited by the top banking regulators, including Secretary of the Treasury, Scott Bessent, to an urgent meeting in DC about Anthropic and its Mythos AI model."
The presence of Jay Powell, Chair of the Federal Reserve, is highlighted as a major signal that this is not a niche tech concern or political theater—it's being treated as a serious systemic risk.
"The Treasury secretary Scott Bessent called this meeting and he attended it with Jay Powell… and the fact that the two of them were prepared to sit in a room together… shows that this is more than just about politics."
Plain-English meaning: if the Treasury Secretary and the Fed Chair are jointly convening bank CEOs about one AI model, it suggests policymakers believe Mythos could affect the core plumbing of the financial system.
The video explains that Mythos is Anthropic's latest model, and it was originally planned for general release—meaning broadly available. But Anthropic reversed course after learning how it could be used.
"Mythos is Anthropic's latest AI model that it had planned for general release, but has decided to take back and limit after discovering that it could be used for quite nefarious purposes."
Here, "nefarious" basically means criminal or malicious, especially in a hacking context.
Then the capabilities described are blunt: Mythos can apparently identify security weaknesses across a wide range of common software.
"It has the ability to detect vulnerabilities in basically every web browser, every computer system that it's so far been tested on."
The key detail is how it does it: it's not just assisting a human expert. The subtitles emphasize that it was largely autonomous—it could go find the bugs and design a path to exploit them.
"Largely it was autonomously able to go and find bugs and exploits and come up with a plan on how to take action on them and potentially do some harm."
To make that clearer: the fear isn't only that Mythos can spot problems; it can also turn them into an actionable attack plan.
The video then zooms in on why banks and other major institutions are so worried. One reason: large, complex organizations often have legacy systems—older technology that still runs critical operations and may have unpatched weaknesses.
"You're talking potential holes in infrastructure that these banks might have been sitting on for years or decades and maybe haven't patched up."
The subtitles list examples of targets Mythos could potentially help compromise, spanning everyday tech and high-stakes finance:
"That could be finding issues in financial payment services. Web browsers, operating systems, like on iPhones and Android."
And the warning gets even more dramatic: if mishandled, Mythos-like capability could create historically unprecedented cyber risk.
"If it's not used correctly, could essentially lead to some of the greatest cyber risks that humans ever faced."
The underlying idea is that AI could scale hacking—making it faster, cheaper, and more systematic to find and weaponize weaknesses across widely used systems.
A major turning point in the narrative is Anthropic's decision to restrain distribution—even though broad release would likely be valuable and profitable.
"Anthropic came to the decision not to release this very valuable tool for them."
They chose restriction as a safety measure: fewer users, more control, less chance of misuse.
"Rather than letting this out into the world and potentially people using this for malicious intent, they said, we are going to limit the amount of people who can use it."
This frames Anthropic's move as a kind of self-imposed containment: acknowledging the power of the tool and trying to reduce the chance it becomes a "how-to guide" for attackers.
The video stresses that concern spreads beyond the US. It mentions central banks in Canada and the UK, underscoring that financial authorities globally are paying attention.
"The risks don't really stop in the US because we've also had the Central Banks of Canada, the UK as well."
Interestingly, they aren't only warning about exposure—they're also urging Anthropic to use the tool, presumably in controlled ways, to identify weaknesses before attackers do.
"Not just urging them about the risks that are exposed, but urging them to use the tool as much as possible."
So the dilemma becomes: the same capability that could enable attacks could also help defend—if used responsibly and under strict access controls.
One of the most unusual parts of the story is Anthropic's creation of Project Glasswing, described as a controlled program that shares limited access with a select group—including competitors—so they can test and respond to the vulnerabilities Mythos uncovers.
"Anthropic has created this project Glasswing, which essentially gives its technology in a limited form to some of its competitors."
The subtitles name major tech and security players involved:
"We know that Apple, Google, Palo Alto Networks, CrowdStrike, the Linux Foundation, Amazon, are among the 48 or so folks that are part of this Glasswing project."
The video emphasizes how rare this is in a competitive industry: firms that normally guard their intellectual property are now cooperating because the security implications are so serious.
"These are fierce competitors… Anthropic is not only showing them… some IP that hasn't been released yet, they're allowing their competitors to kind of feed back and potentially criticize some of the work that they did."
Another quote broadens the list and describes the cohort as intentionally small:
"We've opened this out to a number of organizations, including Microsoft, AWS, some financial institutions, a very small cohort…"
And the key discovery from this controlled sharing is grim: Mythos revealed that severe weaknesses are already out there.
"Mythos has really showed us that there are a lot of very severe vulnerabilities right now."
In simple terms: Mythos didn't just invent risk—it exposed how fragile parts of the current digital world may already be.
The video notes that banks already spend enormous sums on technology, and major AI companies want to work with them—partly because banks are large customers, and partly because they are high-value targets needing constant defense.
"Banks already spend vast amounts on their technology… and over the past couple of years, the major US banks have upped their spending on tech, partially in an effort to stay ahead of these future cyber threats."
But perhaps the most sobering point comes near the end: Anthropic suggests this isn't a one-off. The speed of AI progress means similar models will likely exist soon, whether built by rivals or bad actors.
"Similar models with similar capabilities will be available within the next 12 months."
And the video adds the obvious but chilling implication: people with malicious intent may be highly motivated to replicate this capability.
"It's highly likely that people with malicious intent would be very interested [in] creating a similar model."
That's why the closing tone is "wake-up call," and why both Washington and Wall Street are portrayed as reacting appropriately, not overreacting.
"It's been a huge wake up call, and that's why you're seeing things like Washington and Wall Street, quite rightly concerned about this."
Chronologically, the story moves from warning (AI cyber risk) to proof (Mythos demonstrating real capability), then to containment (limited release + Glasswing), and finally to urgency (regulators, banks, and global central banks preparing for what comes next). The lasting message is that Mythos may be a preview of a near future where AI can find and operationalize vulnerabilities at scale—and society has a short window to adapt.