On April 7, 2026, Anthropic released Claude Mythos Preview to a restricted set of 40 organizations, none of which are you or anyone you know. That is intentional, and it is worth understanding why before getting to the model itself. Mythos is not a product launch in the conventional sense. It is a signal: that the capabilities race in frontier AI has quietly moved into territory with significant national security implications, and that Anthropic intends to be the company that defines what responsible deployment of those capabilities looks like.
The model is described by Anthropic as larger than the current Opus family, built with a particular emphasis on agentic reasoning and code analysis. In the context of cybersecurity, those two properties together are what matter. A model that can reason agentically through a codebase does not just flag suspicious lines. It traces the downstream consequences of a vulnerability, reasons about exploitability conditions, and proposes remediation without needing a human to prompt it at each step. Pair that with scale, and you have something that changes the economics of vulnerability research in ways the industry has not fully worked through yet.
The announced partner list for Mythos access reads like a who's who of infrastructure-scale technology: Amazon, Apple, and Microsoft are the top-line names, but the full coalition under Project Glasswing extends to Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, NVIDIA, and Palo Alto Networks, among others. The fact that this group is cooperating under a single initiative is itself news. These organizations compete aggressively. That they are sharing vulnerability discovery outputs under a coordinated framework indicates a shared assessment of the threat environment that outweighs their competitive interests.
What Claude Mythos Preview Actually Is
The naming convention here carries meaning. Mythos is a preview, not a release. Anthropic is not making this model available to the general public or even to its existing enterprise customers on request. The restricted access structure is a deliberate choice that reflects the dual-use reality of what Mythos can do: the same model that finds zero-day vulnerabilities to patch them can, in theory, find zero-day vulnerabilities to exploit them. Controlling who has access to that capability before the defensive infrastructure is in place is a reasonable precaution, not a marketing strategy.
What we know about the model's architecture is limited to what Anthropic has disclosed. It is larger than Opus, which places it in rarefied territory given that Opus is already among the highest-capability models available anywhere. The design emphasis is on two things. First, agentic reasoning: the ability to break down complex tasks into subtasks, execute them sequentially or in parallel, revise the plan when intermediate results change the picture, and arrive at conclusions that integrate multiple rounds of analysis. Second, code analysis: reading codebases at a depth and breadth that exceeds what was possible in prior generations of the technology.
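The plan-execute-revise loop that "agentic reasoning" describes can be sketched in miniature. The skeleton below is purely illustrative, not Anthropic's architecture: `analyze` is a stand-in for a model call, and the revision rule (an auth finding triggers a follow-up look at session handling) is a deliberately trivial example of intermediate results changing the plan.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """One unit of analysis work; `result` is filled in when executed."""
    name: str
    result: Optional[str] = None

def analyze(task: Task) -> str:
    # Stand-in for a model call; a real system would invoke the model here.
    return f"findings for {task.name}"

def agentic_review(entry_points: list) -> dict:
    """Plan -> execute -> revise: finish a subtask, then decide whether
    its result warrants adding new subtasks to the plan."""
    plan = [Task(name) for name in entry_points]
    findings = {}
    while plan:
        task = plan.pop(0)
        task.result = analyze(task)
        findings[task.name] = task.result
        # Revision step: an intermediate result expands the plan.
        # (Toy rule: any auth-related finding prompts a session-handling pass.)
        if "auth" in task.name and "session-handling" not in findings:
            plan.append(Task("session-handling"))
    return findings

report = agentic_review(["auth-module", "parser"])
```

The point of the sketch is the `while` loop: the work queue is mutable mid-run, which is what separates this pattern from a fixed pipeline of checks.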
Think of the difference this way. A traditional static analysis tool is like a security guard checking badges at the door. It knows what a valid badge looks like and what an invalid one looks like, and it applies that knowledge at scale. Claude Mythos is closer to a forensic investigator who reads the entire building's history, understands why the badge system was designed the way it was, identifies situations where a technically valid badge grants access it was never intended to grant, and then explains what needs to change and why. The analogy is imperfect, but the gap it describes is real.
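The badge-check side of that analogy is easy to make concrete. A pattern-based scanner like the hypothetical one below flags known-bad constructs line by line; what it cannot do is notice that a call which passes the check grants access it was never intended to grant two functions away.

```python
import re

# Signature list a naive scanner might carry: each pattern is a known-bad
# construct, matched line by line with no understanding of context.
SIGNATURES = {
    "strcpy": re.compile(r"\bstrcpy\s*\("),  # unbounded copy
    "system": re.compile(r"\bsystem\s*\("),  # shell injection risk
    "gets":   re.compile(r"\bgets\s*\("),    # classic overflow
}

def badge_check(source: str) -> list:
    """Flag lines matching a known-bad signature. Blind to anything that
    requires cross-function or architectural reasoning."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

sample = """\
char buf[16];
strcpy(buf, user_input);         /* flagged: matches a signature */
copy_unchecked(buf, user_input); /* missed: same bug, no signature */
"""
print(badge_check(sample))  # [(2, 'strcpy')]
```

The third line of `sample` is the gap the analogy describes: a wrapper with the same flaw sails past the badge check because no signature names it.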
The early results from Project Glasswing are the clearest indicator of what Mythos can actually do under real conditions. Thousands of zero-day vulnerabilities discovered, with many of them having existed in production systems for ten to twenty years. That number is extraordinary not because zero-days are rare to find in aggregate, but because of the age profile. Bugs that survived two decades of auditing are not hiding in obvious places. They are in architectural corners that require sustained contextual reasoning to recognize, exactly the kind of analysis that frontier agentic models are built to do.
The Dual-Use Problem Is Not Theoretical
Every serious conversation about AI and cybersecurity eventually arrives at the same uncomfortable point: the model that makes defense better makes offense better too. This is not a new problem. Network scanners, fuzzing tools, and penetration testing frameworks have all been dual-use since their invention. The difference with Mythos-class models is the magnitude of the asymmetry they introduce.
A human vulnerability researcher working for a well-funded team might find a handful of exploitable flaws in a given codebase over weeks of work. An AI model operating at the speed and scale Mythos appears to operate at could complete an equivalent analysis in hours and do it across a far larger attack surface simultaneously. If that capability is controlled by a responsible defensive partnership, the net effect on security is positive. If something equivalent falls into the hands of a state-sponsored threat actor or a sophisticated criminal group, the defensive establishment has a serious problem.
Anthropic is aware of this framing. The restricted preview structure and the decision to route Mythos's capabilities through a coordinated multi-organization initiative rather than a public API are both responses to it. But the restriction is temporal, not permanent. At some point, Mythos or something equivalent will be more broadly accessible, and the cybersecurity industry needs to have its defensive frameworks in place before that happens.
"The vulnerabilities we are surfacing include some that have existed in deployed systems for fifteen or twenty years. These are not trivial bugs. Many of them are critical severity. The question of why they were not found earlier is, itself, part of the research."
Anthropic Project Glasswing team, via TechCrunch
The current threat environment adds urgency to this question. Cyber retaliation activity has surged following recent geopolitical escalations, with state-sponsored groups actively scanning for unpatched infrastructure. The timing of Mythos's deployment is not coincidental. Finding and patching legacy vulnerabilities before adversaries can exploit them is a race, and Project Glasswing is Anthropic's opening move in it.
Meanwhile, the Chrome WebGPU zero-day disclosed in early April 2026 illustrates exactly what is at stake when the offensive side of this capability equation gets ahead of defense. A use-after-free in a relatively new browser API, actively exploited before a patch was available. The same class of vulnerability, in the wrong hands, with a Mythos-grade discovery engine behind it, represents a qualitatively different threat than what organizations have been planning for.
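Use-after-free is simple to state and easy to recreate in miniature. The toy allocator below (illustrative only, no relation to Chrome's actual heap) recycles freed slot indices the way a real allocator recycles memory; a stale handle kept after `free()` then silently reads whatever object took over the slot, which is the essence of the bug class.

```python
class Arena:
    """Toy allocator that recycles slot indices, mimicking heap reuse."""
    def __init__(self):
        self.slots = []
        self.free_list = []

    def alloc(self, value):
        if self.free_list:               # reuse a freed slot first,
            i = self.free_list.pop()     # as real allocators do
            self.slots[i] = value
            return i
        self.slots.append(value)
        return len(self.slots) - 1

    def free(self, i):
        self.free_list.append(i)         # slot marked reusable; handle i
                                         # held by the caller is now stale

    def read(self, i):
        return self.slots[i]

arena = Arena()
handle = arena.alloc("session-token")    # caller keeps this handle
arena.free(handle)                       # object freed, handle not cleared
other = arena.alloc("attacker-data")     # allocator recycles the same slot
assert arena.read(handle) == "attacker-data"  # stale read: use-after-free
```

In memory-unsafe languages the stale read (or write) hits raw reused memory rather than a Python object, which is what turns the pattern into an exploitation primitive.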
How This Positions Anthropic Against OpenAI and Google
The competitive context around Mythos is worth being specific about. Anthropic, OpenAI, and Google DeepMind are the three companies that operate at the frontier of large language model capability. All three are aware that cybersecurity is one of the highest-stakes application domains for their technology. None of the three wants to be positioned as the company whose model enabled a major attack.
| Company | Cybersecurity Initiative | Model Focus | Access Model |
|---|---|---|---|
| Anthropic | Project Glasswing (40 partner orgs) | Claude Mythos Preview (agentic, code analysis) | Restricted preview, invite-only |
| OpenAI | Cybersecurity grant program, safety bounty | GPT-5.4 and O-series reasoning models | API access with usage monitoring |
| Google DeepMind | Project Zero AI integration | Gemini Ultra, Gemma 4 open-source | Enterprise and research access |
Anthropic's move with Mythos is differentiated on two dimensions. First, the scale of the disclosed results: thousands of zero-days found and in remediation is a concrete outcome, not a research program. Second, the coalition structure: by involving Amazon, Apple, Microsoft, and others as active participants rather than customers, Anthropic has created a stakeholder network that gives it institutional credibility in a space where trust is the key currency.
OpenAI has been more cautious about specific cybersecurity claims at this capability level. Google DeepMind's integration with Project Zero is real but has been characterized as augmenting human researchers rather than running autonomous discovery at scale. Neither positioning is wrong, but they are different bets about how the industry will evolve. Anthropic is betting that demonstrating concrete defensive outcomes at scale, through a controlled and transparent partner program, is the right way to establish authority in this domain.
"What Mythos represents is not just a more capable model. It represents a different theory of how AI gets deployed in security-critical contexts. The restricted preview with named partners is a governance choice as much as a technical one."
Technology policy analyst commentary, via Fortune
What the Partner Coalition Tells You About Coverage
The 12 active partners in Project Glasswing, 11 of them publicly named, are not a random sample of the technology industry. They represent a deliberate coverage map designed to touch the majority of software infrastructure running globally. Understanding who is in the group and why reveals the strategic logic behind the initiative.
- Amazon Web Services: Cloud infrastructure, storage, and networking services running a significant fraction of global internet traffic.
- Apple: Consumer operating systems (macOS, iOS) and the hardware stack beneath them, covering billions of devices.
- Microsoft: Enterprise operating systems, productivity software, and Azure cloud infrastructure.
- Cisco: Network hardware and software, critical for enterprise and carrier-grade routing and switching infrastructure.
- CrowdStrike: Endpoint detection and response, positioned to act on discovered vulnerabilities with direct threat intelligence integration.
- Palo Alto Networks: Network security, firewall infrastructure, and threat intelligence across enterprise environments.
- NVIDIA: GPU firmware and driver stack, increasingly critical as AI workloads run on NVIDIA hardware in data centers worldwide.
- The Linux Foundation: Oversight of the open-source Linux kernel and related projects running on the majority of global server infrastructure.
- Google: Chrome, Android, Google Cloud Platform, and its own open-source contributions across the software stack.
- Broadcom: Semiconductor firmware and networking hardware components embedded in infrastructure globally.
- JPMorganChase: Financial services infrastructure, representing the sector with the highest regulatory exposure to cybersecurity incidents.
The Linux Foundation's presence is particularly significant. Linux underlies Android, a large share of cloud server infrastructure, and most embedded systems from routers to industrial control systems. Vulnerabilities in the kernel or in widely used Linux packages can propagate through this entire stack. Having the Foundation as an active participant means discovered flaws in Linux-based systems have a coordinated remediation path that reaches the open-source maintainer community, not just commercial vendors.
| Infrastructure Category | Glasswing Partner(s) | Approximate Global Reach |
|---|---|---|
| Consumer mobile OS | Apple, Google | Billions of devices (iOS + Android combined) |
| Enterprise OS and productivity | Microsoft | Dominant in enterprise desktop and server market |
| Cloud infrastructure | Amazon, Microsoft, Google | Majority share of global cloud workloads |
| Network hardware and software | Cisco, Broadcom, Palo Alto Networks | Carrier and enterprise backbone globally |
| Open-source server infrastructure | Linux Foundation | Majority of global web servers and cloud hosts |
| AI compute hardware | NVIDIA | Dominant in data center GPU deployments |
Industry Reaction: Cautious Optimism With Unresolved Questions
The security research community's initial response to the Mythos announcement has been a mix of genuine interest and pointed skepticism about things Anthropic has not disclosed. The "thousands of zero-days" claim is striking, but it needs context that the announcement does not fully provide. How are these vulnerabilities being classified? Is Anthropic counting every potential flaw flagged by the model, or only those that have been independently confirmed as exploitable by human researchers? What is the false positive rate?
These are fair questions, and Anthropic's decision to announce results before publishing detailed methodology is a tension point in the security community. The standard practice in responsible disclosure is to have verification and remediation timelines in place before public announcement. Project Glasswing's announcement implies that partner organizations are handling remediation, but the details of how that coordination works across 12 companies, each with its own disclosure processes and regulatory obligations, have not been made public.
"The model's ability to find bugs that have existed for twenty years is, if true at the scale claimed, the most significant development in offensive and defensive security tooling since the widespread adoption of automated fuzzing. The methodology disclosure will matter enormously for how the field responds."
Independent security researcher commentary, via The New York Times
The other unresolved question is about the model's offensive capability assessment. Anthropic has framed Mythos as a defensive tool through Project Glasswing, but the same code analysis and agentic reasoning capabilities that find vulnerabilities can be applied to writing exploits. This is not unique to Mythos. Every capable code model has this dual-use surface. What is different here is the scale and the claimed efficacy in finding real, previously unknown vulnerabilities. The cybersecurity industry will be watching closely to see how Anthropic handles requests for red team access to Mythos and whether the restricted preview model holds as competitive pressure builds.
What Happens Next
Several things are worth tracking in the coming months. The first is the remediation timeline. If thousands of zero-days have been found, the process of patching them, notifying affected vendors, coordinating disclosure, and pushing fixes to users is a significant operational undertaking. How Glasswing's partner network handles this at scale will be a test of the initiative's coordination infrastructure, not just the model's discovery capability.
The second is methodology publication. Anthropic will face pressure to publish technical details about how Mythos analyzes codebases, what kinds of vulnerabilities it is finding, and what the validation process looks like. The security research community runs on reproducibility and peer review. A closed-system initiative that reports results without disclosing method will not sustain credibility indefinitely, even with a compelling partner list.
The third is competitive response. OpenAI and Google DeepMind are not going to let this announcement pass unanswered. The question is whether their answer takes the form of similar partnership programs, direct challenges to Anthropic's methodology, or capabilities demonstrations of their own. The cybersecurity application domain is about to become one of the primary battlegrounds in the frontier model competition, and Mythos has fired the opening shot.
The deeper question is one that the cybersecurity industry has been circling for years: what happens to the economics of vulnerability research when AI can do at scale what previously required dozens of expert humans working for months? If defensive teams get this capability first and manage it responsibly, the net effect is a step-change improvement in software security. If the deployment races ahead of the governance frameworks, the same capability becomes a force multiplier for the most sophisticated offensive actors in the world. Mythos is the clearest version of that question the industry has faced yet. The answer is being written in real time.
Frequently Asked Questions
What is Claude Mythos Preview?
Claude Mythos Preview is Anthropic's largest AI model to date, built with a focus on agentic reasoning and code analysis. It was released on April 7, 2026 to a restricted set of 40 organizations for use in cybersecurity research, specifically through Project Glasswing. It is not publicly available and access is by invitation only.
How many zero-day vulnerabilities has Mythos found?
Anthropic has disclosed that Mythos has surfaced thousands of zero-day vulnerabilities through Project Glasswing, with many of them having existed in deployed systems for ten to twenty years. The company has not published a precise number or detailed methodology for how these are being classified and validated.
Which companies have access to Claude Mythos Preview?
Forty organizations have received preview access. Twelve of these are active participants in Project Glasswing, eleven of them publicly named: Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The twelfth participant and the remaining 28 organizations have not been publicly identified.
Is Claude Mythos the same as Project Glasswing?
No. Claude Mythos Preview is the AI model. Project Glasswing is the coordinated cybersecurity initiative that uses the model. Glasswing provides the organizational framework, partner coordination, and defensive mission. Mythos provides the analytical capability. They are related but distinct.
Can Mythos be used for offensive hacking as well as defense?
The same code analysis and agentic reasoning capabilities that make Mythos effective at finding vulnerabilities for defensive purposes could theoretically be applied to developing exploits. This is the dual-use dilemma at the center of AI cybersecurity. Anthropic's restricted preview structure is designed to limit this risk, but it is an open question in the security community how the governance model will hold as the technology becomes more broadly deployed.