
KEY POINTS
- Google, Microsoft, and xAI have agreed to give the U.S. Commerce Department early access to unreleased AI models for security testing, joining OpenAI and Anthropic.
- Anthropic's Mythos model, which autonomously discovered thousands of zero-day vulnerabilities, accelerated the administration's push for pre-release AI reviews.
- Watch for whether voluntary testing becomes mandatory — any shift toward required pre-release reviews would reshape AI development timelines and valuations across the sector.
Google, Microsoft, and xAI have signed agreements to give the U.S. Commerce Department's Center for AI Standards and Innovation early access to their AI models before public release, CNBC reported. The three companies join OpenAI and Anthropic, which renegotiated their existing partnerships with the center to align with President Trump's AI Action Plan. Every major frontier AI lab in the United States is now participating in some form of government pre-release review.
The speed of this shift is remarkable. Twelve months ago, the Trump administration was dismantling Biden-era AI executive orders and signaling a hands-off approach to the technology. Now Washington is testing unreleased AI systems for cybersecurity vulnerabilities, biosecurity risks, infrastructure threats, and the ability to bypass safety guardrails.
The Mythos Catalyst
The pivot has a specific trigger: Anthropic's Claude Mythos model. Announced in late March after a data leak revealed its existence, Mythos demonstrated capabilities that forced the administration to reconsider its regulatory posture. The model autonomously identified thousands of zero-day vulnerabilities across every major operating system and web browser, including 271 bugs in Mozilla Firefox alone, some of which had gone undetected for more than 15 years.
Anthropic chose not to release Mythos publicly. Instead, it launched Project Glasswing, committing up to $100 million in usage credits to let critical infrastructure partners and open-source developers use the model to secure their systems before similar capabilities proliferate. Anthropic's own team estimates that other labs will develop comparable capabilities within six to eighteen months.
That timeline is what rattled Washington. If Mythos-class models become widely available, attackers gain a dramatic edge, able to find exploitable flaws faster than defenders can patch them. The Pentagon responded by signing agreements with seven major tech companies — Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX — to deploy their AI systems across classified networks.
What This Means for Tech Stocks
For investors, the question is whether voluntary pre-release testing evolves into something more prescriptive. The White House is consulting a group of experts to advise on a possible government review process for new AI models, which Fortune reported would represent a significant departure from the administration's original light-touch approach.
Mandatory pre-release reviews would add weeks or months to AI product launch timelines, directly affecting revenue schedules at companies like OpenAI, Google, Microsoft, and Anthropic. They could also create barriers to entry that favor incumbents — the five labs already participating have a head start in navigating the process.
Cybersecurity stocks have already responded. The awareness that AI can autonomously find and potentially exploit zero-day vulnerabilities at scale has driven fresh capital into companies offering defensive AI security tools. The sector is pricing in a world where the vulnerability discovery cycle compresses from months to hours.
The Road Ahead
The next inflection point is whether the expert advisory group recommends mandatory reviews and whether Congress takes up AI safety legislation this session. The bipartisan appetite for some form of AI oversight has grown since Mythos, but the specifics remain contested. Traders should watch the Commerce Department's first public reports on the models it has tested — any disclosed capability gaps or safety concerns could move individual stocks and the sector broadly. The voluntary framework holds for now, but the direction of travel is clear: frontier AI is no longer shipping without Washington looking under the hood first.