
KEY POINTS

- Google DeepMind, Microsoft, and xAI agreed to let the U.S. government evaluate AI models before public release, joining existing pacts with OpenAI and Anthropic.

- Anthropic's Mythos model — which identified thousands of zero-day vulnerabilities — triggered the Trump administration's reversal from light-touch AI policy to active pre-release vetting.

- The White House is weighing an executive order to formalize mandatory pre-release AI model reviews, a decision that could reshape deployment timelines across the industry.

The Trump administration's light-touch approach to artificial intelligence regulation ended last week. On May 5, the Center for AI Standards and Innovation announced that Google DeepMind, Microsoft, and Elon Musk's xAI signed agreements allowing the U.S. government to evaluate their AI models before public release. OpenAI and Anthropic, which had existing CAISI partnerships from 2024, renegotiated their terms to align with the priorities in President Trump's AI Action Plan. The center has already completed more than 40 evaluations, including assessments of frontier models not yet available to the public.

The catalyst for this policy shift has a name: Mythos. Anthropic's breakthrough model, previewed in April as part of a cybersecurity initiative called Project Glasswing, demonstrated an ability to identify and exploit software vulnerabilities at a scale that alarmed national security officials. According to CNBC, Mythos discovered "thousands of zero-day vulnerabilities, many of them critical," in a matter of weeks. Anthropic limited the rollout to a select group of companies, but the damage to the administration's deregulatory stance was already done.

From Light Touch to Heavy Hand

Vice President Vance and Treasury Secretary Bessent had reportedly questioned tech executives about AI security risks as early as April 10, before Mythos was publicly known. But the administration's formal pivot came fast. Fortune reported that the White House is now weighing an executive order that would give the federal government a formal role in vetting all new AI models before they reach the market. The order would create a working group of tech executives and government officials to design the oversight process — a framework that looks remarkably similar to the Biden-era AI safety proposals this administration previously rejected.

CAISI evaluations involve developers handing over model versions with safety guardrails stripped back so the center can probe for national security risks. That level of access represents a significant concession from companies that have fought to keep model weights and training details proprietary. For the labs, the calculus is straightforward: cooperate now on terms you can influence, or face mandatory rules later that you cannot.

The Pentagon Factor

The defense establishment is moving in parallel. The Pentagon has entered agreements with SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft, and Amazon Web Services to deploy AI capabilities on classified U.S. defense networks. The military's interest in frontier AI creates a dual dynamic: the same models being evaluated for public safety are being integrated into national security infrastructure, which gives the government both the motive and the leverage to demand pre-release access.

Anthropic's position is particularly complex. The company was labeled a national security concern by the administration after refusing to grant the Pentagon unrestricted use of its technology — a designation Anthropic is now challenging in court. The legal battle will set precedent for how much control AI developers retain over deployment decisions when government interests are involved.

What Traders Should Watch

For investors in AI infrastructure and the labs themselves, the regulatory landscape just shifted. Mandatory pre-release vetting would add weeks or months to deployment timelines, potentially slowing the revenue ramp for companies racing to monetize frontier models. It could also create a barrier to entry that favors incumbents with the resources to navigate government review processes. The executive order decision is expected before the end of May. If it lands, every AI company's go-to-market calendar changes.
