That story is finally changing. And the way Apple is changing it tells you a great deal about where the company stands in the AI race, how it is choosing to compete, and what it means for investors watching the broader ecosystem.

On January 12th, Apple and Google announced a multi-year collaboration that will put Google's Gemini models at the core of the next generation of Siri and Apple Intelligence. The deal is not a minor tweak. It is a structural replacement of the reasoning layer that powers Apple's most visible AI product, and it is expected to roll out to users as part of iOS 26.5, with the first developer beta due by the end of March.

What Is Actually Changing

The old Siri operated on a relatively simple architecture. Parse a voice command into a structured intent, match it to a supported action, execute. It was sophisticated pattern matching dressed up as intelligence, and anyone who used it regularly knew the difference.

The Gemini-powered Siri is designed around a fundamentally different foundation. Apple confirmed that Gemini handles the complex reasoning, multi-step planning, and natural language understanding that happens in the background, while Apple retains control over the user interface, data routing, and privacy enforcement. From the user's perspective it still looks like Siri. Under the hood, the cognitive work is being done by one of the most capable models in the world.

The practical implications are significant. The new Siri will be capable of on-screen context awareness, meaning it can read and reference whatever is currently displayed on the device. If a restaurant's page is open in Safari, Siri can make the reservation without the user copying details between apps. If a flight confirmation email is open, Siri can add it to the calendar and set reminders automatically. It can chain up to ten sequential actions from a single natural language request, executing the full workflow rather than stopping to ask for confirmation at each step. This is a category shift, not a feature update.

A more ambitious version, codenamed Campos internally, is planned for iOS 27 and WWDC 2026. That iteration is expected to run on an even more advanced model and support sustained, multi-turn conversations comparable to ChatGPT or Gemini's own consumer interface. According to Bloomberg's Mark Gurman, Apple and Google are even exploring running this version directly on Google's servers rather than Apple's Private Cloud Compute infrastructure, which would represent a significant deepening of the partnership.

Why Apple Chose Google Over Everyone Else

The choice of Google as the partner is itself a story worth examining.

Apple reportedly ran an internal evaluation comparing models from multiple providers before making its decision. Anthropic's Claude was considered technically superior in some dimensions, but the pricing was a dealbreaker. Anthropic was asking for over $1.5 billion annually, a figure that would reshape the economics of any service even at Apple's scale. OpenAI presented a different kind of problem entirely: the company was actively recruiting Apple engineers and pursuing its own hardware ambitions with former Apple designer Jony Ive, making a deep partnership strategically uncomfortable.

Google offered favorable financial terms and a model that had improved dramatically through 2025. There was also an existing commercial relationship to build on. Google already pays Apple approximately $20 billion annually to be the default search engine on iPhone. Adding Gemini to that arrangement deepens a partnership that both companies now have structural incentives to maintain.

The Gemini deal is reported to cost Apple approximately $1 billion per year, a fraction of what Anthropic wanted and modest relative to what Google spends on the same partnership from the other side. For Apple, it is essentially buying world-class AI capability without having to build or maintain the underlying model infrastructure.

The Investor Case for Apple's Approach

While every other major tech company is in a race to spend its way to AI dominance, Apple is running a fundamentally different playbook. The combined 2026 AI capital expenditure across the major hyperscalers is projected at somewhere between $600 billion and $700 billion. Amazon alone plans $200 billion in capex. Alphabet is projecting $175 billion to $185 billion. Meta is targeting $115 billion to $135 billion. Apple's fiscal 2025 capex was $12.7 billion, less than 10% of what Alphabet is planning to spend in a single year.
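The scale gap is easy to quantify from the figures above. A quick back-of-the-envelope check, using the midpoints of the projected ranges where the article gives a range:

```python
# Capex figures cited above, in billions of USD.
hyperscalers_2026 = {
    "Amazon": 200.0,
    "Alphabet": (175 + 185) / 2,  # midpoint of the $175B-$185B projection
    "Meta": (115 + 135) / 2,      # midpoint of the $115B-$135B projection
}
apple_fy2025 = 12.7

# Apple's capex as a share of Alphabet's planned single-year spend.
ratio = apple_fy2025 / hyperscalers_2026["Alphabet"]
print(f"Apple capex vs. Alphabet plan: {ratio:.1%}")  # roughly 7%, under 10%
```

At the midpoint of Alphabet's range, Apple's fiscal 2025 capex works out to roughly 7% of Alphabet's planned 2026 spend, consistent with the "less than 10%" framing.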

Apple's bet is that foundational AI models will eventually commoditize, that ownership of the distribution layer matters more than ownership of the model, and that hardware-software integration at the device level creates a more defensible moat than raw compute spending. If that thesis is right, Apple will emerge from the current AI buildout cycle with its balance sheet intact, its margins healthy, and a product that is competitive with the best in the market through partnerships rather than through internal R&D spending measured in the hundreds of billions.

There is already evidence the approach is working commercially. AI applications generated nearly $900 million for Apple in 2025, with ChatGPT alone responsible for the majority of that figure. The company was positioned to surpass a billion dollars in AI-related revenue in 2026 before the Gemini deal even shipped, simply by collecting its App Store percentage on AI subscriptions purchased on iOS. Apple profits from the AI revolution without having to fund the AI revolution. It is an extraordinarily capital-efficient position.

The risk, of course, is that Apple's model-agnostic approach leaves it permanently behind on the leading edge. If Gemini or any other partner model develops in directions that are incompatible with Apple's privacy architecture or product philosophy, the company could find itself locked out of capabilities that define the next competitive cycle. And Apple's own model development efforts, codenamed Ferret-3 and planned for 2026 and 2027, remain unproven against the frontier.

What This Means for the Broader AI Map

The Apple and Google deal reshapes the competitive landscape in a way that extends well beyond either company's product roadmap.

For Google, landing Gemini as the AI engine inside 2.2 billion active Apple devices is a distribution win of historic proportions. It validates Gemini as a platform-grade technology, not just a consumer chatbot, and gives Google a presence on iOS that goes far deeper than any search deal. The arrangement also gives Google leverage in its ongoing battle with OpenAI for developer and enterprise mindshare.

For OpenAI, it is a direct signal of how competitive the landscape has become. The company that popularized the modern AI assistant has now been passed over by the world's largest hardware platform in favor of a rival that was a distant second in the public consciousness just two years ago.

For Siri specifically, the stakes are clear. The Gemini-powered version arriving in iOS 26.5 will be the most important update to Apple's assistant in its entire fourteen-year history. If it delivers on what the partnership promises, it moves Apple from laggard to legitimate competitor in the AI assistant market almost overnight. If it underdelivers again, the patience of Apple users, investors, and developers will finally run out.

The keys to Siri have been handed to Google. Now comes the part where we find out if it was worth the wait.
