
Apple and Google have signed a formal agreement that allows Apple to test a Google-designed Gemini model as part of its work on a smarter, web-aware Siri. This is not a full integration deal. It’s an evaluation phase inside Apple’s broader plan to give Siri “world knowledge” answers, pulling fresh information from the web and summarising it for users.
Why the timing matters
Reporting from Bloomberg says Apple is building an AI-powered web-search layer for Siri, internally called World Knowledge Answers, targeted for next year. The goal is to combine Siri’s on-device smarts with a search-grade model that can synthesise live web results.
The move arrives just after a major US antitrust ruling on Google. The court did not break up Chrome or Android and did not ban Google’s lucrative default-search payments to Apple, though it ordered data-sharing and curbed exclusivity. That outcome preserves the economics of the Apple-Google search relationship, which helps explain why both sides are comfortable proceeding with a limited Siri test now.
The curious choice: why Google?
At first glance, Apple leaning on Google is ironic. Apple Intelligence launched with big promises but underdelivered on the single thing users wanted most: a more capable, consistently helpful Siri. Apple now wants a web-fluent assistant quickly. For that, Google brings two assets that matter: a colossal, constantly updated index of the web and models trained to summarise it at scale. According to Bloomberg and The Verge, the Gemini variant under evaluation could run on Apple-controlled servers, aligning with Apple’s privacy posture.
9to5Mac’s read mirrors this nuance: no marriage yet. Apple is testing its own models and has evaluated OpenAI and Anthropic for other Siri components. Perplexity, once floated in Apple’s orbit, is reportedly less central now.
Has Apple fallen behind on AI?
Short answer: yes, especially on assistant usefulness. Apple Intelligence introduced writing tools, image generation, and some contextual features, but the true Siri overhaul slipped into the future. That’s why World Knowledge Answers is significant. It is Apple’s attempt to make Siri feel current, pulling in text, images, videos, and local points of interest.
But why Google when OpenAI and others exist?
Gemini isn’t the only capable model. The field includes OpenAI’s GPT-4o and GPT-5-class models, Anthropic’s Claude 3.5/4 family, Meta’s Llama line, Mistral’s Large series, xAI’s Grok, Cohere’s Command models, Alibaba’s Qwen, and more. So why test Google first? Because web answers at global scale require not just a strong model but also real-time retrieval across an immense index. Google’s unique advantage is pairing both reliably. That, plus the prospect of running a custom Gemini under Apple’s privacy guardrails, makes Google an obvious evaluation partner even if Apple keeps a “multi-model” strategy.
Does this link back to the Apple–Google search deal?
Not directly in contract terms, but very much in incentives. The remedies ruling appears to keep Google’s default-search payments to Apple intact, which stabilises a key Apple services revenue stream. A smarter Siri that answers more queries in-line can still funnel plenty of traffic to Google properties, while Apple delivers a better experience. Pragmatic, not romantic.