To compete in the AI era, Google’s mobile platform needs a makeover.
AI gadgets, which we hoped would liberate us from our phones, have turned out disappointingly half-baked. Any illusions that the Humane AI pin or the Rabbit R1 would provide a soothing balm for the constant friction of dealing with our personal tech have evaporated. The season of hot gadgets has ended, and the season of developers is upon us, starting with Google I/O this coming Tuesday.
This period also marks a crucial juncture for Android. The recent major reorganization has brought the Android team and Google’s hardware team together for the first time. The mandate is clear: charge forward at full speed and infuse more AI into more things. Android’s foundational principle was not to favor Google’s own products, but that model began to shift years ago as the hardware and software teams started collaborating more closely. Now the barrier has disappeared, and the AI era has arrived. And if the past 12 months are any indication, it’s going to be somewhat chaotic.
So far, despite the best efforts of Samsung and Google, AI on smartphones has amounted to little more than a few gimmicks. You can transform a picture of a lamp into a different lamp, summarize meeting notes with varying success, and circle something on your screen to search for it. These features are convenient, sure, but they fall short of a unified vision of our AI future. However, Android holds the key to one crucial door that could bring more of these features together: Gemini.
Gemini, an AI-powered alternative to the standard Google Assistant, launched just over three months ago, and it didn’t feel quite ready at the outset. On its first day, it couldn’t access your calendar or set a reminder, which wasn’t very helpful. Google has since added those functions, but it still doesn’t support third-party media apps like Spotify, which Google Assistant has supported for most of the past decade.
But the more I revisit Gemini, the more I see how it’s going to transform how I use my phone. It can remember a dinner recipe and guide me through the steps as I cook. It can understand when I’m asking the wrong question and provide the answer to the one I’m actually seeking (figs are the fruit that contain dead wasp parts, not dates, as I discovered). It can even identify which Paw Patrol toy I’m holding.
Gemini’s true utility will emerge when it can integrate more seamlessly across the Android ecosystem; when it’s built into your earbuds, your watch, and the operating system itself.
Android’s success in the AI era hinges on these integrations. ChatGPT can’t read your emails or your calendars as readily as Gemini; it doesn’t have easy access to a history of every place you’ve visited in the past decade. These are significant advantages, and Google needs every advantage it can get right now. We’ve seen plenty of hints that Apple plans to unveil a much smarter Siri at WWDC this year. Microsoft and OpenAI aren’t standing still either. Google needs to leverage its advantages to deliver AI that’s more than just a gimmick — even if it feels a bit un-Android-like.