- Google plans to bring advanced AI models (Gemini) to Android phones in 2025.
- Google’s Gemini Ultra boasts capabilities similar to OpenAI’s GPT-4 model.
Google announced its plans to integrate its advanced large language models (LLMs), known as Gemini, into Android smartphones starting in 2025. This follows the earlier introduction of Gemini Nano, a smaller version of the LLM, on Pixel and other compatible Android devices.
Unlike Nano, which runs directly on the device, these advanced models reside in data centers and require an internet connection. Embedding them directly on devices would eliminate the need for constant internet access, improving user experience and potentially enhancing privacy.
Google’s Gemini Ultra is reported to have 1.56 trillion parameters, putting it in the same class as OpenAI’s GPT-4 model. This suggests strong capabilities in understanding and generating human-like language, potentially enabling new features and functionalities for Android users.
The smartphone market is experiencing slow sales, with some hoping an “AI supercycle” will revitalize consumer interest. However, analysts remain cautious, citing a lack of substantial innovation compelling enough to persuade users to upgrade from their existing devices.
Despite the uncertain market outlook, several companies are investing heavily in AI-powered chatbots and virtual assistants, including Google with its rebranded Bard (now Gemini) app. These efforts advance the goal, articulated by Google CEO Sundar Pichai, of creating seamless AI agents that can assist users with various tasks across the Google ecosystem.
Integrating advanced AI models into Android phones in 2025 could signal a shift toward a more intelligent and personalized mobile experience.