The Inner Detail


Google I/O 2025: 15 AI Tools and Features announced by Google

Tools announced at Google I/O 2025

Google I/O 2025 was heavily focused on artificial intelligence, unveiling a suite of new tools, features, and services powered by Gemini and other advanced AI models. These announcements span various products, from search and development tools to communication and creative platforms, showcasing Google’s commitment to integrating AI across its ecosystem and making it more accessible and powerful for users and developers alike. Here’s a look at some of the major highlights from the event.

15 AI Tools / Features announced at Google I/O 2025

1. Stitch – AI tool for UI Design

Stitch is a new experimental AI-powered tool from Google Labs designed to streamline UI design and frontend code generation. It leverages the multimodal capabilities of Gemini 2.5 Pro to quickly turn natural language descriptions or image inputs, such as sketches or screenshots, into complex UI designs and corresponding frontend code.

Stitch supports rapid iteration by allowing users to generate multiple design variants and experiment with layouts, components, and styles. It facilitates a seamless transition to development by enabling designs to be pasted directly into Figma for further refinement or exporting clean, functional frontend code (like CSS/HTML). Stitch was announced at Google I/O 2025 and is positioned as a tool to help unlock the magic of app creation for everyone.


2. Gemini Live

Gemini Live is a new capability from Google that allows users to have natural, free-flowing conversations with the Gemini AI model about what they see. This multimodal feature enables interaction by using a phone’s camera or by sharing the device’s screen, creating a dynamic conversational experience. It is rolling out to Gemini Advanced subscribers on Android devices and is available to more Gemini app users, including those on Pixel 9 and Samsung Galaxy S25 devices, with plans to bring it to iOS. Gemini Live supports conversations in over 45 languages.

Users can utilize Gemini Live for various tasks. Pointing the camera allows for real-time assistance with things like organizing cluttered spaces or troubleshooting problems. Screen sharing enables users to get personal shopping advice while browsing online retailers or receive feedback on creative work like photos or blog posts. The AI provides real-time advice and visual guidance during these interactions.

The feature is designed for seamless use, allowing users to interrupt Gemini, pause or stop sharing, and dynamically switch between camera views or the screen.


3. AI Ultra Plan

Google introduced a new premium AI subscription plan called AI Ultra, priced at $249.99 per month. This tier provides access to Google’s most advanced AI models and offers the highest usage limits across various Google AI apps, including Gemini, NotebookLM, Whisk, and the new AI video generation tool, Flow. AI Ultra subscribers can also experience Gemini 2.5 Pro’s new enhanced reasoning mode, Deep Think, designed for handling highly complex math and coding tasks. Early access to Gemini in Chrome, allowing tasks and information summarization directly within the browser, is also included. The plan bundles an individual YouTube Premium subscription and up to 30TB of storage across Google services. It is available in the US now and rolling out to more countries soon.


4. AI Mode in Search

Google announced the rollout of its AI Mode to all Google users in the US, positioning it as the future of Google Search. This new tab brings a Gemini- or ChatGPT-style chatbot directly into the web search experience. Beyond just finding links, AI Mode allows users to quickly surface information, ask follow-up questions, and synthesize data in ways not possible with traditional search. It incorporates features like Deep Search for synthesizing information on large topics and Project Mariner, which enables the AI to click around the web and perform tasks like booking travel or finding deals. AI Mode can also access past search queries and, optionally, user emails and other Google apps for added context. The goal is a more flexible, personalized, and agentic Search experience.


5. Google Astra AI


Project Astra was highlighted as a testing ground for Google’s ambitions for a universal AI assistant. Described as a “concept car” by Google DeepMind research director Greg Wayne, Astra is a multimodal, all-seeing AI prototype. While not a consumer product currently available to the public, it represents Google’s most ambitious ideas about AI’s future capabilities. The project explores how AI can understand and interact with the world through vision and conversation, enabling seamless, dynamic interactions. The core idea is to create an assistant intelligent enough to integrate into users’ lives effectively without being disruptive, requiring a significant leap in AI intelligence to become truly useful in a universal capacity.


6. Imagen 4

Google unveiled Imagen 4, its latest state-of-the-art image generation model. According to Eli Collins, VP of product at Google DeepMind, Imagen 4 combines speed with precision to create stunning images with remarkable clarity in fine details such as intricate fabrics, water droplets, and animal fur. It excels in both photorealistic and abstract styles. A key improvement is its ability to produce more coherent and realistic text within generated images, addressing a common shortcoming of previous models. Imagen 4 is available in the Gemini app, Whisk, and Vertex AI as of May 20th, and will also be integrated into Google Workspace apps like Slides, Vids, and Docs. A faster variant, reportedly up to 10 times quicker than Imagen 3, is planned for release soon.


7. Automatic Password Change in Chrome

Google announced a new feature for Chrome’s password manager that will allow it to automatically change weak or compromised passwords. When Chrome detects a password that has been compromised during sign-in, Google Password Manager will prompt the user and offer the option to fix it automatically. For supported websites, Chrome can generate a strong, unique replacement password and automatically update it for the user. This feature aims to improve both user safety and usability, as manually changing passwords can be annoying, which often prevents people from doing it even when alerted to a weak one. The feature was announced at I/O to allow developers time to prepare their websites and apps before its launch later in the year.
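Chrome's actual generator lives inside Google Password Manager and isn't public, but the core idea of producing a strong, unique replacement password can be sketched in a few lines of Python. This is purely an illustrative example of the technique, not Chrome's implementation:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a strong random password mixing letters, digits, and symbols.

    Uses the `secrets` module (a cryptographically secure RNG) rather than
    `random`, which is predictable and unsuitable for credentials.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Reject weak draws: require at least one lowercase letter,
        # one uppercase letter, and one digit.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw
```

A real password manager additionally stores the result per-site and honors each site's password rules, which is what the "supported websites" caveat in the announcement refers to.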


8. Google Flow for Making AI Videos


Google introduced Flow, a new tool designed to make creating AI-generated videos easier. Flow is being launched alongside Google’s new Veo 3 video generation model, updates to the Veo 2 model, and the Imagen 4 image generation model. With Flow, users can generate eight-second AI-powered clips using prompts like text-to-video or ingredients-to-video, where images are used alongside a prompt to guide the model. Flow also includes “scenebuilder” tools that allow users to stitch multiple generated clips together. The tool is available starting today in the US for subscribers of the Google AI Pro and Google AI Ultra plans, with different tiers offering varying usage limits and access to advanced models like Veo 3 with native audio generation.


9. Agent Mode in Gemini

A significant announcement was the introduction of an ‘Agent Mode’ in the Gemini app. This experimental feature, coming soon to subscribers, allows users to delegate complex tasks and planning to Gemini, which will then carry them out autonomously. As an example, Google explained that a user looking for an apartment could give Gemini the criteria, and the AI could find listings, schedule tours, and create side-by-side comparisons without direct human intervention at each step. This capability represents a shift towards more “agentic” AI capable of handling tasks autonomously. Gemini’s AI agent tool, Project Mariner, which can search the web and handle online tasks, has also been updated and can now oversee up to 10 simultaneous tasks.


10. AI Tool for Coding – Jules

Google announced that Jules, its asynchronous AI coding agent, is now available in public beta to everyone, worldwide, wherever the Gemini model is accessible. Initially introduced in Google Labs in December, Jules is designed to go beyond being just a co-pilot or code-completion tool; it is an autonomous agent that can read code, understand developer intent, and perform tasks independently. Users can submit a task, and Jules will handle it, such as fixing bugs, making updates, tackling a backlog, or even taking a first cut at building a new feature. Jules integrates directly with GitHub, cloning repositories to a Cloud VM and creating Pull Requests for developers to merge. The beta version offers five free tasks a day.


11. Google Beam

Project Starline, Google’s experimental 3D video conferencing technology, has been rebranded and is launching as a new product called Google Beam. Announced at Google I/O 2025, Beam aims to transform traditional 2D video calls into more realistic 3D experiences, giving users the sense of being together in person. It utilizes AI, special light field displays, and multiple cameras to create a live, detailed 3D digital copy of the user. Google has managed to shrink the technology and expects initial devices to be priced comparably to existing videoconference systems. Beam will also support real-time translation features that Google is adding to Google Meet, allowing participants to speak in their native language with AI providing on-the-fly translation. Beam devices for early adopters are expected later in 2025.


12. Virtual Try-on by Google

Google also mentioned that it will allow users to virtually ‘try on’ clothes with AI. Few details were shared about how the feature works, when it will be available, or which platforms it will be integrated with. It appears to be one of several AI-powered features announced at Google I/O 2025 aimed at enhancing user experiences, likely within Google Search or Shopping, leveraging AI’s capabilities for visual applications.


13. Google Meet Real-time Translation

Google announced a new feature for Google Meet that will translate what you say into other languages in real time, leveraging AI during video calls. Google Beam, the rebranded Project Starline, will support the same real-time translation capability: participants can speak in their native language while AI provides an on-the-fly translation, with the original audio lowered in volume. The feature aims to break down language barriers in virtual communication.


14. Stylish Android XR Smart Glasses – Project Aura

Google at I/O 2025 signaled a new era for its smart glasses ambitions, focusing on style and wearability. The company announced partnerships with eyewear brands Warby Parker and Gentle Monster to create smart glasses for its Android XR platform that people would actually want to wear. This includes a strategic partnership with Xreal for a new Android XR device called Project Aura. Described as immersive smart glasses or an “optical see-through XR” device, Project Aura is the second Android XR device announced after Samsung’s Project Moohan. Renders suggest Project Aura will look like normal sunglasses, featuring cameras in the hinges and nose bridge. This collaboration with fashion-forward brands indicates Google is prioritizing design to make smart glasses more appealing for everyday use.


15. AI-Powered Gmail Smart Replies with Context

Google announced significant improvements to Gmail’s smart replies, leveraging AI, specifically Gemini. The key upgrade is the ability for smart replies to pull information and context from your Gmail inbox and Google Drive. This means the AI can now incorporate details beyond the current email thread. The feature will also be enhanced to better match your personal tone and writing style. Building upon a previous update that allowed for longer, more contextual responses within a thread, these changes allow smart replies to theoretically include “a lot more context” from your various Google services. This aims to make suggested replies more accurate, personalized, and helpful, going beyond simple generic responses. The announcement was made at Google I/O 2025.


That wraps up the 15 AI tools and features announced at Google I/O 2025.

Hope the page was useful to you!


(For more such interesting informational technology and innovation stories, keep reading The Inner Detail.)

Kindly add ‘The Inner Detail’ to your Google News Feed by following us!
