A year after introducing Gemini 1.0, Google today officially announced its latest artificial intelligence model, Gemini 2.0. Gemini 2.0 offers multimodal support, can generate images and audio directly, and at the same time serves as a foundation for building AI agents.
Through these AI agents, users can tailor Gemini to perform specific tasks, particularly in enterprise settings. As an example of such an agent, Google showcased Project Astra, demonstrated previously, which uses the phone's camera to describe the user's surroundings and respond to them appropriately.
Google says Gemini 2.0 begins rolling out today to a number of developers and testers. In addition, Google will integrate Gemini 2.0 into Search as well as the Gemini app itself.
The first model in the Gemini 2.0 family is Gemini 2.0 Flash, which focuses on faster responses while retaining multimodal support. It is said to run twice as fast as the previous Gemini 1.5 Flash. Beyond generating text, images, and audio, developers can also have the model execute code, perform searches, and call external APIs.
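To illustrate how a developer might call the new model, here is a minimal sketch assuming the google-genai Python SDK and the experimental "gemini-2.0-flash-exp" model identifier from the initial rollout; the exact model name available to a given account may differ.

```python
# A minimal sketch, assuming the google-genai Python SDK
# (pip install google-genai) and the experimental model identifier
# "gemini-2.0-flash-exp" used during the initial rollout.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Plain text generation.
response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Summarize what native multimodal output means in one sentence.",
)
print(response.text)

# The same endpoint accepts tools; here, Google Search grounding,
# which lets the model support its answer with live search results.
grounded = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="What did Google announce alongside Gemini 2.0?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(grounded.text)
```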
With this launch, Google appears to be targeting 2025 as the year it begins building out a range of AI agents on top of its artificial intelligence model, while also pushing broader adoption of such models.
Users who want to try it can test Gemini 2.0 Flash through the Gemini application by selecting the new model from the model menu.