A year after introducing Gemini 1.0, Google today officially announced its newest artificial intelligence model, Gemini 2.0. Gemini 2.0 offers multimodal support, allowing it to generate images and audio directly, while also serving as a foundation for building AI agents.
Through these AI agents, users can tailor Gemini to carry out specific tasks, particularly for business use. As an example of such an agent, Google showcased Project Astra, demonstrated previously, which uses the phone's camera to describe the user's surroundings and respond accordingly.
Google says Gemini 2.0 will be available to developers and testers starting today. In addition, Google will integrate Gemini 2.0 into Search and into the Gemini app itself.
The first model under Gemini 2.0 is Gemini 2.0 Flash, which focuses on faster responses while retaining multimodal support. Google says it operates at twice the speed of the previous Gemini 1.5 Flash. Developers can use the new model not only to generate text, images, and audio, but also to run code, perform searches, and call other APIs.
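As an illustration, below is a minimal sketch of how a developer might send a text prompt to Gemini 2.0 Flash using Google's google-genai Python SDK. The model identifier "gemini-2.0-flash-exp" and the placeholder API key are assumptions based on the launch-era naming, so check Google's developer documentation for the current values.

```python
# Minimal sketch using Google's google-genai Python SDK (pip install google-genai).
# The model name "gemini-2.0-flash-exp" is assumed from the launch-era naming;
# consult Google's documentation for the current identifier.
from google import genai

# Replace with your own API key from Google AI Studio.
client = genai.Client(api_key="YOUR_API_KEY")

# Send a simple text prompt to the Gemini 2.0 Flash model.
response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Summarize the key features of multimodal AI models in three sentences.",
)

# Print the generated text from the model's response.
print(response.text)
```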
With this introduction, Google appears to be targeting 2025 as the year it begins building various AI agents on top of its artificial intelligence models, as well as broadening how those models are put to use.
Users who want to try it can test Gemini 2.0 Flash through the Gemini application by selecting this latest model from the model menu.