Google Gemini 2.0 Now Available for Testing in Gemini Android Apps



Last week, Google announced Gemini 2.0, its new multimodal language model. It can generate images and audio directly and also serves as a foundation for building AI agents. The first model in the family, Gemini 2.0 Flash, is now accessible to general users via the Android app.


Beta version 15.50 of the Google app gives Gemini users on Android devices the option to choose which language model they want to use, rather than being limited to a single one. In this update, I found three model options: 1.5 Pro (for complex tasks), 1.5 Flash (for everyday, quick help), and 2.0 Flash Experimental (for testing the experimental model).


In my test, selecting 2.0 Flash Experimental displays a notice that the model may not work as expected. According to Google, 2.0 Flash is twice as fast as 1.5 Pro. In my own comparison, asking the question "Who is Thecekodok and who is its founder?", 2.0 Flash responded about 1.5 seconds faster than 1.5 Pro. It also generated more useful information than the Pro model, which only provided a summary.


I also compared it with the 1.5 Flash model and found a similar gap, with 2.0 Flash again responding about 1.5 seconds faster. For now, the new model only supports voice commands, image uploads, and chat; file uploads are not yet supported, and support for the iOS app is still unknown. In the future, Gemini 2.0 will also be able to run code, perform searches, and integrate with other APIs.
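
The comparison above was done through the Android app, but a similar informal speed test can be run programmatically. The sketch below is only an illustration and assumes the experimental model is exposed through the Gemini API under the name "gemini-2.0-flash-exp" and that an API key is available in a GEMINI_API_KEY environment variable; neither detail comes from this article.

# A minimal sketch, assuming access via the google-generativeai Python SDK.
# The model names and the GEMINI_API_KEY variable are assumptions, not
# something confirmed in the article.
import os
import time

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Time the same prompt against two models and print how long each one takes.
prompt = "Who is Thecekodok and who is its founder?"

for model_name in ["gemini-1.5-pro", "gemini-2.0-flash-exp"]:
    model = genai.GenerativeModel(model_name)
    start = time.perf_counter()
    response = model.generate_content(prompt)
    elapsed = time.perf_counter() - start
    print(f"{model_name}: {elapsed:.1f}s\n{response.text}\n")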
