OpenAI today held a special announcement event, introducing several updates to its artificial intelligence offerings. The main focus was the introduction of a new model, named GPT-4o.
The "o" in GPT-4o stands for "omni." The new model is not only faster and more responsive in interaction, but also improves capabilities across text, audio, images, and even video.
With this update, users can ask questions via text, audio, or video, and the model will interact more naturally. Users can interrupt it mid-conversation, and it will take cues such as emotion or facial expressions into account when questions are asked via video. Users can also upload any photo and ask questions about it, receiving far better responses than before.
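For developers, this kind of image-plus-text question could plausibly be sent through OpenAI's existing chat completions API once the model is available there. The sketch below is a minimal, hypothetical example assuming GPT-4o is exposed under the model name "gpt-4o"; the image URL and the question are placeholders, not from the announcement.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# Hypothetical multimodal request: a text question paired with an image URL.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What landmark is shown in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
            ],
        }
    ],
)

print(response.choices[0].message.content)
```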
The new model takes broader context into account when generating a reply, weighing multiple signals in its input, unlike before, when responses were based on only a single prompt.
With this offering, GPT-4o can not only serve as an audio- or video-based virtual assistant that outperforms competitors, but also work as a translator, among various other use cases. OpenAI itself states that GPT-4o is twice as fast as the previous GPT-4 Turbo.
OpenAI will roll out GPT-4o to ChatGPT Plus users first, and will later offer limited access to users of the free version of ChatGPT.