Llama 3.2 Launched With Capability To Understand Images And Run On Smartphones



Meta announced its latest artificial intelligence (AI) model, Llama 3.2, this morning. The new release not only comes in a range of parameter sizes but can also understand images, and it includes versions small enough to run directly on a mobile device.


Llama 3.2 comes in 90B, 11B, 3B, and 1B versions. The 90B and 11B models, with 90 billion and 11 billion parameters respectively, are the two most powerful. They are multimodal AI models that can understand the context of user-uploaded images.



For example, if a picture of an annual sales chart is shared, the model can answer questions such as when sales peaked or dipped. These multimodal capabilities also allow documents containing both text and images to be processed more accurately.


Llama 3.2 3B and 1B are text-only models at this point. Their small size allows them to be installed and run directly on mobile devices such as smartphones. Meta says both models are now supported on Qualcomm and MediaTek chips.
