ImageBind AI Model By Meta Incorporates Six Types Of Data Like Human Senses

Humans typically use all five senses to learn something new. Artificial intelligence (AI), on the other hand, is usually trained on a single type of data; large language models (LLMs), for example, are trained on text. Meta today announced ImageBind, a new open-source AI model that uses six types of data.

ImageBind works with data in the form of images and video, text, audio, thermal (temperature) readings, motion and depth, mapping all of them into a single shared embedding space. According to Meta, this makes it the first AI model to combine six types of data for training. Meta has not yet announced how ImageBind will be used in real-world products.
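As a rough illustration of what a shared embedding space means in practice, here is a minimal sketch based on the example in Meta's open-source ImageBind repository (facebookresearch/ImageBind). The exact import paths may differ between versions of the repository, and the file names are placeholders.

```python
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the pretrained ImageBind model (weights are downloaded on first run).
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

# Example inputs for three of the six supported modalities.
text_list = ["a dog barking", "a car engine", "birdsong"]
image_paths = ["dog.jpg", "car.jpg", "bird.jpg"]   # placeholder file names
audio_paths = ["dog.wav", "car.wav", "bird.wav"]   # placeholder file names

inputs = {
    ModalityType.TEXT: data.load_and_transform_text(text_list, device),
    ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, device),
}

# Every modality is projected into the same embedding space,
# so similarity can be compared directly across modalities.
with torch.no_grad():
    embeddings = model(inputs)

vision_vs_text = torch.softmax(
    embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1
)
audio_vs_vision = torch.softmax(
    embeddings[ModalityType.AUDIO] @ embeddings[ModalityType.VISION].T, dim=-1
)
print(vision_vs_text)
print(audio_vs_vision)
```

Because the embeddings live in one space, an audio clip can be matched against images, or text against audio, without any pairwise model for each combination.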

In theory, ImageBind could be used to generate video from text and, alongside the video, produce audio that matches the generated visuals. In the future, ImageBind could be trained on additional types of data, such as smell or touch sensors, bringing its learning closer to the way humans learn.
