
Menlo Park, California: Meta Platforms has extended multimodal voice and vision support to its personal AI assistant, Meta AI, which can now talk to users and describe what it sees in pictures. "Voice is going to be a way more natural way of interacting with AI than text," Mark Zuckerberg, chief executive of Meta, said while unveiling the voice mode, which features celebrity voices including Awkwafina, John Cena and Kristen Bell. Meta also announced a new version of its mixed reality headset, the Meta Quest 3S, and unveiled Orion, a prototype it described as the world's first holographic augmented-reality glasses, at the Meta Connect event at the company's headquarters in Menlo Park, California on Wednesday.

For developers, the company launched Llama 3.2, a new version of its language and vision models, including its smallest models ever, which can run on mobile devices.


