Google CEO Sundar Pichai publicly congratulated Meta on the launch of its new Llama 4 models, quipping that there is "never a dull day in the AI world!" The exchange reflects a sense of respect and healthy competition in the AI landscape.
Meta’s Llama 4: Scout, Maverick, and Behemoth
Meta unveiled two main models, Llama 4 Scout and Llama 4 Maverick, designed for multimodal tasks: processing text, images, video, and audio. A more advanced version, code-named Behemoth, is still in training and aims to rival the most powerful models on the market.
Integration and Upcoming Event
These models are already being integrated into Meta’s AI assistant across its apps. Meta plans to share more updates at LlamaCon, scheduled for April 29.
Meta has unveiled the Llama 4 family of AI models, aiming to compete with leading generative AI systems such as OpenAI's GPT-4 and Google's Gemini. These models represent Meta's most ambitious step yet in the AI race.
- Scout and Maverick are the two new versions of Llama 4 that are already live and powering Meta's AI assistant across Facebook, Instagram, WhatsApp, Messenger, and Ray-Ban smart glasses. These models are multimodal, meaning they can understand and generate content across multiple formats: text, images, video, and audio. This makes them capable of more natural and intuitive user interactions.
- Behemoth, the code name for Meta’s most powerful version of Llama 4, is still in training. It is designed to surpass current market leaders in AI intelligence and capability. Once released, Behemoth could set a new benchmark for general-purpose AI models.
With Llama 4, Meta is emphasizing openness, integration across its platforms, and rapid iteration. These models are expected to be central to the company’s AI strategy, as it races to keep pace with OpenAI and Google in developing the next generation of artificial intelligence tools.

Curated by Gurdeep Singh. Source: https://news.google.com/