
Source: Meta.ai
Meta has released the first two models from its Llama 4 suite: Llama 4 Maverick and Llama 4 Scout. The Maverick model is designed to be a “workhorse” for general assistant and chat use cases, while Scout is geared more toward “multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast codebases.” The tech giant also introduced Llama 4 Behemoth, an upcoming model that it claims is one of the world’s smartest LLMs. The company additionally mentioned a fourth model, Llama 4 Reasoning, to be released in a few weeks.
Many have been expecting Meta to respond to the “threat” posed by the rise of China’s DeepSeek, which reportedly performs on par with some of the top AI models, including Meta’s previous flagship Llama models, while operating at a fraction of the cost. While these claims remain contested, DeepSeek has undeniably reshaped the AI landscape. It’s no surprise that Meta directly references comparisons with DeepSeek in its blog post introducing Llama 4.
Meta chose to announce the latest release well before the LlamaCon on April 29th. This gives developers plenty of time to download and experiment with the new models. Interestingly, the announcement was made on a Saturday – generally a quiet slot for most tech releases. When asked on Threads about the weekend release of Llama 4, Meta CEO Mark Zuckerberg simply responded, “That’s when it was ready.”

Source: Meta
Based on the specifications, Llama 4 Maverick seems like a highly capable model. With 17 billion active parameters and a total of 400 billion parameters distributed across 128 experts, it utilizes a Mixture of Experts (MoE) architecture to maximize efficiency. It supports multimodal tasks and can be deployed on a single NVIDIA H100 DGX host.
Llama 4 Scout, on the other hand, offers 17 billion active parameters within a total of 109 billion parameters and 16 experts. Its standout feature is a 10 million token context window, enabling it to handle vast amounts of text or large documents effectively. Scout’s efficiency allows it to run on a single NVIDIA H100 GPU.
This is the first time MoE architecture has been used for the Llama models. Using this architecture makes training and answering queries more efficient by dividing tasks into smaller pieces and assigning them to specialized “expert” models that handle specific parts.
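The routing idea can be sketched in a few lines of NumPy. This is a hypothetical toy layer, not Meta’s implementation: a learned gate scores each token against every expert, only the top-k experts actually run (analogous to Maverick activating 17 billion of its 400 billion parameters per token), and their outputs are blended by the gate’s weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, gate_w, expert_ws, top_k=2):
    """Toy Mixture-of-Experts layer: route each token to its top-k
    experts and blend their outputs using softmax gate weights."""
    scores = x @ gate_w                            # (tokens, n_experts)
    top = np.argsort(scores, axis=-1)[:, -top_k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = scores[t, top[t]]
        weights = np.exp(sel) / np.exp(sel).sum()  # softmax over selected experts
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ expert_ws[e])    # each expert is a linear map here
    return out

# Toy setup: 4 tokens, model dimension 8, 4 experts (illustrative sizes only)
d, n_experts = 8, 4
x = rng.normal(size=(4, d))
gate_w = rng.normal(size=(d, n_experts))
expert_ws = rng.normal(size=(n_experts, d, d))
y = moe_layer(x, gate_w, expert_ws)
print(y.shape)  # (4, 8)
```

Only `top_k` of the experts run for any given token, which is why an MoE model’s “active” parameter count can be a small fraction of its total.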
Both Maverick and Scout can now be downloaded from the Llama website and Hugging Face. Additionally, they have been integrated into Meta AI, making them accessible through platforms like WhatsApp, Messenger, and Instagram DMs.
“This is just the beginning for the Llama 4 collection,” stated Meta. “We believe that the most intelligent systems need to be capable of taking generalized actions, conversing naturally with humans, and working through challenging problems they haven’t seen before.”
“Giving Llama superpowers in these areas will lead to better products for people on our platforms and more opportunities for developers to innovate on the next big consumer and business use cases. We’re continuing to research and prototype both models and products, and we’ll share more about our vision at LlamaCon.”
The upcoming Behemoth model is far larger, with 288 billion active parameters, 16 experts, and nearly 2 trillion total parameters. According to Meta’s internal benchmarking, Behemoth outperforms GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro on several evaluations measuring STEM skills.
Notably, none of the Llama 4 models function as full-fledged reasoning models like OpenAI’s o1 and o3-mini. Reasoning models are designed to fact-check their responses and provide more reliable answers, but they typically take longer to generate results compared to traditional, non-reasoning models.
Meta shared that it has fine-tuned the Llama 4 models to adjust how the chatbots handle bias, specifically “contentious sets of political or social topics.” This comes at a time when AI companies face pressure from some political figures, including Elon Musk and David Sacks, who argue that AI chatbots often lean toward certain ideologies. However, AI bias seems to be a persistent and deeply rooted issue, and may not be resolved completely anytime soon.
In a recent Instagram video, Zuckerberg said that the company’s “goal is to build the world’s leading AI, open source it, and make it universally accessible so that everyone in the world benefits. I’ve said for a while that I think open-source AI is going to become the leading model, and with Llama 4, that is starting to happen.”
Meta’s performance claims for the Llama 4 series are based on results from “a broad range of widely reported benchmarks.” Notably, Maverick secured the second spot on LMArena, a well-known benchmarking platform. However, the AI community has been talking about unverified reports suggesting that the Llama 4 model tested may have been “optimized” specifically for the benchmarks, potentially leading to inflated and misleading scores.
Ahmad Al-Dahle, VP of generative AI at Meta, has been quick to deny the rumors. “We’ve also heard claims that we trained on test sets — that’s simply not true and we would never do that,” shared Al-Dahle on his X account.

Source: Shutterstock
Al-Dahle did admit that some users are experiencing “mixed quality” from Maverick and Scout. He attributed these issues to the early release of the models, stating, “Since we dropped the models as soon as they were ready, we expect it’ll take several days for all the public implementations to get dialed in.” Al-Dahle added that the team is actively working on bug fixes and onboarding partners to improve the overall user experience.
Whether Meta tried to game the system or not, the widespread rumors have been enough to cast doubt on the reliability of benchmarks. These platforms have increasingly become AI battlegrounds where companies compete for dominance, rather than venues for objective performance evaluation.