Google has released Gemma 4, the latest iteration of its open-source lightweight language model family, and with it comes a licensing change that has been a long time coming. Gemma 4 is released under the Apache 2.0 license, moving away from the restrictive custom license that governed Gemma 3. For the open-source AI community, this is a significant moment — it removes friction for commercial use, enables broader redistribution, and positions Gemma 4 as a truly open model in a landscape where “open” often comes with fine print.
The Gemma family has grown rapidly since its launch. Google reports that Gemma models have now been downloaded over 400 million times — a remarkable figure for a model family that doesn’t include the largest or most powerful models available. The combination of a relatively small footprint and permissive licensing has made Gemma popular among developers who want to run models locally, fine-tune them for specific domains, or build commercial products without negotiating a license with a model provider.
Gemma 4: What’s New
Gemma 4 comes in four model sizes: 2B, 4B, 26B MoE (Mixture of Experts), and 31B. This range gives developers flexibility to choose a model that fits their hardware constraints and performance requirements. The 26B MoE variant is particularly interesting: because a Mixture-of-Experts model activates only a subset of its parameters for each token, it can approach the capability of its full parameter count while keeping per-token inference cost closer to that of a much smaller dense model.
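A bit of arithmetic makes the MoE efficiency argument concrete. The numbers below are purely illustrative assumptions, not Gemma 4's published architecture: a hypothetical expert count, experts-per-token routing, and parameter split chosen so the totals land near 26B.

```python
# Hypothetical MoE configuration -- NOT Gemma 4's actual architecture.
# Chosen only to illustrate why active parameters < total parameters.
total_experts = 16        # experts per MoE layer (assumed)
active_experts = 2        # experts routed to each token (assumed)
expert_params = 1.4e9     # parameters per expert (assumed)
shared_params = 3.6e9     # attention, embeddings, etc. used by every token (assumed)

# All experts must be stored, but only the routed ones run per token.
total_params = shared_params + total_experts * expert_params
active_params = shared_params + active_experts * expert_params

print(f"stored: {total_params / 1e9:.1f}B, active per token: {active_params / 1e9:.1f}B")
# -> stored: 26.0B, active per token: 6.4B
```

Under these assumed numbers, the model holds 26B parameters in memory but does the compute of a ~6.4B dense model on each token, which is the trade-off the text describes.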
One of the standout features of Gemma 4 is its performance on leaderboards. It currently ranks #3 on the Arena AI text leaderboard among open models — a testament to how far small, openly available models have come. Built from research originating in Google’s Gemini 3, Gemma 4 inherits architectural innovations and training techniques from Google’s most advanced models while being designed to run on consumer-grade hardware.
The model supports over 140 languages, making it one of the most multilingual open models available. It also includes native support for agentic workflows, function calling, and a massive 256K context window — large enough to process entire books or large codebases in a single pass.
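The function-calling pattern mentioned above follows the same loop regardless of model: the application describes available tools, the model emits a structured tool call, and the application parses it, executes the function, and feeds the result back. The sketch below shows that dispatch step in isolation; the tool name, schema shape, and the hard-coded JSON "model output" are all assumptions for illustration, not Gemma 4's actual tool-call format.

```python
import json

# Hypothetical tool registry. The schema style mirrors the JSON-based
# conventions common to function-calling APIs; Gemma 4's exact format
# may differ.
TOOLS = {
    "get_weather": {
        "description": "Look up current weather for a city.",
        "parameters": {"city": {"type": "string"}},
    }
}


def get_weather(city: str) -> str:
    # Stand-in implementation; a real app would call a weather service.
    return f"Sunny in {city}"


def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    name, args = call["name"], call["arguments"]
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return globals()[name](**args)


# Simulated model output -- in a real agent loop this JSON would come
# from the model's response when it decides to invoke a tool.
model_output = '{"name": "get_weather", "arguments": {"city": "Zurich"}}'
print(dispatch(model_output))  # -> Sunny in Zurich
```

In a full agent loop, the string returned by `dispatch` would be appended to the conversation so the model can compose its final answer from the tool result.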
The Licensing Shift: Why Apache 2.0 Matters
Gemma 3 used a custom Google license that, while permissive for many use cases, imposed restrictions that made it incompatible with certain open-source licenses and created uncertainty for commercial applications. Apache 2.0, by contrast, is one of the most widely used open-source licenses in the software industry. It explicitly permits commercial use, modification, distribution, and patent use, with minimal restrictions.
The change signals that Google is serious about positioning Gemma as a genuinely open alternative to models like Llama, Mistral, and Qwen. For enterprises, the Apache 2.0 license means they can integrate Gemma 4 into commercial products without worrying about licensing compliance or unexpected restrictions. For the research community, it means Gemma 4 can be included in datasets, combined with other open models, and redistributed under consistent terms.
This is part of a broader trend in the open-source AI space. As models become more capable and more widely deployed, the question of what “open” really means has become increasingly important. Organizations like the Open Source Initiative have begun defining standards for open source AI, and licenses like Apache 2.0 provide a clear, well-understood framework that aligns with those definitions.
What This Means for the Open-Source AI Ecosystem
Gemma 4’s move to Apache 2.0 is likely to accelerate several trends. First, it will increase competition among open-weight model providers. With Gemma 4 now truly open, developers who previously chose Llama or Mistral for licensing reasons may reconsider. Second, it will drive innovation in fine-tuning and domain adaptation, as the permissive license makes it easier for companies to create specialized versions of Gemma 4 for healthcare, legal, finance, and other regulated industries.
Third, it positions Google more credibly as an open-source contributor rather than just an open-source consumer. Google has contributed to projects like TensorFlow, Kubernetes, and Chromium, but in the AI space, its reputation has been mixed — many of its most powerful models (Gemini, PaLM) remain proprietary. Gemma 4 under Apache 2.0 is a concrete step toward building goodwill in the open-source community.
Comparing Gemma 4 to the Competition
In the 2B to 31B range, Gemma 4 competes directly with models like Mistral’s Small 2, Llama 3.1 Instruct (8B and 70B), and Qwen 2.5. The 256K context window gives Gemma 4 an edge in tasks that require processing long documents or codebases. Its function-calling capabilities make it well-suited for building AI agents and tool-augmented applications.
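To use a 256K window on a real codebase, an application still has to decide which files fit. A minimal sketch of that budgeting step is below; the 4-characters-per-token ratio is a common rough heuristic, not Gemma's actual tokenizer, and the greedy smallest-first strategy is just one simple policy.

```python
# Sketch: greedily pack source files into a fixed context-token budget.
CONTEXT_TOKENS = 256_000   # Gemma 4's advertised context window
CHARS_PER_TOKEN = 4        # rough heuristic, not Gemma's tokenizer


def estimate_tokens(text: str) -> int:
    """Cheap token estimate; a real app would use the model's tokenizer."""
    return len(text) // CHARS_PER_TOKEN + 1


def pack_files(files: dict[str, str], budget: int = CONTEXT_TOKENS) -> list[str]:
    """Select files, smallest first, until the token budget is exhausted."""
    chosen, used = [], 0
    for name, text in sorted(files.items(), key=lambda kv: len(kv[1])):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen


# Toy "repository" with file sizes in characters (hypothetical).
repo = {
    "main.py": "x" * 400_000,    # ~100K tokens
    "utils.py": "y" * 200_000,   # ~50K tokens
    "big.py": "z" * 2_000_000,   # ~500K tokens, too large to include
}
print(pack_files(repo))  # -> ['utils.py', 'main.py']
```

Even with a large window, this kind of budgeting matters: a single oversized file (like `big.py` above) can exceed the whole context, so selection and truncation logic stay part of any long-context pipeline.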
On multilingual benchmarks, Gemma 4’s 140+ language support puts it ahead of most competitors in this size class. Its Gemini 3 research lineage also suggests strong reasoning capabilities, though the final judgment will come as the community benchmarks it against established models on real-world tasks.
Looking Forward
Gemma 4 represents the most serious commitment Google has made to the open-source AI ecosystem. The combination of strong performance, a permissive license, multiple model sizes, and extensive language support makes it a compelling choice for a wide range of applications. Whether it can dethrone Llama as the default choice for open-weight model development remains to be seen, but it has certainly raised the bar for what an open-source AI model can be.
For developers and enterprises evaluating open AI models, Gemma 4 deserves serious consideration. The licensing clarity alone removes a significant source of risk for commercial deployments, and the performance benchmarks suggest you’re not trading capability for openness.