Google has just launched a new AI model called Gemma 3, which is capable of running on both smartphones and high-performance workstations.
More than a year after launching the first two versions of its artificial intelligence (AI) model Gemma, Google has now released an upgrade to the line: Gemma 3.
Gemma 3 comes in four variants with 1 billion, 4 billion, 12 billion, and 27 billion parameters. According to Google, it is the world's best model able to run on a single accelerator, meaning a single GPU or TPU rather than a large cluster of computers.
In theory, this would allow Google's Gemma 3 to run directly on the Pixel phone's Tensor Processing Unit (TPU), similar to how the Gemini Nano model runs locally on a mobile device.
Compared to the Gemini AI model line, Gemma 3's biggest advantage is its open-source nature, which makes it easy for developers to customize, package, and deploy it in mobile applications and desktop software. In addition, Gemma 3 supports more than 140 languages, over 35 of which are available out of the box.
Comparing the performance of Gemma 3 (27-billion-parameter version) with other AI models
In terms of performance, Google confidently claims that Gemma 3 outperforms many other popular AI models, including DeepSeek V3, OpenAI's o3-mini, and Meta's Llama 405B variant.
Google's Gemma 3 AI model can also process text, images, and short videos, allowing it to answer users' questions about that content on demand.
Google’s latest open-source AI models can be deployed locally or through the company’s cloud services, such as Vertex AI. Gemma 3 is now available on Google AI Studio, as well as third-party platforms like Hugging Face, Ollama, and Kaggle.
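For local deployment, one common route is Ollama, which the article mentions as a supported platform. The sketch below is a minimal, hedged example assuming an Ollama server running at its default local port (11434) with a Gemma 3 model pulled (the `gemma3:4b` tag is the 4-billion-parameter variant); the `build_ollama_request` helper name is our own, not part of any API.

```python
import json

def build_ollama_request(prompt: str, model: str = "gemma3:4b") -> str:
    # Build the JSON payload for Ollama's local /api/generate endpoint.
    # stream=False asks for a single complete response instead of chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

if __name__ == "__main__":
    # Assumes `ollama pull gemma3:4b` has been run and the server is up.
    from urllib import request

    req = request.Request(
        "http://localhost:11434/api/generate",
        data=build_ollama_request("Summarize Gemma 3 in one sentence.").encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Keeping the payload construction separate from the network call makes it easy to swap in a different model tag, such as the larger 12B or 27B variants, without touching the request logic.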
Google's third-generation open-source model is part of an industry trend in which companies develop both large language models (LLMs) and small language models (SLMs) in parallel. Google's rival Microsoft is pursuing a similar strategy with its open-source small language model family, Phi.
Small language models like Gemma and Phi are highly efficient in their resource usage, making them well suited to devices like smartphones, and their lower latency makes them a particularly good fit for mobile applications.