May 6, 2025

Modular Platform 25.3: 450K+ Lines of Open Source Code and pip Packaging

Modular Team

Today we’re excited to announce Modular Platform 25.3, a significant advancement for MAX and Mojo as an integrated suite of AI libraries and tools designed to unify the AI deployment workflow. With the extensive expansion of our open source libraries and a new unified pip package, Modular Platform 25.3 makes high-performance AI more accessible and community-driven.

🔓 Open sourcing kernels, the Mojo standard library, and serving APIs

Most notably in Modular 25.3, we’re releasing the MAX AI kernels and the full Mojo standard library under the Apache 2.0 License (with LLVM exceptions). These libraries comprise thousands of lines of high-performance, hardware-optimized Mojo code, with production-grade kernel implementations for a range of CPUs and GPUs, including NVIDIA’s T4, A10G, L40, RTX 40 series, Jetson Orin Nano, A100, H100, and more. Our quantization schemes include Q4_K, Q4_0, Q6_K, GPTQ, and FP8, offering cost-effective performance for demanding workloads.

Additionally, we've open sourced the MAX serving library, our inference server that supports OpenAI-compatible endpoints and enables efficient LLM serving at scale. Together, these releases form an open, extensible AI inference stack free from proprietary GPU dependencies.
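Because the server speaks the OpenAI API, any standard OpenAI client or plain HTTP request can talk to it. As a minimal sketch, assuming a server running locally on port 8000 (the endpoint URL and model name below are illustrative placeholders, not confirmed defaults), a chat completion request might look like this:

```python
import json
from urllib.request import Request, urlopen

# Placeholder values: adjust the host/port and model name to match your deployment.
ENDPOINT = "http://localhost:8000/v1/chat/completions"


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 128,
    }


def send_chat_request(payload: dict) -> dict:
    """POST the payload to the serving endpoint and return the parsed JSON response."""
    req = Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Hypothetical model name for illustration only.
    payload = build_chat_request("my-llama-model", "Say hello in one sentence.")
    response = send_chat_request(payload)
    print(response["choices"][0]["message"]["content"])
```

Since the request and response shapes follow the OpenAI specification, existing OpenAI SDKs can also be pointed at the server by overriding their base URL.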

By making this new code public, we’ve now collectively open sourced more than 450K lines of code from almost 6500 contributions, providing developers with production-grade reference implementations and tools to extend Modular Platform with new algorithms, operations, and hardware targets. With so much novel high-performance code, you can fine-tune your LLMs to “vibe code” with Mojo and harness the full power of modern AI hardware. We’ve found Claude Code to be particularly powerful for writing Mojo when given all this context!

We believe this is the single largest open sourcing of CPU and GPU kernels ever! To celebrate this release, Modular is hosting a community hackathon with AGI House and Crusoe GPU Cloud on May 10 at AGI House in Hillsborough, focusing on programming next-generation GPU kernels with Mojo.

🐍 Simplified installation with pip and Colab support

We’re also releasing a significant improvement to Modular Platform: pip-based packaging. With a simple pip install modular, you gain immediate access to Mojo, our high-performance CPU and GPU programming language, and MAX, our fast AI serving framework. This pip packaging deepens our integration with the Python ecosystem, making it even easier to get started using Mojo and MAX for your critical AI workloads.

Making Modular available on PyPI has been no easy feat, and today we are beyond excited to have native support for pip. As one of only 100 companies with an Enterprise PyPI account, we’re supporting the Python developer ecosystem with a direct financial contribution and are committed to maintaining the highest standards for package quality, security, and documentation.

The release of pip install modular also unlocks an exciting new capability: running MAX models and graphs in Google Colab. The stable modular package supports running full LLMs in Google Colab Pro using A100 or L4 GPU instances. We also provide introductory support in our latest nightly build for GPU programming with MAX graphs on the free tier of Colab using T4 GPUs. Learn more about Colab support in the Modular forum.

The Modular pip packages are available today! Download them now, and be sure to share all the incredible code you develop in the Modular community forum.

📓 An updated usage license for a new era

To make our technology more accessible, we've simplified the community license for Mojo and MAX based on user feedback. Our straightforward tiered structure gives everyone freedom to use both Mojo and MAX with minimal restrictions. Watch our Community Event on the license update here.

For non-production, non-commercial use, everything is free. Use Mojo and MAX on any device, for any research, hobby, or learning project. For production and commercial use, both Mojo and MAX remain free on CPUs and NVIDIA GPUs—we simply ask that you share your success story with us. For commercial deployment on non-NVIDIA accelerators, we provide free access for up to eight devices, with enterprise options available beyond that threshold. Extended use cases on other platforms require agreements with platform vendors on the best way to distribute MAX.

This update reflects our commitment to building in the open, lowering barriers to entry, and putting the community first as we enter a new era of "Build with Modular." Full details of our simplified license are available at modular.com/pricing and modular.com/legal/community.

🚀 Get started today & join us in person!

Ready to explore what Modular can do for your AI projects?

We can't wait to see what you'll build with Modular Platform!

We invite you to join us on May 10 at AGI House in Hillsborough for the “Modular GPU Kernel Hackathon: Hands-on with Mojo,” where you'll get direct experience using Mojo for kernel development. Discover how to write cleaner, faster, and more portable code for the latest GPUs, and transform your AI systems programming.

For our open source releases: developers can contribute to the Mojo standard library today, and we’ll enable community contributions to the rest of the MAX libraries soon. We’re building the review and testing infrastructure to support upstream contributions to the kernel library, and are eager to welcome your contributions.

We’re excited to be at the cutting edge of kernel innovation and to lead the open movement that’s redefining what's possible in high-performance computing. This release reflects our commitment to building a more open AI ecosystem. We invite you to come and build with us!

Modular Team