
Modverse #50: Modular Platform 25.5, Community Meetups, and Mojo's Debut in the Stack Overflow Developer Survey
This past month brought a wave of community projects and milestones across the Modular ecosystem! Modular Platform 25.5 landed with Large Scale Batch Inference, leaner packages, and new integrations that make scaling AI easier than ever. It’s already powering production deployments like SF Compute’s Large Scale Inference Batch API, cutting costs by up to 80% while supporting more than 15 leading models.

Modverse #49: Modular Platform 25.4, Modular 🤝 AMD, and Modular Hack Weekend
Between a global hackathon, a major release, and standout community projects, last month was full of progress across the Modular ecosystem! Modular Platform 25.4 launched on June 18th, alongside the announcement of our official partnership with AMD, bringing full support for AMD Instinct™ MI300X and MI325X GPUs. You can now deploy the same container across both AMD and NVIDIA hardware with no code changes, no vendor lock-in, and no additional configuration!
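That portability carries over to client code as well: an application only ever talks to the OpenAI-compatible endpoint the container exposes, so it can't tell whether the model is running on MI300X/MI325X or NVIDIA GPUs. Here is a minimal sketch; the base URL, API key, and model name are placeholders you would swap for whatever your own deployment reports, not values from the release itself.

```python
# Minimal sketch: the same client code, regardless of whether the MAX
# container underneath is running on AMD or NVIDIA GPUs.
# Placeholders (not from the release notes): base_url, api_key, model name.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # wherever your container is listening
    api_key="EMPTY",                      # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="my-llm",  # placeholder: the model your container is serving
    messages=[{"role": "user", "content": "In one sentence, what is an MI325X?"}],
)
print(response.choices[0].message.content)
```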

Modverse #48: Modular Platform 25.3, MAX AI Kernels, and the Modular GPU Kernel Hackathon
May has been a whirlwind of major open source releases, packed in-person events, and deep technical content! We kicked it off with the release of Modular Platform 25.3 on May 6th, a major milestone in open source AI. This drop included more than 450k lines of Mojo and MAX code, featuring the full Mojo standard library, the MAX AI Kernels, and the MAX serving library. It’s all open source, and you can install it in seconds with pip install modular, whether you’re working locally or in Colab with A100 or L4 GPUs.

Modverse #47: MAX 25.2 and an evening of GPU programming at Modular HQ
MAX 25.2 is turning heads — and for good reason. This powerful update delivers industry-leading performance for large language models on NVIDIA GPUs, all without CUDA. MAX 25.2 builds on the momentum of 25.1 and introduces major upgrades to help you build GenAI systems that are faster, leaner, and easier to scale.

Modverse #46: MAX 25.1, MAX Builds, and Democratizing AI Compute
We recently introduced MAX 25.1, a major leap forward in AI development. This release enhances agentic and LLM workflows, introduces MAX Builds as a central hub for GenAI models and application recipes, and debuts a new GPU programming interface. Developers can now take advantage of GPU-accelerated embeddings, OpenAI-compatible function calling, structured output generation, and high-performance LLM optimizations like paged attention and prefix caching for improved efficiency.
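As a concrete illustration of the OpenAI-compatible function calling, here is a short sketch against a local MAX endpoint. The server address, the model name, and the get_weather tool are illustrative assumptions, not part of the release; the tool schema itself is just the standard OpenAI tools format.

```python
# Sketch of OpenAI-compatible function calling against a local MAX endpoint.
# Assumptions (not from the release notes): the server address, the model
# name "my-llm", and the get_weather tool are placeholders for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="my-llm",
    messages=[{"role": "user", "content": "What's the weather in Lisbon right now?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model decided to call the tool; arguments arrive as a JSON string.
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```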

MAX is here! What does that mean for Mojo🔥?
When we started Modular, building a programming language wasn't our goal; it ended up being the solution to a set of problems. Specifically, as we were building our platform to unify the world’s ML/AI infrastructure, we realized that programming across the entire stack was too complicated.
Easy ways to get started
Get started guide
With just a few commands, you can install MAX as a conda package and deploy a GenAI model on a local endpoint (a quick smoke test of that endpoint is sketched below).
400+ open source models
Browse Examples
Follow step-by-step recipes to build agents, chatbots, and more with MAX.
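Once a model is serving locally, a quick way to confirm the endpoint is up is to send it a single chat request over plain HTTP. This is only a sketch: the port and model name below are placeholders, and it assumes the standard OpenAI-style chat completions route; use whatever your local deployment actually reports.

```python
# Smoke test for a locally deployed MAX endpoint (placeholder port and model
# name; assumes the standard OpenAI-style /v1/chat/completions route).
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "my-llm",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```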