
Modverse #49: Modular Platform 25.4, Modular 🤝 AMD, and Modular Hack Weekend
Between a global hackathon, a major release, and standout community projects, last month was full of progress across the Modular ecosystem! Modular Platform 25.4 launched on June 18th, alongside the announcement of our official partnership with AMD, bringing full support for AMD Instinct™ MI300X and MI325X GPUs. You can now deploy the same container across both AMD and NVIDIA hardware with no code changes, no vendor lock-in, and no additional configuration!

How is Modular Democratizing AI Compute? (Democratizing AI Compute, Part 11)
Given time, budget, and expertise from a team of veterans who’ve built this stack before, Modular set out to solve one of the defining challenges of our era: how to Democratize AI Compute. But what does that really mean—and how does it all add up?

Modular 25.4: One Container, AMD and NVIDIA GPUs, No Lock-In
We're excited to announce Modular Platform 25.4, a major release that brings the full power of AMD GPUs to our entire platform. This release marks a major leap toward democratizing access to high-performance AI by enabling seamless portability to AMD GPUs.

Modular + AMD: Unleashing AI performance on AMD GPUs
Modular is excited to announce a partnership with Advanced Micro Devices, Inc. (AMD), one of the world’s leading AI semiconductor companies. This partnership marks the general availability of the Modular Platform across AMD's GPU portfolio, a significant milestone in heterogeneous AI computing infrastructure. Effective immediately, developers can deploy the Modular Platform on AMD's flagship datacenter accelerators, including the MI300 and MI325 series.

Modverse #48: Modular Platform 25.3, MAX AI Kernels, and the Modular GPU Kernel Hackathon
May has been a whirlwind of major open source releases, packed in-person events, and deep technical content! We kicked it off with the release of Modular Platform 25.3 on May 6th, a major milestone in open source AI. This drop included more than 450k lines of Mojo and MAX code, featuring the full Mojo standard library, the MAX AI Kernels, and the MAX serving library. It’s all open source, and you can install it in seconds with pip install modular, whether you’re working locally or in Colab with A100 or L4 GPUs.
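As a minimal sketch, the pip-based install mentioned above looks like this (the version check assumes the package exposes a `max` CLI; consult the Modular docs for current options):

```shell
# Install the Modular Platform (Mojo, MAX, and the serving library) from PyPI
pip install modular

# Confirm the install (assumes the package provides a `max` CLI entry point)
max --version
```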
Start building with Modular
Quick start resources
Get started guide
With just a few commands, you can install MAX as a conda package and deploy a GenAI model on a local endpoint.
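Once a model is serving on a local endpoint, it can be queried like any OpenAI-compatible API. A hypothetical sketch of building such a request body (the endpoint URL and model name below are illustrative placeholders, not values from this post):

```python
import json

# Assumed local endpoint exposed by the serving layer (placeholder URL)
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> str:
    """Build the JSON body for an OpenAI-compatible chat-completion call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# Placeholder model name; substitute whichever model you deployed locally.
body = build_chat_request("modularai/Llama-3.1-8B-Instruct-GGUF", "Hello!")
print(body)
```

The body could then be POSTed to `ENDPOINT` with any HTTP client; building the payload separately keeps the request shape easy to inspect and test.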
Browse open source models
500+ supported models, most of which have been optimized for lightning-fast performance on the Modular Platform.
Find examples
Follow step-by-step recipes to build agents, chatbots, and more with MAX.