Blog

Democratizing AI Compute Series

Go behind the scenes of the AI industry with Chris Lattner

News

Product

The path to Mojo 1.0

While we are excited about this milestone, this of course won’t be the end of Mojo development! Some commonly requested capabilities for more general systems programming won’t be completed for 1.0, such as a robust async programming model and support for private members. Read below for more information on that!

December 5, 2025 / Modular Team

News

Product

Modular 25.7: Faster Inference, Safer GPU Programming, and a More Unified Developer Experience

Today, we’re excited to release Modular Platform 25.7, an update that deepens our vision of a unified, high-performance compute layer for AI. With a fully open MAX Python API, an experimental next-generation modeling API, expanded hardware support for NVIDIA Grace superchips, and a safer, more capable Mojo GPU programming experience, this release moves us closer to an ecosystem where developers spend less time fighting infrastructure and more time advancing what AI can do.

November 20, 2025 / Modular Team

News

Product

Modular 25.6: Unifying the latest GPUs from NVIDIA, AMD, and Apple

We’re excited to announce Modular Platform 25.6, a major milestone in our mission to build AI’s unified compute layer. With 25.6, we’re delivering the clearest proof yet: a unified compute layer that spans from laptops to the world’s most powerful datacenter GPUs.

September 22, 2025 / Modular Team

News

Product

Modular Platform 25.5: Introducing Large Scale Batch Inference

Modular Platform 25.5 is here, and introduces Large Scale Batch Inference: a highly asynchronous, at-scale batch API built on open standards and powered by Mammoth. We're launching this new capability through our partner SF Compute, enabling high-volume AI performance with a fast, accurate, and efficient platform that seamlessly scales workloads across any hardware.

August 5, 2025 / Modular Team

News

Product

AI Agents for AWS Marketplace

Modular Inc. announces that MAX High-Performance GenAI Serving and the MAX Code Repo Agent are now available in AWS Marketplace's new AI Agents and Tools category, delivering 10x performance improvements and streamlined AI deployment for enterprises.

July 16, 2025 / Modular Team

News

Product

Modular 25.4: One Container, AMD and NVIDIA GPUs, No Lock-In

We're excited to announce Modular Platform 25.4, a major release that brings the full power of AMD GPUs to our entire platform. By enabling seamless portability to AMD GPUs, this release marks a significant step toward democratizing access to high-performance AI.

June 18, 2025 / Modular Team

News

Product

Introducing Mammoth: Enterprise-Scale GenAI Deployments Made Simple

Introducing Mammoth, a distributed AI serving tool built specifically for the realities of enterprise AI deployment.

June 10, 2025 / Modular Team

News

Product

Modular Platform 25.3: 450K+ Lines of Open Source Code and pip Packaging

Announcing Modular Platform 25.3: our largest open source release, with 450k+ lines of high-performance AI kernels, plus pip install modular.

May 6, 2025 / Modular Team

News

Product

A New, Simpler License for MAX and Mojo

New licensing terms for MAX and Mojo that allow unlimited non-commercial use.

April 23, 2025 / Modular Team

News

Product

MAX 25.2: Unleash the power of your H200s, without CUDA!

We’re excited to announce MAX 25.2, a major update that unlocks industry-leading performance on the largest language models, built from the ground up without CUDA.

March 25, 2025 / Modular Team

Build the future of AI with Modular

  • Get started guide: Install MAX with a few commands and deploy a GenAI model locally.
  • Browse open models: 500+ models, many optimized for lightning-fast performance.