Blog


Democratizing AI Compute Series

Go behind the scenes of the AI industry with Chris Lattner

News, Product

Modular 26.1: A Big Step Towards More Programmable and Portable AI Infrastructure

Today we’re releasing Modular 26.1, a major step toward making high-performance AI computing easier to build, debug, and deploy across heterogeneous hardware. This release is focused squarely on developer velocity and programmability—helping advanced AI teams reduce time to market for their most important innovations.

January 29, 2026 / Modular Team

News, Product

The path to Mojo 1.0

While we are excited about this milestone, this of course won’t be the end of Mojo development! Some commonly requested capabilities for more general systems programming won’t be completed for 1.0, such as a robust async programming model and support for private members. Read below for more information on that!

December 5, 2025 / Modular Team

News, Product

Modular 25.7: Faster Inference, Safer GPU Programming, and a More Unified Developer Experience

Today, we’re excited to release Modular Platform 25.7, an update that deepens our vision of a unified, high-performance compute layer for AI. With a fully open MAX Python API, an experimental next-generation modeling API, expanded hardware support for NVIDIA Grace superchips, and a safer, more capable Mojo GPU programming experience, this release moves us closer to an ecosystem where developers spend less time fighting infrastructure and more time advancing what AI can do.

November 20, 2025 / Modular Team

News, Product

Modular 25.6: Unifying the latest GPUs from NVIDIA, AMD, and Apple

We’re excited to announce Modular Platform 25.6 – a major milestone in our mission to build AI’s unified compute layer. With 25.6, we’re delivering the clearest proof yet of that mission: a unified compute layer that spans from laptops to the world’s most powerful datacenter GPUs.

September 22, 2025 / Modular Team

News, Product

Modular Platform 25.5: Introducing Large Scale Batch Inference

Modular Platform 25.5 is here, and introduces Large Scale Batch Inference: a highly asynchronous, at-scale batch API built on open standards and powered by Mammoth. We're launching this new capability through our partner SF Compute, enabling high-volume AI performance with a fast, accurate, and efficient platform that seamlessly scales workloads across any hardware.

August 5, 2025 / Modular Team

News, Product

AI Agents for AWS Marketplace

Modular Inc. announces that MAX High-Performance GenAI Serving and MAX Code Repo Agent are now available in AWS Marketplace's new AI Agents and Tools category, delivering 10x performance improvements and streamlined AI deployment for enterprises.

July 16, 2025 / Modular Team

News, Product

Modular 25.4: One Container, AMD and NVIDIA GPUs, No Lock-In

We're excited to announce Modular Platform 25.4, a major release that brings the full power of AMD GPUs to our entire platform. This release marks a major leap toward democratizing access to high-performance AI by enabling seamless portability to AMD GPUs.

June 18, 2025 / Modular Team

News, Product

Introducing Mammoth: Enterprise-Scale GenAI Deployments Made Simple

Introducing Mammoth, a distributed AI serving tool built specifically for the realities of enterprise AI deployment.

June 10, 2025 / Modular Team

News, Product

Modular Platform 25.3: 450K+ Lines of Open Source Code and pip Packaging

Announcing Modular Platform 25.3: our largest open source release, with 450k+ lines of high-performance AI kernels, plus pip install modular.

May 6, 2025 / Modular Team

News, Product

A New, Simpler License for MAX and Mojo

New licensing terms for MAX and Mojo that allow unlimited non-commercial usage.

April 23, 2025 / Modular Team

  • Series

    Democratizing AI Compute Series

    Go behind the scenes of the AI industry in this blog series by Chris Lattner. Trace the evolution of AI compute, dissect its current challenges, and discover how Modular is raising the bar with the world’s most open inference stack.

    11 part series

  • Series

    Matrix Multiplication on Blackwell

    Learn how to write a high-performance GPU kernel on Blackwell that offers performance competitive to that of NVIDIA's cuBLAS implementation while leveraging Mojo's special features to make the kernel as simple as possible.

    4 part series


Build the future of AI with Modular

View Editions
  • Get started guide

    Install MAX with a few commands and deploy a GenAI model locally.

    Read Guide
  • Browse open models

    500+ models, many optimized for lightning-fast performance.

    Browse models