Blog

Modverse #54: From GTC to Edinburgh, a Community Building Momentum
This edition covers one of the busiest stretches in Modular's recent history: four days at GTC, a new office on another continent, fresh community builds, and a release that expands what MAX and Mojo🔥 can do. Here's everything that's been happening across the ecosystem.

Modular + AMD: Unleashing AI performance on AMD GPUs
Modular is excited to announce a partnership with Advanced Micro Devices, Inc. (AMD), one of the world’s leading AI semiconductor companies. This partnership marks the general availability of the Modular Platform across AMD's GPU portfolio, a significant milestone in heterogeneous AI computing infrastructure. Effective immediately, developers can deploy the Modular Platform on AMD's flagship datacenter accelerators, including the MI300 and MI325 series.

Modular partners with Amazon Web Services (AWS) to bring MAX to AWS services
Today, Modular is excited to announce a partnership with Amazon Web Services (AWS), the world's largest cloud provider. Together, we are bringing the benefits of the MAX Platform to AWS production services everywhere, powering innovative AI features for billions of users around the world.

Modular to bring NVIDIA Accelerated Computing to the MAX Platform
The era of Generative AI is upon us. Companies around the world are exploring how it can transform their businesses, yet most are finding it challenging to economically and efficiently deploy these larger and more complex models into production.

Welcome Mostafa Hagog to Modular
We are happy to welcome Mostafa Hagog to Modular, who recently joined to lead our high-performance numeric kernels, graph compiler, and low-level heterogeneous runtime teams! These technology areas are critical low-level components of our AI Engine, and are directly responsible for delivering state-of-the-art performance across many categories of hardware.

Democratizing Compute
Go behind the scenes of the AI industry in this blog series by Chris Lattner. Trace the evolution of AI compute, dissect its current challenges, and discover how Modular is raising the bar with the world’s most open inference stack.

Matrix Multiplication on Blackwell
Learn how to write a high-performance GPU kernel on Blackwell with performance competitive with NVIDIA's cuBLAS implementation, while leveraging Mojo's special features to keep the kernel as simple as possible.
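The core idea behind kernels like this is tiling: partitioning the matrices into blocks so each unit of work reuses data locally. Below is a minimal, language-agnostic sketch in plain Python (not the Mojo kernel from the post); the `tile` size and loop structure are illustrative, and a real Blackwell kernel would additionally use tensor cores, shared memory, and asynchronous copies.

```python
def matmul_tiled(A, B, n, tile=2):
    """Multiply two n x n matrices (lists of lists) tile by tile.

    Each (i0, j0, k0) block of work is the rough analogue of what a
    GPU thread block computes on shared-memory tiles.
    """
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, n, tile):
            for k0 in range(0, n, tile):
                # Accumulate one tile's partial products into C.
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, n)):
                        for k in range(k0, min(k0 + tile, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_tiled(A, B, 2))  # [[19.0, 22.0], [43.0, 50.0]]
```

The payoff of tiling on real hardware is memory locality: each tile of `A` and `B` is loaded once and reused across many multiply-adds, which is what lets hand-written kernels approach cuBLAS throughput.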

Structured Mojo Kernels
Learn how Mojo simplifies GPU programming with modular kernel architecture, compile-time abstractions, and zero-cost performance across modern GPU hardware.

Software Pipelining for GPU Kernels
Explore software pipelining for GPU kernels from first principles. We formalize dependencies as a graph, solve for the optimal schedule with a constraint solver, and show how it all integrates into MAX via pure Mojo.
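The "dependencies as a graph" step can be illustrated with a tiny stand-in: given explicit dependencies between the ops of a loop body, derive the earliest cycle each op can issue. This Python sketch uses a greedy as-soon-as-possible pass rather than the constraint solver the post describes, and the op names are illustrative, not from MAX.

```python
def asap_schedule(deps):
    """Assign each op the earliest cycle after all its dependencies.

    `deps` maps each op to the list of ops it depends on, listed in
    topological order (dependencies before dependents).
    """
    cycle = {}
    for op, preds in deps.items():
        cycle[op] = 0 if not preds else 1 + max(cycle[p] for p in preds)
    return cycle

# A three-stage GPU loop body: load -> compute -> store.
deps = {"load": [], "compute": ["load"], "store": ["compute"]}
print(asap_schedule(deps))  # {'load': 0, 'compute': 1, 'store': 2}
```

Software pipelining then overlaps these stages across iterations: while iteration i computes, iteration i+1 can already be loading, which is what hides memory latency in the real kernels.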
