Blog


Democratizing AI Compute Series

Go behind the scenes of the AI industry with Chris Lattner

Latest

Industry
Why do HW companies struggle to build AI software? (Democratizing AI Compute, Part 9)
April 22, 2025 / Chris Lattner

Community
Modverse #47: MAX 25.2 and an evening of GPU programming at Modular HQ
April 17, 2025 / Caroline Frasca

Industry
What about the MLIR compiler infrastructure? (Democratizing AI Compute, Part 8)
April 8, 2025 / Chris Lattner

Industry
What about Triton and Python eDSLs? (Democratizing AI Compute, Part 7)
In this post, we break down how Python eDSLs work, weigh their strengths and weaknesses, and take a close look at Triton.
March 26, 2025 / Chris Lattner

Product
MAX 25.2: Unleash the power of your H200s – without CUDA!
We're excited to announce MAX 25.2, a major update that unlocks industry-leading performance on the largest language models, built from the ground up without CUDA.
March 25, 2025 / Modular Team

Industry
What about TVM, XLA, and AI compilers? (Democratizing AI Compute, Part 6)
March 12, 2025 / Chris Lattner

Industry
What about OpenCL and CUDA C++ alternatives? (Democratizing AI Compute, Part 5)
March 5, 2025 / Chris Lattner

Community
Modverse #46: MAX 25.1, MAX Builds, and Democratizing AI Compute
Welcome to Modverse #46, covering blogs, videos, tutorials, community projects, MAX, and Mojo!
February 27, 2025 / Caroline Frasca

Industry
CUDA is the incumbent, but is it any good? (Democratizing AI Compute, Part 4)
Answering the question of whether CUDA is "good" is much trickier than it sounds.
February 20, 2025 / Chris Lattner

Product
MAX 25.1: Introducing MAX Builds
February 18, 2025 / Modular Team