Blog

Democratizing AI Compute Series
Go behind the scenes of the AI industry with Chris Lattner
Latest
Accelerating AI model serving with the Modular AI Engine
A few weeks ago, we announced the world’s fastest unified AI inference engine. The Modular AI Engine provides significant usability, portability, and performance gains for the leading AI frameworks — PyTorch and TensorFlow — and delivers world-leading execution performance for all cloud-available CPU architectures.

Our launch & what's next
Last week, we launched Modular to the world after more than 16 months in stealth. We started Modular with a deep conviction — after 6+ years of building and scaling AI infrastructure to billions of users and 20+ years of building foundational compute infrastructure — it was clear the world needed a better path forward. Everyone wants less complexity, better access to compute and hardware, and the ability to develop and deploy AI faster.
AI’s compute fragmentation: what matrix multiplication teaches us
AI is powered by a virtuous circle of data, algorithms (“models”), and compute. Growth in one drives needs in the others and can profoundly affect the developer experience in areas like usability and performance. Today, we have more data and more AI model research than ever before, but compute isn’t scaling at the same speed due to … well, physics.
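The post's namesake operation, matrix multiplication, is the kernel at the heart of this compute demand. As a point of reference, the textbook version can be sketched in a few lines of Python (a minimal O(n³) triple loop, nothing like the hand-tuned kernels the post discusses):

```python
def matmul(a, b):
    """Multiply an m x k matrix `a` by a k x n matrix `b` (lists of lists)."""
    m, k, n = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must match"
    c = [[0.0] * n for _ in range(m)]
    for i in range(m):          # each output row
        for j in range(n):      # each output column
            for p in range(k):  # accumulate along the shared dimension
                c[i][j] += a[i][p] * b[p][j]
    return c

# A 2x2 example:
# [[1, 2],   [[5, 6],    [[19, 22],
#  [3, 4]] @  [7, 8]] =   [43, 50]]
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19.0, 22.0], [43.0, 50.0]]
```

Real deployments replace this loop nest with cache-tiled, vectorized, hardware-specific kernels, which is exactly the fragmentation the post explores.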

We want to hear from you
At Modular, we are rebuilding AI infrastructure for the world. Our goal is to move past AI tools that are themselves research projects and into a future where AI development and deployment are orders of magnitude more efficient for everyone. You should be able to do this without trading off performance or having to rewrite your entire code base.

If AI serving tech can’t solve today’s problems, how do we scale into the future?
The technological progress that has been made in AI over the last ten years is breathtaking — from AlexNet in 2012 to the recent release of ChatGPT, which has taken large foundational models and conversational AI to another level.

Part 2: Increasing development velocity of giant AI models
The first four requirements address one fundamental problem with how we've been using MLIR: weights are constant data, but they shouldn't be managed like other MLIR attributes. Until now, we've been trying to fit a square peg into a round hole, creating a lot of wasted space that's costing us development velocity (and, therefore, money for users of the tools).

Modular is rebuilding AI in the face of a new economy
Here in November 2022, we see a continuing onslaught of bad news: significant layoffs of incredible people as companies tighten their belts; companies that raised too much money, too fast, without core fundamentals are dying; and a changing climate where over-tightening rather than under-tightening is seemingly the new normal.
Start building with Modular
Quick start resources
Get started guide
With just a few commands, you can install MAX as a conda package and deploy a GenAI model on a local endpoint.
Browse open source models
500+ supported models, most of which have been optimized for lightning-fast speed on the Modular platform.
Find examples
Follow step-by-step recipes to build agents, chatbots, and more with MAX.