
What’s new in Mojo SDK v0.5?
Mojo SDK v0.5 is now available for download and includes exciting new features. In this blog post, I’ll walk through what these features are and show how to use them with code examples. ICYMI, in last week’s Modular community livestream, we dove deep into all things Mojo SDK v0.5, demoed the examples shared in this blog post, and answered your questions live!
Using Mojo🔥 with Python🐍
Mojo allows you to access the entire Python ecosystem, but environments can vary depending on how Python was installed. It's worth taking some time to understand exactly how modules and packages work in Python, as there are a few complications to be aware of. If you've had trouble calling into Python code before, this will help you get started.
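As a quick taste of the interop the post walks through, here is a minimal sketch of calling into Python from Mojo. It assumes NumPy is installed in the Python environment Mojo is configured to use:

```mojo
from python import Python

def main():
    # Import a CPython module into Mojo. `Python.import_module` raises
    # if the module isn't found in the active Python environment, so
    # this sketch assumes NumPy is installed there.
    var np = Python.import_module("numpy")

    # Python objects can then be used from Mojo with familiar syntax.
    var arr = np.arange(15).reshape(3, 5)
    print(arr)
```

If this raises a module-not-found error, the Python environment Mojo found at startup is usually the culprit, which is exactly the kind of complication the post digs into.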

How to set up a Mojo🔥 development environment with Docker containers
How do you guarantee that your software is portable, runs reliably, and scales easily in production environments? The short answer is: Use containers. Container technologies like Docker and Kubernetes are popular tools for building and deploying software applications, but until recently they were considered exotic infrastructure for IT/Ops experts.

An easy introduction to Mojo🔥 for Python programmers
Learning a new programming language is hard. You have to learn new syntax, keywords, and best practices, all of which can be frustrating when you’re just starting. In this blog post, I want to share a gentle introduction to Mojo from a Python programmer’s perspective.
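To ground that, here is a minimal sketch of the kind of first steps the post covers: Mojo keeps Python-like syntax, while `fn` functions add declared types that the compiler checks and optimizes.

```mojo
# Mojo's `fn` functions require typed arguments and return values,
# which the compiler uses for static checking and optimization.
fn add(x: Int, y: Int) -> Int:
    return x + y

fn main():
    # Variable declarations can carry explicit types.
    var total: Int = add(2, 3)
    print(total)  # prints 5
```

Python programmers can also keep writing `def` functions; the stricter, typed `fn` form is opt-in where performance and safety matter.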

Modular natively supports dynamic shapes for AI workloads
Today’s AI infrastructure is difficult to evaluate, so many teams converge on simple, quantifiable metrics like QPS, latency, and throughput. This is one reason why today’s AI industry is rife with bespoke tools that deliver high performance on benchmarks but come with significant usability challenges in real-world AI deployment scenarios.
AI’s compute fragmentation: what matrix multiplication teaches us
AI is powered by a virtuous circle of data, algorithms (“models”), and compute. Growth in one drives needs in the others and can profoundly affect the developer experience in areas like usability and performance. Today, we have more data and more AI model research than ever before, but compute isn’t scaling at the same speed due to … well, physics.

If AI serving tech can’t solve today’s problems, how do we scale into the future?
The technological progress that has been made in AI over the last ten years is breathtaking — from AlexNet in 2012 to the recent release of ChatGPT, which has taken large foundational models and conversational AI to another level.

Part 2: Increasing development velocity of giant AI models
The first four requirements address one fundamental problem with how we've been using MLIR: weights are constant data, but shouldn't be managed like other MLIR attributes. Until now, we've been trying to fit a square peg into a round hole, creating a lot of wasted space that's costing us development velocity (and, therefore, money for users of the tools).
Easy ways to get started
Get started guide
With just a few commands, you can install MAX as a conda package and deploy a GenAI model on a local endpoint.
400+ open source models
Browse Examples
Follow step-by-step recipes to build agents, chatbots, and more with MAX.