
Announcing stack-pr: an open source tool for managing stacked PRs on GitHub
We are pleased to announce the release of stack-pr, a new open source tool aimed at simplifying the management of stacked pull requests (PRs) on GitHub. The tool is still in the early days of development, but we are excited to share it with the community and welcome your contributions.

Debugging in Mojo🔥
Developer tooling is a big priority for Mojo and MAX: we want to vastly improve the debugging experience compared to the traditional Python, C++, and CUDA stack. Machine learning often requires inspecting the state of a program after a long-running process, which demands more control than "print debugging" gives you. Over time this tooling will extend to GPUs, allowing you to step from CPU code into GPU calls with the same developer experience.

A brief guide to the Mojo n-body example
Since August 2023, the Mojo repository has included a small benchmark example titled nbody.mojo. This code is based on an example from The Computer Language Benchmarks Game, a site that benchmarks implementations of different algorithms in popular programming languages.
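To give a flavor of what the benchmark computes, here is a minimal Python sketch of one step of an n-body simulation: pairwise gravitational interactions update each body's velocity, and then positions are advanced. This is an illustration of the algorithm only, not the nbody.mojo code itself; the body data and time step are placeholders.

```python
# Minimal n-body step in Python, for illustration only (not the Mojo benchmark code).
# Each body is (position, velocity, mass); dt is the time step; G is taken as 1.

def advance(bodies, dt):
    # Update velocities from pairwise gravitational attraction.
    for i in range(len(bodies)):
        pi, vi, mi = bodies[i]
        for j in range(i + 1, len(bodies)):
            pj, vj, mj = bodies[j]
            dx = [a - b for a, b in zip(pi, pj)]
            dist2 = sum(d * d for d in dx)
            mag = dt / (dist2 ** 1.5)
            for k in range(3):
                vi[k] -= dx[k] * mj * mag
                vj[k] += dx[k] * mi * mag
    # Advance positions using the updated velocities.
    for p, v, _ in bodies:
        for k in range(3):
            p[k] += dt * v[k]

# Example: two unit-mass bodies advanced by a small time step (placeholder values).
bodies = [([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], 1.0),
          ([1.0, 0.0, 0.0], [0.0, 0.5, 0.0], 1.0)]
advance(bodies, 0.01)
```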

What's new in MAX 24.4? MAX on macOS, fast local Llama3, native quantization and GGUF support
In our recent MAX 24.4 release, we announced the availability of MAX on macOS and MAX Pipelines with native support for local Generative AI models such as Llama3. Together, these innovations enable developers to use a single toolchain to build Generative AI pipelines locally and seamlessly deploy them to the cloud, all with industry-leading performance.
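For readers new to the term, quantization stores model weights in a lower-precision format (for example 8-bit integers) and converts them back to floating point on the fly, trading a small amount of accuracy for a much smaller memory footprint. The sketch below illustrates the general idea with simple symmetric int8 quantization in Python; it is a conceptual example only, not MAX's implementation or the GGUF format, and the weight values are placeholders.

```python
import numpy as np

# Conceptual symmetric int8 quantization of a weight tensor.
# Not how MAX or GGUF store weights internally; for illustration only.

def quantize_int8(weights: np.ndarray):
    # Map the largest absolute value to 127; guard against all-zero weights.
    scale = max(np.abs(weights).max(), 1e-8) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # placeholder weights
q, scale = quantize_int8(w)
w_approx = dequantize_int8(q, scale)
print("max absolute error:", np.abs(w - w_approx).max())
```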

What’s new in Mojo 24.4? Improved collections, new traits, os module features and core language enhancements
Mojo 24.4 is now available for download, and this release includes several core language and standard library enhancements. In this blog post, we’ll dive deep into many of these features using code examples. One of the biggest highlights of this release is that we received 214 pull requests from 18 community contributors for new product features, bug fixes, documentation enhancements, and code refactoring. These contributions resulted in 30 net new features in the standard library, accounting for 11% of all improvements in this release. We’re incredibly proud of the momentum we’re seeing with community contributions, and it goes without saying – you are the real star of this release. On behalf of the entire Mojo team, we’d like to thank you for all your contributions to making Mojo awesome!

Deep dive into ownership in Mojo
This blog post is the second part of our series on ownership in Mojo. Please make sure to check out the first part, What Ownership is Really About: A Mental Model Approach, as we will build on concepts developed there. This post also serves as accompanying material for the deep dive on ownership by our CEO, Chris Lattner. Be sure to watch the video as well; it covers how ownership is implemented in Mojo's compiler, providing further insights and technical details.

What ownership is really about: a mental model approach
Ownership is a well-known concept in modern programming languages such as Mojo: it aims to provide a safe programming model for memory management without sacrificing performance. Ownership lets programmers build safe abstractions without manually managing memory, making development more efficient and less error-prone.

Fast⚡k-means clustering in Mojo🔥: a guide to porting Python to Mojo🔥 for accelerated k-means clustering
There are several clustering algorithms, but k-means — the algorithm we're going to implement from scratch in Python and Mojo🔥 in this blog post — is one of the most popular due to its simplicity and ease of implementation.
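As a preview of the algorithm itself, here is a minimal from-scratch k-means sketch in Python using NumPy. The post's Python and Mojo implementations differ in their details, so treat this as an illustration of the assign-then-update loop rather than the post's exact code; the sample data and parameter values are placeholders.

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, n_iters: int = 100, seed: int = 0):
    """Minimal k-means sketch: illustrative only, not the implementation from the post."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Example usage on random 2-D data (placeholder values).
X = np.random.default_rng(1).normal(size=(200, 2))
centroids, labels = kmeans(X, k=3)
```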