Mojo 🔥 — the programming language for all AI developers

Mojo combines the usability of Python with the performance of C, unlocking unparalleled programmability of AI hardware and extensibility of AI models.

Available on Mac 🍎, Linux, and Windows (WSL)

Join the Modverse

175K+ Developers

50K+ Enterprises

22K+ Members

17K+ stars

9K+ subscribers

300+ open-source projects created by the community

Llama2.mojo

Inference for Llama2 models in a single file of Mojo 🔥

Voodoo

A machine learning framework in pure Mojo 🔥

Lightbug HTTP

Simple and fast HTTP framework for Mojo 🔥

Mojo Out-performs

Write everything in one language

Write Python or scale all the way down to the metal. Program the multitude of low-level AI hardware. No C++ or CUDA required.

Take a tour of Mojo
def sort(v: ArraySlice[Int]):
  for i in range(len(v)):
    for j in range(len(v) - i - 1):
      if v[j] > v[j + 1]:
        swap(v[j], v[j + 1])
struct MyPair:
  var first: Int
  var second: F32
  
  def __init__(self, first: Int, second: F32):
    self.first = first
    self.second = second
def reorder_and_process(owned x: HugeArray):
  sort(x)	# Update in place
  
  give_away(x^)	# Transfer ownership
  
  print(x[0])	# Error: ‘x’ moved away!
def exp[dt: DType, elts: Int]
    (x: SIMD[dt, elts]) -> SIMD[dt, elts]:
  x = clamp(x, -88.3762626647, 88.37626266)
  k = floor(x * INV_LN2 + 0.5)
  r = k * NEG_LN2 + x
  return ldexp(_exp_taylor(r), k)
def exp_buffer[dt: DType](data: ArraySlice[dt]):

  # Search for the best vector length
  alias vector_len = autotune(1, 4, 8, 16, 32)
  
  # Use it as the vectorization length
  vectorize[exp[dt, vector_len]](data)
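The `exp` kernel above uses classic range reduction: clamp the input, split off an integer power of two (`k = floor(x / ln 2 + 0.5)`), evaluate a polynomial on the small remainder `r`, then scale by `2^k` with `ldexp`. A minimal Python sketch of the same algorithm follows; the Taylor helper, its term count, and the constants are illustrative assumptions, not Mojo's actual implementation:

```python
import math

# Constants matching the snippet above
INV_LN2 = 1.0 / math.log(2.0)
NEG_LN2 = -math.log(2.0)

def _exp_taylor(r):
    # Taylor series of e^r around 0; r is small (|r| <= ln(2)/2),
    # so a handful of terms gives high accuracy.
    acc = 1.0
    term = 1.0
    for n in range(1, 10):
        term *= r / n
        acc += term
    return acc

def exp_approx(x):
    # Clamp to the range where float32 exp does not overflow/underflow
    x = max(-88.3762626647, min(88.37626266, x))
    # Range reduction: x = k*ln(2) + r, with |r| <= ln(2)/2
    k = math.floor(x * INV_LN2 + 0.5)
    r = k * NEG_LN2 + x
    # Reconstruct: e^x = e^r * 2^k
    return math.ldexp(_exp_taylor(r), int(k))
```

With only a short polynomial on the reduced argument, this stays within about one part in 10^8 of `math.exp` across the clamped range.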

The full power of MLIR

Parallel heterogeneous runtime

Fast compile times


Unlock Python performance

Utilize the full power of the hardware, including multiple cores, vector units, and exotic accelerator units, with the world's most advanced compiler and heterogeneous runtime. Achieve performance on par with C++ and CUDA without the complexity.

Mojo leverages MLIR, which enables Mojo developers to take advantage of vectors, threads, and AI hardware units.

[Benchmark figure: execution time and speedup, from single-threaded execution to parallel processing across multiple cores: 970 s (1x), 171 s (6x), 0.11 s (9,000x), 0.0142 s (68,000x)]

Access the entire Python ecosystem

Experience true interoperability with the Python ecosystem. Seamlessly intermix arbitrary libraries like NumPy and Matplotlib, as well as your custom code, with Mojo.

Read the programming manual
MAKE_PLOT.🔥
def make_plot(m: Matrix):
  # Import a CPython package directly from Mojo
  plt = Python.import_module("matplotlib.pyplot")
  # xn, yn: pixel dimensions of the image being rendered
  fig = plt.figure(1, [10, 10 * yn // xn], 64)
  fig.add_axes([0.0, 0.0, 1.0, 1.0], False, 1)
  plt.imshow(m)
  plt.show()

make_plot(compute_mandelbrot())
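On the CPython side, Mojo's `Python.import_module` corresponds to Python's own dynamic import machinery: any installed package can be loaded by name at run time and used like a normal import. A minimal standard-library sketch (the choice of the `math` module here is purely illustrative):

```python
import importlib

# Dynamically import a module by name at run time -- the Python-side
# counterpart of Mojo's Python.import_module("...").
math_mod = importlib.import_module("math")

# Once loaded, attributes work exactly as with a regular import.
print(math_mod.sqrt(16.0))
```

The same pattern applies to third-party packages such as `matplotlib.pyplot`, which is what the Mojo snippet above imports.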
Mojo 🔥

Upgrade your models and the Modular stack

Easily extend your models with pre and post-processing operations, or replace operations with custom ones. Take advantage of kernel fusion, graph rewrites, shape functions, and more.

Mojo can upgrade the existing operations in your model.

Mojo 🔥 works with all the rest of the suite

The Modular MAX Engine can be used in combination with our integrations via MAX Serving, and it is powered by Mojo 🔥, the fastest and most portable programming language for your AI applications.

Our engine integrates with the rest of our suite of MAX products, while being usable on its own.

Download Mojo 🔥 and try it right now

Mojo is still a work in progress, but it's available to try today via our Mojo SDK. Run through tutorials and write your own Mojo code.

Download now
Mojo 🔥

01.

Get the Mojo 🔥 SDK today and get started with our example code on GitHub.

02.

Our docs will help you quickly discover why Mojo is such a powerful extension to Python, and the future of AI programming.

03.

Come and chat with us on our Discord, and help shape the future of the language as we continue to develop it.

Ready to play with Mojo?

Sign up & download the Mojo SDK right now.

Read the Mojo docs