The next generation AI developer platform

The fastest unified AI inference engine in the world.
A new programming language for all AI developers.
Intel: more than 2x faster on float32
AMD: more than 3x faster on float32
Graviton: more than 4x faster on float32

Discover how our revolutionary infrastructure makes AI more usable, scalable, performant, and cost-effective.

The Modular Engine unifies AI frameworks and hardware and delivers unparalleled performance and cost savings.

Mojo combines the usability of Python with the performance of C, unlocking unparalleled programmability of hardware.

Our unified, extensible platform superpowers your AI

Modular is an integrated, composable suite of tools that simplifies your AI infrastructure so your team can develop, deploy, and innovate faster.

Clouds
AI Frameworks
Engine
Devices

The world’s fastest unified inference engine

Modular’s inference engine unifies AI industry frameworks and hardware, enabling you to deploy to any cloud or on-prem environment with minimal code changes – unlocking unmatched usability, performance, and portability.

Run your models anywhere, reduce costs

Seamlessly move your workloads to the best hardware for the job without rewriting or recompiling your models. Avoid lock-in and take advantage of cloud price efficiencies and performance improvements without migration costs.

Mojo 🔥 — a new programming language for all AI developers

Mojo is a programming language that combines the usability of Python with the performance of C, unlocking unparalleled programmability of AI hardware and extensibility of AI models.

softmax.🔥
def softmax(lst):
  norm = np.exp(lst - np.max(lst))
  return norm / norm.sum()

struct NDArray:
  def max(self) -> NDArray:
    return self.pmap(SIMD.max)

struct SIMD[type: DType, width: Int]:
  def max(self, rhs: Self) -> Self:
    return (self >= rhs).select(self, rhs)
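The Python half of the snippet above runs unchanged with NumPy; subtracting the max before exponentiating is what keeps `exp()` from overflowing on large inputs. A quick self-contained check:

```python
import numpy as np

def softmax(lst):
    # Subtract the max before exponentiating so exp() never overflows.
    norm = np.exp(lst - np.max(lst))
    return norm / norm.sum()

# Even with large inputs, the shifted values are <= 0, so exp() stays finite
# and the result is a valid probability distribution.
probs = softmax(np.array([1000.0, 1001.0, 1002.0]))
```

Without the `np.max` subtraction, `np.exp(1000.0)` alone would overflow to `inf`.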

Deploy the largest models in the world on our stack

The Modular Compute Platform dynamically partitions models with billions of parameters and distributes their execution across multiple machines, enabling unparalleled efficiency, scale, and reliability for the largest workloads.

LLaMA
Gopher
T5
Jurassic
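As a rough illustration of the partitioning idea (not Modular's actual algorithm), distributing a model across machines can be sketched as greedily assigning layers to the least-loaded worker by parameter count; the layer sizes and worker count below are made up:

```python
# Hypothetical sketch: greedily assign layers to workers, balancing
# parameter counts. Illustrative only, not Modular's real partitioner.
def partition_layers(layer_sizes, num_workers):
    loads = [0] * num_workers          # parameters assigned to each worker
    assignment = []                    # worker index chosen for each layer
    for size in layer_sizes:
        worker = loads.index(min(loads))  # pick the least-loaded worker
        loads[worker] += size
        assignment.append(worker)
    return assignment, loads

# Example: four blocks of varying size spread across two machines.
assignment, loads = partition_layers([7, 3, 4, 6], num_workers=2)
```

A real system also has to account for activation traffic between machines, not just parameter counts, which is where dynamic partitioning earns its keep.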

Get help from the people who know Modular best

As a community member, you can chat with the Modular team directly on Discord, and as an enterprise customer, you get direct support from industry experts to keep you running and help you scale to your next challenge.


Deploy on the fastest unified infrastructure on the planet

Modular unlocks state-of-the-art latency, efficiency, and throughput, helping you productionize larger models and realize massive cost savings on your cloud bill.

Throughput (qps, higher is better):
TensorFlow: 17 qps
PyTorch: 28 qps
Modular Engine: 125 qps
* Model: DLRM RMC1 · Instance: AWS c6g.4xlarge (Graviton2) · Batch size: 1
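To make the cost implication concrete, here is a back-of-the-envelope calculation using the benchmark numbers above; the 1,000 qps aggregate load is a hypothetical figure, and real capacity planning would also factor in latency targets and headroom:

```python
import math

# Per-instance throughput from the DLRM benchmark above (queries/sec).
throughput = {"TensorFlow": 17, "PyTorch": 28, "Modular Engine": 125}

target_qps = 1000  # hypothetical aggregate load

# Instances needed to sustain the target load, rounded up.
instances = {name: math.ceil(target_qps / qps)
             for name, qps in throughput.items()}
```

At these rates the same workload needs 59 instances on TensorFlow, 36 on PyTorch, and 8 on the Modular Engine, which is where the cloud-bill savings come from.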

Modular's cloud compute platform

01.

Notebooks for training on the largest compute clusters using Python & Mojo 🔥 for highly optimized workloads.

02.

Utilize our managed environment, or bring your own cloud (BYOC), for seamless workload management.

03.

Detailed machine performance and metrics data to provide end-to-end insight into your AI workloads.

04.

Leverage our easy-to-use web UI or CLI tooling to seamlessly manage your training and deployment workflows.

05.

Enterprise-grade security and encryption keep your data secure at rest and in transit across your data stores.

Why Modular?

Built by the world’s AI experts

Our team has built most of the world’s existing AI infrastructure, including TensorFlow, PyTorch, TPUs, and MLIR, and launched software like Swift and LLVM. Now we’re focused on rebuilding AI infrastructure for the world.

Reinvented from the ground up

To unlock the next wave of AI innovation, we need a “first principles” approach to the lowest layers of the AI stack. We can’t pile on more and more layers of complexity on top of already over-complicated existing solutions.

Built with generality in mind

Natively multi-model, multi-framework, multi-hardware, and multi-cloud — our infrastructure scales from the largest clusters down to the smallest edge devices and in-between.

Infrastructure that just works

We build technology that meets you where you are. You shouldn’t have to rewrite your models or application code, grapple with confusing converters, or be a hardware expert to take advantage of state-of-the-art technology.

Built for you

Move beyond Big Tech’s trickle-down infrastructure. Get direct access to industry experts who will help solve any issue you have with our infrastructure and make sure we’re meeting your SLAs and SLOs.

Ready to get started?

Sign up to gain early access to Modular’s infrastructure.

Read the docs