All Articles

Product

What’s the difference between the AI Engine and Mojo?

On May 2nd, we announced our next-generation AI developer platform with two exciting breakthrough technologies: the Mojo programming language and the Modular AI Engine. In just over two months, more than 110k developers have signed up for the Mojo Playground to learn Mojo and experience its performance firsthand, over 30k developers have signed up for the AI Engine waitlist, and our Modular community on Discord has grown to 17k developers! We’re incredibly excited to see developers sharing their experience with Mojo, providing product feedback, and learning from each other.

July 11, 2023 / Eric Johnson, Shashank Prasanna

Engineering

Modular natively supports dynamic shapes for AI workloads

Today’s AI infrastructure is difficult to evaluate, so many teams converge on simple, quantifiable metrics like QPS, latency, and throughput. This is one reason why today’s AI industry is rife with bespoke tools that provide high performance on benchmarks but pose significant usability challenges in real-world AI deployment scenarios.

June 22, 2023 / Eric Johnson, Kate Caldwell

Industry

Do LLMs eliminate the need for programming languages?

We’re very excited about the positive reception of Mojo since its launch, as well as the community of people building around it. Given new Large Language Model (LLM)-powered developer tools like Copilot and Ghostwriter, many developers are wondering about the future of programming: do programming languages still matter when AI writes the code?

June 8, 2023 / Chris Lattner

Product

Accelerating AI model serving with the Modular AI Engine

A few weeks ago, we announced the world’s fastest unified AI inference engine. The Modular AI Engine provides significant usability, portability, and performance gains for the leading AI frameworks — PyTorch and TensorFlow — and delivers world-leading execution performance for all cloud-available CPU architectures.

June 1, 2023 / Alexandr Nikitin, Eric Johnson

Company

Our launch & what's next

Last week, we launched Modular to the world after more than 16 months in stealth. We started Modular with a deep conviction: after 6+ years of building and scaling AI infrastructure to billions of users, and 20+ years of building foundational compute infrastructure, it was clear the world needed a better path forward. Everyone wants less complexity, better access to compute and hardware, and the ability to develop and deploy AI faster.

May 11, 2023 / Tim Davis

Product

A unified, extensible platform to superpower your AI

We’re excited to finally share what we’ve been building at Modular. This announcement begins Modular’s journey to radically change the nature of AI programmability, usability, scalability, and compute.

May 2, 2023 / Chris Lattner, Tim Davis, Eric Johnson

Engineering

The world's fastest unified matrix multiplication

In this post, we describe Modular’s approach to matrix multiplication and its game-changing benefits, including a new standard in state-of-the-art (SOTA) performance on CPU compared to existing solutions.

April 20, 2023 / Abdul Dakkak, Chad Jarvis, Eric Johnson, Hengjie Wang, Ian Tramble

Engineering

AI’s compute fragmentation: what matrix multiplication teaches us

AI is powered by a virtuous circle of data, algorithms (“models”), and compute. Growth in one pushes needs in the others and can grossly affect the developer experience on aspects like usability and performance. Today, we have more data and more AI model research than ever before, but compute isn’t scaling at the same speed due to … well, physics.

March 23, 2023 / Eric Johnson, Abdul Dakkak, Chad Jarvis

Company

We want to hear from you

At Modular, we are rebuilding AI infrastructure for the world. Our goal is to move past AI tools that are themselves research projects and into a future where AI development and deployment are orders of magnitude more efficient for everyone. You should be able to do this without trading off performance or having to rewrite your entire code base.

December 15, 2022 / Eric Johnson

Engineering

If AI serving tech can’t solve today’s problems, how do we scale into the future?

The technological progress that has been made in AI over the last ten years is breathtaking — from AlexNet in 2012 to the recent release of ChatGPT, which has taken large foundational models and conversational AI to another level.

December 8, 2022 / Eric Johnson, Tim Davis
