Mojo🔥 24.3 is now available for download, and this is a very special release. This is the first major release since the Mojo🔥 standard library was open sourced, and it is packed with the wholesome goodness of community contributions! The enthusiasm from the Mojo community to enhance the standard library has been truly remarkable. On behalf of the entire Mojo team, we’d like to thank you for all your feedback, discussions, and contributions to Mojo, helping shape it into a stronger and more inclusive platform for all.
Special thanks to our contributors. Thank you for your PRs: @LJ-9801 @mikowals @whym1here @StandinKP @gabrieldemarmiesse @arvindavoudi @helehex @jayzhan211 @mzaks @artemiogr97 @bgreni @zhoujingya @leandrolcampos @soraros @lsh
In addition to standard library enhancements, this release also includes several new core language features and enhancements to built-in types and collections that make them more Pythonic. Through the rest of the blog post, I’ll share many of the new features with code examples that you can copy/paste and follow along. You can also access all the code samples in this blog post in a Jupyter Notebook on GitHub. As always, the official changelog has an exhaustive list of new features, what’s changed, what’s removed, and what’s fixed. And before we continue, don’t forget to upgrade your Mojo🔥. Let’s dive into the new features.
Enhancements to List, Dict, and Tuple
In Mojo 24.3, collections (List, Dict, Set, Tuple) are more Pythonic than ever and easier to use if you’re coming from Python. Many of these enhancements have come directly from the community:
- List has several new methods that mirror the Python API, thanks to community contributions from @LJ-9801 @mikowals @whym1here @StandinKP.
- Dict now supports updating in place, thanks to contributions from @gabrieldemarmiesse.
- Tuple now works with memory-only element types like String and allows you to directly index into it with a parameter expression.
One of the best ways to learn new features is to see them in action. Let’s take a look at an example that makes use of these types. In the example below we implement a simple gradient descent algorithm, which is an iterative algorithm used to find the minima of a function. Gradient descent is also used in most machine learning algorithms to minimize the training loss function by updating the values of function parameters (i.e. weights) iteratively until some convergence criterion is met.
For this example, we choose the famous Rosenbrock function to optimize, i.e. find its minimum. Rosenbrock is an important function in optimization because it presents a challenging landscape with a global minimum, at (1,1) for the 2-D case, that is difficult to find. Let’s take a look at its implementation and how we make use of Mojo’s shiny new List, Dict, and Tuple enhancements. First, we define the Rosenbrock function and its gradient:
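Here is a minimal sketch of the two functions in Mojo 24.3 syntax, using the standard Rosenbrock constants a = 1 and b = 100 (the exact constants are an assumption on my part):

```mojo
# Rosenbrock: f(x, y) = (1 - x)^2 + 100 * (y - x^2)^2
# Global minimum at (1, 1), where f = 0.
fn rosenbrock(x: Float64, y: Float64) -> Float64:
    return (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x)

# Analytic gradient, returned as a Tuple (dx, dy).
fn rosenbrock_gradient(x: Float64, y: Float64) -> (Float64, Float64):
    var dx = -2.0 * (1.0 - x) - 400.0 * x * (y - x * x)
    var dy = 200.0 * (y - x * x)
    return (dx, dy)
```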
We use a Tuple to return the gradients as (dx, dy). Notice that the return type is Tuple[Float64, Float64]; we can also write it more simply as (Float64, Float64) using parentheses, just like in Python. Now we can write the gradient descent iteration loop, and we'll use the parentheses style for Tuple. This simplifies the type annotations: compare List[Tuple[Float64, Float64, Float64]]() with List[(Float64, Float64, Float64)]() below:
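A sketch of that loop follows. The hyperparameter key names ("x", "y", "lr", "num_iters") are my assumptions, and since Dict indexing can raise, the function is marked raises:

```mojo
from collections import Dict, List

fn gradient_descent(params: Dict[String, Float64]) raises -> List[(Float64, Float64, Float64)]:
    var x = params["x"]
    var y = params["y"]
    var lr = params["lr"]

    # History of (x, y, f(x, y)) at each iteration.
    var history = List[(Float64, Float64, Float64)]()

    for _ in range(int(params["num_iters"])):
        var grad = rosenbrock_gradient(x, y)  # Tuple (dx, dy)
        var dx = grad[0]
        var dy = grad[1]
        x -= lr * dx
        y -= lr * dy
        history.append((x, y, rosenbrock(x, y)))
    return history
```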
Here we use List to store gradients and function evaluation at each iteration using history.append((x, y, rosenbrock(x, y)))
We also capture the Tuple output of rosenbrock_gradient in grad. You can index into grad to access dx = grad[0] and dy = grad[1] which we use to update x and y.
Finally, we call the gradient_descent function with a dictionary of parameters params:
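A sketch of the call site; the parameter keys and values here are illustrative choices, not the blog's exact settings:

```mojo
fn main() raises:
    var params = Dict[String, Float64]()
    params["x"] = 0.0           # initial point (0, 3)
    params["y"] = 3.0
    params["lr"] = 0.002        # learning rate (assumed value)
    params["num_iters"] = 1000

    var history = gradient_descent(params)
    # Inspect the final iterate.
    var last = history[len(history) - 1]
    print("x:", last[0], "y:", last[1], "f:", last[2])
```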
Output
Note: plot_results() is a Python function that I call from Mojo using Mojo-Python interop. Since we capture all the iterations of (x,y) in our List[Tuple] variable history, we have all the information we need to generate the plots below. We do, however, need to convert history into a NumPy array before we call our Python function, which is what we do in the for loop at the end. You can find the implementation of this function on GitHub along with the rest of the code.
In the plot above (click to zoom) you can see that we start at the initial point (0,3) and gradient descent takes us to the global minimum at (1,1).
We can use Dict's new update() method via params.update(new_params) to change our initial point to (-1.5,3) and re-run the optimization:
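For example (a sketch; the key names are assumed):

```mojo
from collections import Dict

fn main() raises:
    var params = Dict[String, Float64]()
    params["x"] = 0.0
    params["y"] = 3.0

    var new_params = Dict[String, Float64]()
    new_params["x"] = -1.5
    new_params["y"] = 3.0

    # update() merges new_params into params, overwriting existing keys.
    params.update(new_params)
    print(params["x"], params["y"])
```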
Output
With the new initial point you can see in the contour plot (click to zoom) that, due to the narrowness of the valley, the gradient descent algorithm overshoots the minimum and bounces back and forth across the valley, causing oscillations. Such oscillations are common in numerical optimization and can lead to slow or premature convergence. This tells us that we can explore different learning parameters (hyperparameters in machine learning) or other types of optimizers to converge faster.
Enhancements to Set
Sets are unordered collections of unique elements, allowing for efficient membership tests and mathematical set operations. An example of using Sets can be to identify unique genetic markers or species from a large dataset of DNA sequences. Sets can automatically handle duplicates, and offer efficient operations for mathematical set concepts like unions, intersections, and differences.
In this release, Set introduces named methods that mirror operators, thanks to contributions from @arvindavoudi.
Let’s take a look at a simple example that compares both the operator-based and the new method-based operations on sets. First, let’s define two sets with different genetic markers:
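A sketch of the two sets; the marker names here are made up for illustration:

```mojo
from collections import Set

fn main():
    # Hypothetical genetic markers found in two samples.
    var sample_a = Set[String]("BRCA1", "TP53", "EGFR", "MYC")
    var sample_b = Set[String]("TP53", "KRAS", "MYC")
    print(len(sample_a), len(sample_b))
```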
We can use either the difference() method or the - operator to subtract one set from the other:
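Both forms produce a new set of the elements unique to the left-hand side; the set contents below are illustrative:

```mojo
from collections import Set

fn main():
    var sample_a = Set[String]("BRCA1", "TP53", "EGFR", "MYC")
    var sample_b = Set[String]("TP53", "KRAS", "MYC")

    # Method form and operator form give the same result.
    var unique_method = sample_a.difference(sample_b)
    var unique_operator = sample_a - sample_b

    # Markers unique to sample_a.
    for marker in unique_method:
        print(marker[])
```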
Output
Similarly, we can perform an intersection update using the intersection_update() method or the &= operator:
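Unlike difference(), this mutates the set in place, keeping only the shared elements (set contents illustrative):

```mojo
from collections import Set

fn main():
    var sample_a = Set[String]("BRCA1", "TP53", "EGFR", "MYC")
    var sample_b = Set[String]("TP53", "KRAS", "MYC")

    # Keep only markers present in both sets (in place).
    sample_a.intersection_update(sample_b)
    # The operator form mutates in place as well:
    # sample_a &= sample_b

    for marker in sample_a:
        print(marker[])
```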
Output
Finally, we can use the new update method to update a set:
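update() performs an in-place union, adding every element of the argument to the receiver (set contents illustrative):

```mojo
from collections import Set

fn main():
    var sample_a = Set[String]("BRCA1", "TP53")
    var sample_b = Set[String]("KRAS", "MYC")

    # In-place union: add all of sample_b's elements to sample_a.
    sample_a.update(sample_b)
    # Operator form: sample_a |= sample_b

    print(len(sample_a))
```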
Output
New reversed() function for reversed iterator
This release includes a new reversed() function for reversed iterators, thanks to community contributions from @helehex @jayzhan211. In the example below, we reverse the words in a sentence using the new reversed iterator, and also using List’s reverse() method, and compare the results:
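A sketch of that comparison; the sentence and variable names are my own:

```mojo
from collections import List

fn main():
    var words = List[String]()
    words.append("Mojo")
    words.append("is")
    words.append("getting")
    words.append("more")
    words.append("Pythonic")

    # reversed() yields the elements back-to-front without mutating the list.
    var from_iterator = String("")
    for w in reversed(words):
        from_iterator += w[] + " "
    print(from_iterator)

    # List.reverse() reverses the list in place.
    words.reverse()
    var from_reverse = String("")
    for w in words:
        from_reverse += w[] + " "
    print(from_reverse)
```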
Output
The reversed() function also supports Dict.
New parametric indices in __getitem__() and __setitem__(); Dict, List, and Set conform to the new Boolable trait
Mojo 24.3 also includes new core language enhancements and introduces a new Boolable trait. In the example below we’ll explore both these features. We’ll create a struct called MyStaticArray whose size is known at compile time. For the MyStaticArray struct we can choose to define parametric indices __getitem__[idx: Int](self) and use compile-time checks on the requested index. This is in contrast to using __getitem__(self, idx: Int) which can be used when the size of the array is unknown at compile time. Let’s take a closer look at the struct:
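Here is one way such a struct might be sketched; the List-backed storage, the Float64 element type, and the zero-initialization are assumptions for illustration:

```mojo
from collections import List

struct MyStaticArray[size: Int](Boolable):
    var data: List[Float64]

    fn __init__(inout self):
        # Illustrative storage choice: a zero-filled List of length `size`.
        self.data = List[Float64](capacity=size)
        for _ in range(size):
            self.data.append(0.0)

    fn __bool__(self) -> Bool:
        # Boolable conformance: a non-empty array is truthy.
        return size > 0

    fn __getitem__[idx: Int](self) -> Float64:
        # Parametric index: the bounds check runs at compile time.
        constrained[idx < size, "index out of range"]()
        return self.data[idx]

    fn __setitem__[idx: Int](inout self, value: Float64):
        constrained[idx < size, "index out of range"]()
        self.data[idx] = value
```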
Let’s instantiate MyStaticArray; since it conforms to the Boolable trait, we can check whether the array is empty. We can use Bool(arr) or just arr in the if condition; we show both approaches below:
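A sketch of both forms, assuming a 4-element instantiation:

```mojo
fn main():
    var arr = MyStaticArray[4]()

    # Explicit conversion via the Boolable conformance:
    if Bool(arr):
        print("array is not empty")

    # Implicit use in a condition:
    if arr:
        print("array is not empty")
```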
Output
Now let’s try to get an item from the MyStaticArray whose index is larger than the size of the array.
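For a 4-element MyStaticArray, that access might look like the following; because the bounds check runs via constrained[](), this snippet is rejected at compile time rather than failing at runtime:

```mojo
var arr = MyStaticArray[4]()
print(arr[7])  # compile-time error: constraint failed
```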
Output
This fails the constrained[idx < size, …]() check in the __getitem__[]() function.
But wait, there is so much more!
Mojo 24.3 is a huge release, and I’ve barely scratched the surface in this blog post. While this post focused on community contributions, standard library enhancements, and a few core language enhancements, there is a lot more in this release that caters to low-level systems programming. For the detailed list of what’s new, changed, moved, renamed, and fixed, check out the changelog in the documentation. A few other notable features from the changelog:
- Core Language: Improvements to variadic argument support.
- Core Language: Users can capture the source location of code and the call location of functions dynamically using the __source_location() and __call_location() functions.
- Standard Library: FileHandle.seek() now has a "whence" argument similar to Python.
- Docs: New Types page.
MAX 24.3 is also available for download today and includes several enhancements, including a preview of Custom Operator Extensibility support, which allows you to write custom operators for MAX models using Mojo for intuitive and performant extensibility. Read more in the What’s new in MAX 24.3 blog post.
All the examples I used in this blog post are available in a Jupyter Notebook on GitHub; check it out!
- Download MAX and Mojo.
- Head over to the docs to read the Mojo🔥 manual and learn about APIs.
- Explore the examples on GitHub.
- Join our Discord community.
- Contribute to discussions on the Mojo GitHub.
- Read and subscribe to Modverse Newsletter.
- Read Mojo blog posts, watch developer videos and past live streams.
- Report feedback, including issues on our GitHub tracker.
Until next time! 🔥