Today Modular launched the Mojo 🔥 language for local download, so we couldn't help ourselves and gave it a spin with Parca and Polar Signals Cloud - spoiler, it's awesome!
Mojo promises to bring the performance of C to Python-like workloads, specifically aimed at the AI space. Whenever performance is involved, we have to take a look.
How does it work?
Mojo 🔥 uses LLVM under the hood - unsurprising, since Chris Lattner, founder of Modular, is also a co-founder of LLVM. LLVM is the same compiler infrastructure that, among others, clang is built on. Clang has long been supported by Parca and Polar Signals Cloud, so we were confident it would work for Mojo too, but we wouldn't believe it until we saw it.
The FAQ was also promising:
Is Mojo interpreted or compiled?
Mojo supports both just-in-time (JIT) and ahead-of-time (AOT) compilation. In either a REPL environment or Jupyter notebook, Mojo is JIT’d. However, for AI deployment, it’s important that Mojo also supports AOT compilation instead of having to JIT compile everything. You can compile your Mojo programs using the mojo CLI.
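In practice, the same source file can be used either way with the mojo CLI. As a rough sketch (using a hypothetical hello.mojo; exact behavior may differ between Mojo releases), running a file directly JIT-compiles and executes it, while `mojo build` produces an ahead-of-time compiled binary:
$ mojo hello.mojo    # JIT-compile and run in one step
$ mojo build hello.mojo && ./hello    # AOT-compile to a native binary, then run it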
It turns out it's very easy: all you need to do is enable debug info in the binary using the `--debug-level` flag (we'll use the mandelbrot example, because who doesn't love a good mandelbrot fractal?):
$ mojo build --debug-level line-tables mandelbrot.mojo
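To double-check that the line tables actually made it into the binary, you can dump the DWARF line info with standard ELF tooling (assuming llvm-dwarfdump or readelf is installed; either one works):
$ llvm-dwarfdump --debug-line ./mandelbrot | head
$ readelf --debug-dump=line ./mandelbrot | head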
And run it:
$ ./mandelbrot
With the Parca Agent profiler running and sending data to Polar Signals Cloud:
$ sudo parca-agent --remote-store-address=grpc.polarsignals.com:443 --remote-store-bearer-token=<your token>
And that's it, everything happens automatically.
This all works because of the eBPF-based DWARF stack unwinder, combined with debuginfod support (which is why things like libc are correctly symbolized even when debug info for them isn't present in the production environment).
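As an aside, you can see what a public debuginfod server has on file for a given binary - for example your system's libc - with the elfutils client (assuming debuginfod-find is installed, DEBUGINFOD_URLS points at a server such as debuginfod.elfutils.org, and adjusting the libc path for your distribution):
$ export DEBUGINFOD_URLS="https://debuginfod.elfutils.org/"
$ debuginfod-find debuginfo /lib/x86_64-linux-gnu/libc.so.6    # prints the local path of the fetched debug info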
Try it yourself!
Check out Modular's announcement, download Mojo, and start improving your Mojo AI workloads with Parca and Polar Signals Cloud!