This post is written mostly for my own edification, but I thought it might be useful to others. I wanted to test whether Numba or Julia delivers better performance. Julia's documentation helpfully provides several functions used to test the speed of common code patterns, so I decided to rewrite those micro-benchmarks with Numba to see how Julia and Python now compare; they are the two languages I feel best suited to and most comfortable with. I also compare the results to baseline Python.
Each test was run on my MacBook Pro running Julia 0.3 and Anaconda Python 2.7. The timing functions are the same ones used in the micro-benchmark implementations.
Most of the implementations are based on the micro-benchmarks provided by the Julia core team. I may have missed minor performance optimizations available in both Numba and Julia; please open a pull request or get in touch with me if you notice something I could do better. All results are an average of 5 runs.
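The benchmarks use the harness from the official micro-benchmark implementations; as a rough sketch of the "average of 5 runs" methodology (the helper and function names here are mine, not the benchmark suite's), the timing might look like:

```python
import time

def time_avg(func, *args, runs=5):
    """Call `func` `runs` times and return the average wall-clock time in seconds."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        func(*args)
        total += time.perf_counter() - start
    return total / runs

# example: time the naive recursive Fibonacci, one of the standard micro-benchmarks
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

avg_seconds = time_avg(fib, 20)
```

The real suite reports the best or average of repeated runs per benchmark; this sketch only illustrates the averaging.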
Note: while converting the micro-benchmarks, it was not simply a matter of adding @jit and having the code run correctly; converting standard Python to Numba-compatible Python took real work, and this represents a significant barrier to usability. For example, I hit
NotImplementedError: offset=83 opcode=0x5e opname=LIST_APPEND
when compiling the mandelbrot implementation.
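The LIST_APPEND opcode error came from Numba (at that time) not supporting appending to Python lists in compiled code; the usual workaround was to preallocate a NumPy array and write into it. A minimal sketch of that pattern (function names and grid bounds are my own, not the benchmark's; the try/except lets the sketch run even where Numba isn't installed by falling back to a no-op decorator):

```python
import numpy as np

try:
    from numba import jit              # real Numba, when available
except ImportError:
    def jit(func=None, **kwargs):      # no-op fallback so the sketch still runs
        if func is None:
            return lambda f: f
        return func

@jit(nopython=True)
def mandel_escape(re, im, max_iter):
    # escape-time count for a single point c = re + im*1j
    c = complex(re, im)
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

def mandel_grid(n, max_iter=80):
    # preallocate instead of appending to a list; the append pattern
    # is what triggered the LIST_APPEND NotImplementedError above
    out = np.empty((n, n), dtype=np.int64)
    for i in range(n):
        for j in range(n):
            out[i, j] = mandel_escape(-2.0 + 3.0 * j / n,
                                      -1.5 + 3.0 * i / n, max_iter)
    return out
```

The point is not this particular Mandelbrot variant, but that the idiomatic Python version (building a list with a comprehension or `.append`) had to be restructured before Numba would compile it.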
This should make clear that Julia's key advantage is what I'm calling Fast, by Default. As several other tests have shown, it is possible to make Python faster than Julia with even semi-trivial amounts of work. However, Julia works out of the box (note the NotImplementedError Numba raised above) and doesn't require choosing a speed option (Cython vs. PyPy vs. Numba vs. oh-god-panic). So even though both Numba and Julia are essentially LLVM frontends with relatively simple syntax, Julia ends up being far more useful.