Optimizing Floating Point Math in Julia

07/28/2022, 7:30 PM – 8:00 PM UTC


Why did exp10 get 2x faster in Julia 1.6? One reason is that, unlike most other languages, Julia doesn't use the operating system's implementations of math functions (libm). This talk will be an overview of improvements in Julia's math library since version 1.5, and of areas for future improvement. Topics covered will include computing optimal polynomials, table-based implementations, and bit-hacking for peak performance.


In this talk we will cover the fundamental numerical techniques for implementing accurate and fast floating point functions. We will start with a brief review of how floating point math works. Then we will use the changes made to exp and friends (exp2, exp10, and expm1) over the past two years as a demonstration of the main techniques for computing these functions.
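As a warm-up for the review of floating point math, here is a small illustrative sketch (not from the talk itself) showing how the IEEE 754 fields of a Float64 can be inspected by reinterpreting the value as a 64-bit integer:

```julia
# Inspect the IEEE 754 binary64 layout: 1 sign bit, 11 exponent bits
# (biased by 1023), and 52 mantissa bits with an implicit leading 1.
x = 1.5
bits = reinterpret(UInt64, x)

sign     = bits >> 63                    # 1 bit
exponent = (bits >> 52) & 0x7ff          # 11 bits, biased by 1023
mantissa = bits & 0x000f_ffff_ffff_ffff  # 52 bits (implicit leading 1)

# 1.5 == (1 + 0.5) * 2^0, so the unbiased exponent is 0 and only the
# top mantissa bit is set.
@assert sign == 0
@assert exponent - 1023 == 0
@assert mantissa == UInt64(1) << 51
```

Being able to move freely between the float and integer views of a number is what makes the bit-manipulation tricks later in the talk possible.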

Specifically we will look at:

  • Range reduction
  • Polynomial kernels using the Remez algorithm
  • Fast polynomial evaluation
  • Table-based methods
  • Bit manipulation (to make everything fast)
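To see how these pieces fit together, here is a minimal sketch of an exp2 for Float64. It is not the Base implementation: Taylor coefficients stand in for the minimax (Remez) coefficients a production version would use, and overflow, subnormals, and table lookups are ignored.

```julia
# Minimal exp2 sketch combining range reduction, a polynomial kernel
# evaluated with Horner's scheme (evalpoly), and bit manipulation.
function my_exp2(x::Float64)
    # Range reduction: split x = k + r with k an integer and r in
    # [-0.5, 0.5], so that 2^x = 2^k * 2^r.
    k = round(x)
    r = x - k

    # Polynomial kernel for 2^r = exp(r * log(2)) on the reduced range.
    # Coefficients are log(2)^n / n! (Taylor); a real implementation
    # would use Remez-optimized coefficients instead.
    c = ntuple(n -> log(2.0)^(n - 1) / factorial(n - 1), 10)
    p = evalpoly(r, c)  # fast Horner evaluation

    # Bit manipulation: construct 2^k exactly by writing the biased
    # exponent k + 1023 into the exponent field of a Float64.
    # (No handling of overflow or subnormal results here.)
    scale = reinterpret(Float64, (Int64(k) + 1023) << 52)
    return p * scale
end
```

The degree-9 Taylor kernel is already accurate to roughly 1e-11 relative error on the reduced range; swapping in Remez coefficients of the same degree buys several more bits of accuracy for free, which is one of the central points of the talk.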

We will also discuss how to test the accuracy of implementations using FunctionAccuracyTests.jl, and areas for future improvement in Base and beyond. Present and future areas of work on optimized routines include the Bessel functions, cumulative distribution functions, and optimized elementary functions for DoubleFloats.jl; PRs across the entire package ecosystem are always welcome.
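The core idea behind such accuracy testing can be sketched in a few lines: compare an implementation against a higher-precision reference and report the error in ulps (units in the last place). FunctionAccuracyTests.jl automates this over whole input ranges; the `ulp_error` helper below is a hypothetical name for just the core measurement, using BigFloat as the reference.

```julia
# Measure the error of f(x) in ulps against a high-precision reference.
# By default the reference is f itself evaluated in BigFloat arithmetic.
function ulp_error(f, x::Float64; ref = y -> f(big(y)))
    exact  = ref(x)   # BigFloat reference value
    approx = f(x)     # Float64 value under test
    # One ulp at `approx` is the gap to the next representable Float64.
    ulp = eps(abs(approx))
    return abs(Float64(big(approx) - exact)) / ulp
end
```

A correctly rounded function stays within 0.5 ulp of the exact value; well-tuned elementary functions in Base typically stay within about 1–1.5 ulp across their whole domain, which is exactly the kind of bound these tests verify.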
