Bias Audit and Mitigation in Julia

07/29/2021, 1:40 PM – 1:50 PM UTC


This talk introduces Fairness.jl, a toolkit for auditing and mitigating bias in ML decision support tools. We will introduce the problem of fairness in ML systems (its sources, significance, and challenges) and then demonstrate the structure and workflow of Fairness.jl.


Machine learning is involved in many crucial decision support tools, with uses ranging from granting parole and shortlisting job applicants to accepting credit applications. Numerous political and policy developments over the past year have highlighted the transparency issues and bias in these ML-based decision support tools, so it has become crucial for the ML community to think about fairness and bias. Eliminating bias isn't easy because of the trade-offs involved: performance vs. fairness, and fairness vs. fairness (different definitions of fairness may be mutually incompatible).
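To make the fairness-fairness trade-off concrete, here is a toy, hand-rolled illustration (not the Fairness.jl API; the data is synthetic) of two common criteria, demographic parity and equal opportunity, computed on the same predictions. The predictions satisfy one criterion exactly while violating the other:

```julia
# group: protected attribute (1 = group A, 2 = group B)
# y:     true label, ŷ: model prediction (1 = favourable outcome)
group = [1, 1, 1, 1, 2, 2, 2, 2]
y     = [1, 0, 1, 0, 1, 1, 0, 0]
ŷ     = [1, 0, 1, 0, 1, 0, 1, 0]

# Demographic parity compares P(ŷ = 1) across groups.
rate(g) = sum(ŷ[group .== g]) / count(==(g), group)
dp_gap = abs(rate(1) - rate(2))

# Equal opportunity compares true-positive rates across groups.
function tpr(g)
    idx = (group .== g) .& (y .== 1)
    return sum(ŷ[idx]) / count(idx)
end
eo_gap = abs(tpr(1) - tpr(2))

println("demographic parity gap: ", dp_gap)  # 0.0 — parity holds
println("equal opportunity gap:  ", eo_gap)  # 0.5 — opportunity violated
```

Both groups receive the favourable outcome at the same rate, yet the true-positive rates differ by 0.5, so no post-hoc threshold tweak can satisfy both criteria simultaneously on this data.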

In this talk we will discuss:

  • Challenges in mitigating bias
  • Metrics and fairness algorithms offered by Fairness.jl and the workflow with the package
  • How Julia's package ecosystem (MLJ, Distributed) helped us perform a large systematic benchmark of debiasing algorithms, which in turn helped us understand their generalization properties.
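The workflow mentioned above can be sketched as follows. This is a hedged sketch based on the package's documented MLJ-style interface at the time of the talk: the names `@load_compas`, `ReweighingSamplingWrapper`, `fair_tensor`, `disparity`, and `FalsePositiveRate` are taken from the Fairness.jl documentation and may differ in current releases.

```julia
using Fairness, MLJ

# Load the COMPAS recidivism dataset bundled with Fairness.jl.
X, y = @load_compas

# Any MLJ classifier can serve as the base model.
model = @load(RandomForestClassifier, pkg = "DecisionTree")()

# Wrap the classifier in a pre-processing debiasing algorithm,
# declaring :race as the protected attribute.
wrapped = ReweighingSamplingWrapper(classifier = model, grp = :race)

mach = machine(wrapped, X, y)
fit!(mach)
ŷ = predict_mode(mach, X)

# Audit: build a fairness tensor, then inspect group-wise disparity
# of a metric relative to a reference group.
ft = fair_tensor(ŷ, y, X[!, :race])
disparity([FalsePositiveRate()], ft; refGrp = "Caucasian")
```

Because debiasing algorithms are wrappers around ordinary MLJ models, they compose with MLJ's evaluation and resampling machinery, which is what made the large-scale benchmarking described above practical.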

Repository: Fairness.jl

Documentation is available here, and an introductory blogpost is available here.
