This talk introduces Fairness.jl, a toolkit for auditing and mitigating bias in ML-based decision support tools. We shall first introduce the problem of fairness in ML systems: its sources, significance, and challenges. We will then demonstrate the structure and workflow of Fairness.jl; a minimal sketch of the audit side of that workflow is shown below.
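The sketch follows the fairness-tensor API from the package documentation; the predictions, labels, and group names are purely illustrative, and exact exported names may vary across package versions.

```julia
# A minimal sketch of a Fairness.jl audit, assuming the documented
# fair_tensor-based API; all data values here are illustrative.
using Fairness, CategoricalArrays

# Model predictions, ground truth, and the protected attribute per example.
ŷ   = categorical([1, 0, 1, 1, 0])
y   = categorical([0, 0, 1, 1, 1])
grp = categorical(["Asian", "African", "Asian", "American", "African"])

# The fairness tensor collects group-wise confusion-matrix counts
# in a single object that the metrics operate on.
ft = fair_tensor(ŷ, y, grp)

# Standard rates can then be computed overall or for a single group.
fpr(ft)               # false positive rate over all groups
fpr(ft; grp="Asian")  # false positive rate for one group
```

Comparing such per-group rates against a reference group is what exposes disparate treatment, and it is the starting point for the mitigation algorithms the talk covers.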
Machine learning now drives many crucial decision support tools, with uses ranging from granting parole and shortlisting job applicants to approving credit applications. Numerous political and policy developments over the past year have highlighted transparency issues and bias in these ML-based tools, so it has become crucial for the ML community to think about fairness and bias. Eliminating bias is not easy, however, because of the trade-offs involved: performance versus fairness, and fairness versus fairness (different definitions of fairness may be mutually incompatible).
In this talk we shall discuss:
Repository: Fairness.jl
Documentation is available here, and an introductory blog post is available here.