What's (Not) Ideal about Fair Machine Learning?

Research output: Other conference contribution › Abstract › Scientific


Fair machine learning frameworks are normative models that specify and guide the implementation of non-discrimination principles in machine learning (ML) systems. The dominant methodological approach involves (i) defining a fairness metric, whose optimal value constitutes a target, an end-state of "ideal fairness", and (ii) applying a "bias mitigation" method that improves the system against this metric. Recent works have leveled severe critiques at existing proposals in fair ML, attributing many alleged shortcomings to the dominant "idealized" methodology therein. These charges echo critiques of so-called "ideal theory" in political philosophy. I review methodological critiques of fair machine learning and contextualize them against the background of the "ideal theory" debate, drawing lessons for "nonideal" approaches to fair machine learning.
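To make step (i) concrete, the following is a minimal sketch of one commonly used fairness metric, the demographic parity difference. The abstract does not name any particular metric; this choice, the function name, and the data below are illustrative assumptions only.

```python
# Illustrative sketch (not from the abstract): demographic parity difference,
# one example of a fairness metric whose optimal value (here, 0) would
# constitute an end-state of "ideal fairness".
def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical binary predictions for members of two groups, "A" and "B".
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # prints 0.5
```

A "bias mitigation" method, step (ii), would then adjust the data, the training procedure, or the predictions so as to drive this value toward zero.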
Original language: English
Number of pages: 1
Publication status: Published - 27 Jul 2022
Publication type: Not Eligible
Event: AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society - Oxford, United Kingdom
Duration: 1 Aug 2022 - 3 Aug 2022


Conference: AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society
Abbreviated title: AIES
Country/Territory: United Kingdom


