Fair machine learning frameworks are normative models that specify and guide the implementation of non-discrimination principles in machine learning (ML) systems. The dominant methodological approach involves (i) defining a fairness metric whose optimal value constitutes a target, an end-state of "ideal fairness", and (ii) applying a "bias mitigation" method that improves the system against this metric. Recent works have leveled severe critiques at existing proposals in fair ML, attributing many of the alleged shortcomings to this dominant "idealized" methodology. These charges echo critiques of so-called "ideal theory" in political philosophy. I review methodological critiques of fair machine learning and contextualize them against the background of the "ideal theory" debate, drawing lessons for "nonideal" approaches to fair machine learning.
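As a concrete illustration (not drawn from the paper itself), step (i) of the methodology described above can be sketched with one commonly used fairness metric, demographic parity. The function name and the toy data below are hypothetical; the ideal value of 0.0 plays the role of the "ideal fairness" end-state, which a bias-mitigation method in step (ii) would then try to approach.

```python
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between two groups (0 and 1).

    A value of 0.0 is the metric's "ideal fairness" target: both groups
    receive positive predictions at the same rate.
    """
    rate = {}
    for g in (0, 1):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(group_preds) / len(group_preds)
    return abs(rate[0] - rate[1])

# Toy example: binary model decisions and protected-group membership.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

# Group 0 is selected at rate 0.75, group 1 at rate 0.25.
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A "bias mitigation" method in this framework would adjust the model (by pre-, in-, or post-processing) so that this gap moves toward 0.0; the critiques the paper reviews target precisely this practice of optimizing toward such an idealized end-state.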
|Number of pages||1|
|Publication status||Published - 27 Jul 2022|
|Event||AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society - Oxford, United Kingdom|
|Duration||1 Aug 2022 → 3 Aug 2022|