Abstract: In recent decades, the growing use of predictive, data-driven algorithms has posed numerous ethical challenges for society, calling for urgent reassessment of our models and the outcomes they produce. In light of the grave social injustices caused by algorithmic decision-making, algorithmic fairness has become a pressing topic and an emerging area of research. Despite the tremendous progress made in this space, however, most existing accounts define algorithmic fairness as a mere statistical means of producing fair outcomes. This makes them prone to conflating distinct notions of fairness and vulnerable to the very problems they aim to solve. For this reason, we ought to understand algorithmic fairness as an ends-driven definition: one that appeals to acceptable standards of justice and is instrumental in generating and employing adequate statistical methods. Only such an account will guarantee the kind of outcomes we can reasonably accept as fair and capture the true point and motivation of algorithmic fairness. The goal of this paper is to distinguish between procedural and substantive fairness in the context of algorithms; to demonstrate how statistical definitions miss the point of algorithmic fairness; and to provide a more compelling, ends-driven account that is grounded in American anti-discrimination law and supported by Rawlsian principles of distributive justice.

“…By myopically focusing on quantitative properties of an algorithm in a single, static context, we are ignoring aspects of the problem that are vital to understanding fairness: the downstream and upstream effects of the decisions that we make. After all, when we use algorithms not just to make predictions but to make decisions, they are changing the world in which they operate, and we need to take into account such dynamic effects in order to talk sensibly about something like ‘fairness’…”

The Ethical Algorithm
(Kearns & Roth, 2019)