Empirical evidence is mounting that artificial intelligence applications threaten to discriminate against legally protected groups. This raises intricate questions for EU law. The existing categories of EU anti-discrimination law do not provide an easy fit for algorithmic decision-making. Furthermore, victims will not be able to prove their case without access to the data and the algorithmic models. Drawing on a growing computer science literature on algorithmic fairness, this article suggests an integrated vision of anti-discrimination and data protection law to enforce fairness in the digital age. It shows how the concepts of anti-discrimination law may be combined with algorithmic audits and data protection impact assessments in an effort to unlock the algorithmic black box.