Examples of discriminatory algorithmic recruitment of workers have triggered a debate on the application of the non-discrimination principle in the EU. Algorithms challenge two principles underpinning the system of evidence in EU non-discrimination law. The first is effectiveness: owing to algorithmic opacity, the parties in algorithmic discrimination cases do not have easy and unrestricted access to the facts needed to support their claims. The second is fairness, insofar as launching and conducting the evidentiary debate requires lifting the veil of algorithmic opacity: a colossal task that places unrealistic burdens of proof on claimants and respondents alike. Algorithmic discrimination thus seems impossible to prove and, consequently, falls outside the scope of application of EU non-discrimination law. Two solutions are proposed. First, as regards effectiveness, a right of access to evidence should be recognized for victims of algorithmic discrimination, through a joint reading of EU non-discrimination law and the General Data Protection Regulation. Second, to allocate the burden of proof more proportionately, the grounds of defence available to respondents could be extended to allow them to establish that biases were developed autonomously by an algorithm.