The consensus from the
techno-legal literature is that the standing provisions of the General Data
Protection Regulation (GDPR) do not offer meaningful protection against legal
harms arising from the algorithmic inference of psychological traits. However,
this literature presupposes that the computational processes of inferring and
collecting personal data deserve separate legal treatment.
This opinion makes the provocative argument that, despite being computationally
distinct, these two processes must be treated as legally equivalent and that,
accordingly, algorithmic inferences must be subjected, inter alia, to the
rigours of data minimization in the same way as the collection of personal data
within the GDPR. In doing so, this opinion takes a first-principles
approach to furnish the necessary taxonomy and conceptual underpinnings to
ground the legal logic behind the recent decision of the Court of Justice of
the European Union (CJEU) in Maximilian Schrems v. Meta Platforms.