While the Artificial
Intelligence (AI) race is raging on the world market, several voices have
called for effective regulation of this multifaceted technology and its
ever-increasing capabilities. In this context, the European Union has recently
adopted the first-ever comprehensive, binding law on AI, known as the EU AI
Act. This instrument has a dual rationale. On the one hand, it is part of the
European product safety policy and regulates AI systems and models placed on
the internal market according to a risk-based approach. On the other hand, the
AI Act makes the protection of fundamental rights against the harmful effects
of AI a primary objective. Against this background, this article aims to test
this hybrid rationale based on the novel concept of ‘risk (of harm) to
fundamental rights’ introduced in the Act. This concept seems to combine the classic
product safety risk-based approach with an emancipated version of the human
rights-based approach which originates from the fields of development and
international human rights. This article argues that, in so doing, the AI Act
may shape a new ‘human rights risk-based approach’, which incorporates the
protection of European public interests and extends their scope by translating
them into the language of fundamental rights and values. The article explores
the legal consistency and operational realization of this approach. First, it
maps the protection of fundamental rights in the letter of the AI Act and
offers an interpretation of its ratio legis in the light of the fundamental
rights protection narrative. Second, it assesses the ways in
which the protection of fundamental rights could be successfully
operationalized under the AI Act and makes concrete proposals to ensure the
effectiveness of the fundamental rights risk-based approach in the AI context.