Artificial Intelligence (AI) is becoming increasingly prevalent in our daily lives. Although AI systems bring many benefits, several legal and regulatory challenges remain. A field that has already attracted much attention is extra-contractual and product liability for damage involving AI systems. Accidents involving self-driving vehicles or surgical robots are surely one reason why tort and product liability are increasingly being discussed. Another reason relates to the intrinsic characteristics of AI, such as opacity, autonomy, connectivity, data dependency and self-learning abilities, which make it difficult to trace potentially problematic decisions made with the involvement of such systems. Attention to this academic field will only increase following the recent proposal by the European Commission (EC) for new liability rules for AI. Instead of focusing on one specific element, giving a rather general overview of the impact of AI on tort law, or discussing the new rules, this article takes a more conceptual perspective and shows that each policy decision regarding tort liability for damage involving AI in fact entails several additional choices. We illustrate this with two use cases: the allocation of the burden of proof in an AI context on the one hand, and the adoption of strict liability regimes on the other. This starting point is important because normative recommendations and proposals risk being one-layered and, consequently, too ‘simple’ or unrealistic to implement directly. Our research emphasizes the ‘multi-faceted’ reality of any proposal to adopt a new liability regime or to modify existing rules.