This paper investigates the perspectives of four large language models – namely, Llama, ChatGPT, Gemini, and Claude – on the European Union's regulation on Artificial Intelligence (AI Act). Through a series of semi-structured interviews, the study examines the concerns, sentiments, and viewpoints expressed by these AI systems regarding the regulatory framework established by the AI Act. The analysis employs text analysis techniques, including word cloud visualization and sentiment analysis, to identify prevalent themes, normative considerations, apprehensions, and perceived implications of the regulation across the different models. The findings reveal a spectrum of responses: some models express reservations about potential constraints on innovation and technological development, whereas others view the regulation as a constructive instrument for promoting responsible AI practices. By offering a distinctive insight into the perspectives of AI systems themselves, the paper highlights areas of alignment with regulatory objectives as well as factors that elicit caution or opposition. Overall, the study contributes to a deeper understanding of AI systems and of regulatory approaches to technological markets, situated in the complex interplay between innovation and societal safeguards.
European Business Law Review