AI regulation is one of today's hot topics. In the EU, the European Commission and the European Parliament have proposed introducing strict liability rules for operators of high-risk AI systems. To create a suitable liability regime, we must consider what makes AI systems different from their non-AI counterparts. In this article, we identify AI's novel approach to problem-solving and its potential for (semi-)autonomous decision-making as key issues for liability. However, deploying AI will not necessarily prove riskier than the human alternative; on the contrary, AI systems might actually be safer. Introducing strict liability is usually justified when the regulated activity poses an inherent risk despite the application of reasonable care. This sits uneasily with the generally safer use of AI: the dangers AI poses for liability do not necessarily coincide with the inherently riskier situations that strict liability regimes typically regulate. We argue that when formulating a liability regime for AI, we need to consider which aspects of AI prove particularly challenging for liability. More specifically, we need to evaluate whether introducing strict liability for specific AI systems is always appropriate, especially given that deploying AI does not necessarily pose the inherent risks usually regulated by strict liability regimes.

By Miriam Buiten & Jennifer Pullen[1]

 

I. OPEN QUESTIONS ON LIABILITY
