Comparative Torts – Liability for AI, by Emmanuelle Lemaire

The BACL Annual Seminar, chaired by Professor TT Arvind (University of York), took place online on 6th September 2022 at the start of the Society of Legal Scholars’ Annual Conference at King’s College London. The video recording is available here. Professors Simon Chesterman, Bernhard Koch and Ugo Pagallo introduced and discussed current and prospective approaches to reforming tort law within and outside the European Union in light of the challenges presented by the regulation of AI-systems. Their discussion focused on identifying the policy objectives and driving factors behind the different tools employed by legal systems to regulate AI-systems, as well as on highlighting techniques and factors of legal development more specifically in the field of tort law.

Professor Bernhard Koch (University of Innsbruck) laid the ground for the subsequent discussion by examining the challenges that AI-systems currently pose to the existing liability regimes applied in EU Member States. Using the example of autonomous cars, he considered whether the existing liability regimes were sufficiently robust to extend to damage involving AI-systems. He first highlighted the diversity in the EU Member States’ regimes addressing damage caused by “conventional” cars, before explaining how these regimes might apply to damage caused by autonomous cars. A core challenge arising from liability for damage caused by autonomous cars is the identification of additional ‘players’, such as software developers, digital content providers and hackers, as they make the liability enquiry much more complex, whether in terms of chains of causes and effects or in relation to cases of distributed responsibility. The latter concern situations in which no single action is illegal or immoral per se but where the harmful outcome stems from the aggregate of actions. In these situations, it is difficult not only to pinpoint the precise source of the damage but also to attribute responsibility, since no single action may trigger liability on its own.

Despite the complexity of the legal inquiry, Professor Koch argued that accidents involving AI-systems could fall within the scope of existing liability rules, and that there was no compensation gap as such in domestic laws. He suggested, however, that their application might not lead to a desirable outcome and that adjustments would, in any case, be needed. Thus, liability regimes based on the fault of the driver will need to be adjusted simply because autonomous cars have no human driver. Strict liability regimes, such as that of the keeper of the car (or equivalent), will prove more robust when applied to autonomous cars because they are triggered by the ‘operation’ or ‘involvement’ of the car; but even then, they are also likely to require some adjustment. Finally, Professor Koch argued strongly in favour of reforming the strict liability regime of the producer, which derives from the EU Product Liability Directive, in order to clarify, among other things, whether ‘digital components’ are included in the definition of a ‘product’. In this respect, it is worth noting the recent publication, on 28 September 2022, of the EU proposal for a new product liability directive, which deems software and manufacturing files to be ‘products’ so that the Directive’s regime of strict liability can apply to them.

Professor Ugo Pagallo (University of Turin) then identified the diverse strategies considered at EU level to regulate AI-systems through tort law. He argued that tort law, as a regulatory tool, has primarily been employed to respond to situations of overuse and misuse of AI-systems, whereas situations of underuse of AI-systems should also be considered for regulation. He contrasted the reasons for deciding not to use AI-systems, such as public distrust or the lack of infrastructure or incentives, with the potential or existing losses due to technological underuse in various fields (e.g. the health sector). With situations of overuse and misuse of AI in mind, Professor Pagallo then distinguished between short-term and mid-term strategies to tackle the challenges posed by AI-systems. For this purpose, he drew upon his personal experience as a member of the high-level expert group set up by the European Commission to elucidate the challenges of ‘liability for AI and other emerging technologies’ (see the group’s report published in November 2019). He noted that the extension of current liability regimes was the first strategy favoured by the group to respond to the short-term challenges posed by AI-systems. The extension of duties of care and of the theory of agency, as well as changes regarding the burden, onus and standard of proof, were good illustrations of this strategy. However, he argued that the extension of existing doctrines was unlikely to respond to all the challenges posed to tort law in the short term. For example, in the case of hacking, the identity of hackers is often unknown and victims of cyberattacks may be left without compensation because of their inability to identify the tortfeasors. To address these specific challenges, Professor Pagallo outlined other solutions considered at EU level, such as the creation of a compensation scheme to bridge compensation gaps.

Turning to mid-term challenges, Professor Pagallo referred to more radical solutions contemplated at EU level, such as the possibility of adopting a new ‘electronic personhood’ for AI. While he recognised that this solution has been discarded within the EU (for now), he also pointed out that other legal systems have made a different choice: Delaware and China, for example, have already granted a new legal status to robots. Well aware of the controversy surrounding the question of electronic personhood for AI in European countries, Professor Pagallo laid out the reasons why such a radical and controversial proposal was even considered in the first place. He argued that the proposal was made in light of the specific challenges posed by ‘third-order technology’, a term he used to refer to technology that interacts with other technology in an environment itself determined by technology, as in the case of high-frequency trading systems. In the context of third-order technology, humans are no longer needed in the loop for the purpose of decision-making, and this is ‘a game-changer’. Additionally, situations of distributed responsibility are bound to increase, and attributing accountability in these situations is particularly difficult. Despite the immense challenges faced at EU level, Professor Pagallo concluded by pointing to the EU’s willingness to experiment in the legal field, as attested by the regulatory sandboxes on AI.

Finally, Professor Simon Chesterman (National University of Singapore) considered the experience of jurisdictions outside the EU in order to identify the policy factors underpinning the regulation of liability for damage caused by AI-systems. The key questions were why, when and how to regulate AI-systems. As to the ‘why’ question (why regulate?), Professor Chesterman examined the pros and cons of regulating AI-systems. The need to correct market failures, for example, supported regulation through tort law. The undesirable consequences of regulation, such as constraints on innovation or the loss of a competitive advantage, were reasons to refrain from regulating. Professor Chesterman contrasted the USA’s reluctance to regulate AI-systems for fear of constraining innovation with the EU’s strong appetite for regulating AI-systems in a bid to preserve consumers’ rights. By comparison, China’s position has changed in recent years, moving from its traditional ‘sovereign approach’ to fall somewhere between the US and European approaches, at least where data protection is concerned. In response to the ‘when’ question (when to regulate?), Professor Chesterman noted that the answer was subject to a careful balancing of the benefits and costs of regulating AI-systems, particularly in light of the need to support innovation. Addressing finally the ‘how’ question (how to regulate?), Professor Chesterman outlined three broad approaches to regulating AI-systems – the ‘management of risks’ approach, the ‘red line’ approach and the ‘process legitimacy’ approach – before considering the tools that tort law can offer. The difficulties in applying the tort of negligence or liability for defective products to situations involving AI-systems led him to contemplate alternative solutions, such as the adoption of no-fault liability regimes complemented by mandatory insurance.

The Q&A session chaired by Professor TT Arvind (University of York) provided the opportunity for the speakers to comment on each other’s presentations and to answer the questions raised by the audience. All three speakers agreed that while AI-systems pose great challenges, these challenges are certainly not insurmountable. In fact, one cannot help but feel reassured that current systems of liability appear sufficiently robust to adapt to most of the liability challenges posed by AI-systems. Liability regimes will require adjustments to cope with specific challenges, but truly innovative and radical solutions (such as the adoption of an electronic personhood for AI) do not yet appear to be needed. The discussion also made clear that we cannot hope for a ‘one-size-fits-all’ solution; indeed, the array of solutions available and contemplated in EU and non-EU legal systems is indicative of the distinct policy objectives pursued. The first policy choice consists in deciding whether or not to adopt new regulations for the use of AI-systems. Some jurisdictions, such as the USA and some Asian legal systems, appear content to wait before taking substantial regulatory steps, in order to privilege economic freedom and unconstrained innovation. Other jurisdictions, such as the EU, aim to get ahead of the legal challenges of tomorrow, thereby favouring the provision of legal certainty to victims (and potential tortfeasors!). In this respect, the recent proposals for two EU Directives, one on liability for defective products and the other on the adaptation of non-contractual civil liability rules to AI, both published on 28 September 2022, provide much-needed clarity on specific points, such as whether a digital component can be classified as a product.

Posted by Dr. Emmanuelle Lemaire (University of Essex).