Three Liability Regimes for Artificial Intelligence – Algorithmic Actants, Hybrids, Crowds (Bloomsbury 2022), by Anna Beckers and Gunther Teubner 

Who is liable when autonomous digital systems go astray and cause significant damage? Who should bear the consequences when algorithmic interaction on financial markets causes so-called flash crashes? Who is responsible when robo-advisors give wrong investment advice? And how should liability be determined when machines interact so closely with humans that it is no longer possible to delineate individual actions?

In our recent book Three Liability Regimes for Artificial Intelligence, we argue that liability rules should be determined in accordance with the socio-digital institutions that evolve in the interaction between humans and machines. We develop institutionally responsive liability rules with an eye to the comparative differences between national legal systems.

1. Law’s responsiveness to socio-digital institutions – this is the main normative claim that we make in this book. Rather than translating the technological features of machines directly into the law, we argue that the regulation of new technologies needs to be responsive to the social institutions in which human-machine encounters take place. Jack Balkin has put this very pointedly: ‘What we call the effects of technology are not so much features of things as they are features of social relations that employ those things’. This perspective on socio-digital institutions allows us to overcome several fault lines in the debate on the law of AI, in particular the dichotomy between classifying AI as a mere tool or as a fully-fledged e-person. Instead of either assigning AI full personhood or treating it as a mere object, we suggest calibrating the legal status of AI and the applicable liability rules to one of three socio-digital contexts, which we term “digital assistance”, “digital hybridity” and “digital interconnectivity”.

When autonomous digital technology substitutes for human decisions, we are in the institutional context of digital assistance – human principals delegate tasks to autonomous digital agents. Digital assistance is grounded in the long-standing social institution of human representation, which is now increasingly used in the online world. The specific risk here is that the autonomous digital agent may overstep the limits of its authorization. This calls for rules that can deal with unauthorized digital agency and provide for a fair assessment of the risks. Applying agency law and vicarious liability is the answer. This area of private law already provides a comprehensive body of rules concerning liability for an agent who issues declarations or breaches a duty of care. These rules need to be adapted to a digital reality in which it is not humans but digital agents that act. To apply the rules on agency and vicarious liability, it is necessary to confer limited legal personality on the digital agents.

Digital hybridity denotes institutions in which human actions and machine operations are so closely intertwined that their individual contributions can no longer be delineated. In information technology, this type of behaviour is called hybrid machine behaviour. As a socio-digital institution, digital hybridity qualifies as a quasi-organisational association in which humans and machines participate. The specific risk of organizational hybrids stems from the impossibility of delineating the individual actions of the machines and the humans. As a result, a responsive liability law will have to attribute the action to the collective and then distribute liability among the central actors behind the association. Here we suggest applying the principles of enterprise liability.

Digital interconnectivity is what information technology studies describe as collective machine behaviour. Machine operations are interconnected to such an extent that they become interdependent. From the perspective of society, such interdependent machine operations are invisible and incomprehensible: machines operate within the medium of information, not meaning. Yet society is regularly exposed to this machine interconnectivity, which renders the indirect exposure to invisible machines a socio-digital institution in its own right. Its specific risks are due to the invisibility and unpredictability of the interconnected machine operations. What is needed is a liability law in a position to handle the exposure to machine interconnectivity. Given the impossibility of delineating individual or collective responsibility, it will have to decree risk pools by law. We suggest that the interconnectivity risk requires a fund solution that covers the damages and engages in recovery actions. Such funds need to be set up by those participating in the risk pool.

Digital assistance, hybridity and interconnectivity each require a different liability regime. At the same time, the algorithms acquire a differential legal status: for vicarious liability, the algorithms need partial legal personhood; for enterprise liability, which applies to human-machine associations, the machine becomes a member of the association; for the collective fund solutions needed for machine interconnectivity, the algorithms remain mere objects as parts of the risk pool. Only such a differentiated functional approach will respond adequately to the sector-specificity of digital action.

2. Comparative Sociological Jurisprudence – Our proposals for an institutionally responsive liability law are functional in nature: our main argument that the law needs to be responsive to socio-digital contexts and their specific social risks corresponds with functional approaches that are well established in the field of comparative law. Following Hugh Collins, we need to conduct comparative sociological jurisprudence. This brings challenges, however.

Responding to socio-digital institutions in this way seems quite simple at first sight but becomes rather challenging upon closer inspection. On the conceptual level, our suggested liability rules – vicarious liability, enterprise liability, fund solutions – do exist across legal systems. All national liability laws comprise rules that establish liability for dangerous things and for the wrongful acts of agents; all systems recognize variations of organizational and enterprise liability, and all rely on fund and insurance solutions in some situations. With our institutional perspective, we are thus able to connect our proposals to a variety of legal systems.

However, when moving from the conceptual level to working out the nitty-gritty doctrine, comparative sociological jurisprudence becomes a very complex undertaking. To take just one example from our book: the application of vicarious liability and agency law is possible in a variety of legal systems, but its exact contours differ. In our analysis of German law and the common law, one difference concerns the rules on contract formation involving digital agents. German law places the emphasis on the subjective intent of the person issuing a declaration, with the courts more recently starting to consider objective factors for identifying that intent. In the common law, the objective interpretation of declarations and contracts is the norm. Given this different focus on the subjective versus objective construction of intent, the question of whether an electronic agent can issue a binding declaration is answered differently. For German law, we needed to discuss the objectivization tendencies and the courts’ emphasis on the contextual factors surrounding the algorithmic declaration. For the common law, the main point of discussion was whether the algorithmic decision qualifies as a declaration in the eyes of an objective ‘reasonable observer’.

We also encountered significant differences in the academic discussions on AI liability. In some legal systems, notably the US and Germany, academics and practitioners pursue a vibrant and controversial debate; English law, by contrast, seems only to have started it. Our arguments on the three liability regimes for artificial intelligence therefore had to be framed as a novel contribution to an already advanced doctrinal debate in some legal systems, while being only a first suggestion in others.

Posted by Anna Beckers (Maastricht University) and Gunther Teubner (Goethe University Frankfurt)

Order online at www.bloomsbury.com – use the code GLR A6AUK for UK orders and GLR A6AUS for US orders to get 20% off!