News

Artificial intelligence: When robots conclude contracts

29.08.2017

The increasing digital transformation of production processes, goods and services – the ‘fourth industrial revolution’ – is ubiquitous. The networking of machines, warehouse systems and operating resources within companies and beyond them, via the Internet of Things, harbours great economic potential and is becoming an increasingly important success factor. (Production) processes are made faster, daily life is simplified and general safety is increased by limiting human error. One aim is the ‘smart factory’, in which the product being manufactured carries its own manufacturing information, communicates with production resources in the factory and thus controls its own manufacturing process. Artificial intelligence enables the networked objects to interact with each other in ‘M2M communication’. This increasingly extends to the automated and autonomous conclusion of contracts, which can relate to the means of production, spare parts or maintenance services, for example.

Integration into the framework of contract law

However, it is hard to integrate machine contract conclusion into the framework of contract law. Contracts are based on declarations of intent which under current law can only be given by humans. It is therefore crucial whether the operation of the machine is classified as a declaration of a human (in the background) or can at least be attributed to a human. A distinction must be made between declarations by automated systems on the one hand and autonomous systems on the other.

Contract conclusion in automated systems

Automated systems are deployed in many variants. For example, an internet-enabled fridge can independently order milk as soon as stocks run low, and in automatic warehouse systems, re-orders can be placed on an ongoing basis. An automated online store independently checks stock levels when an order is received. Only after a positive result of an internal checking process does the system accept the contract in a legally binding manner.

All these automated systems have one thing in common: the criteria and conditions for an action by the system are pre-set by the user via the device or software. If the relevant criteria are met, the system responds independently and submits a declaration to the contractual partner. The result, and thus the content of the declaration, is therefore pre-determined. Although the computer system submits the declaration itself, without parallel human involvement, the declaration is generated and electronically transmitted according to predefined parameters.
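To make the pre-determined character of such declarations concrete, the following minimal sketch shows a pre-set re-ordering rule. The threshold, quantity and function names are purely hypothetical illustrations and are not taken from any specific system.

    # Minimal sketch of an automated re-ordering rule. All names and values
    # (REORDER_THRESHOLD, REORDER_QUANTITY, place_order) are hypothetical.
    REORDER_THRESHOLD = 2   # pre-set by the user: re-order when fewer than 2 units remain
    REORDER_QUANTITY = 6    # pre-set by the user: always order exactly 6 units

    def place_order(item: str, quantity: int) -> None:
        # Stands in for the electronic transmission of the declaration
        # to the contractual partner.
        print(f"Ordering {quantity} x {item}")

    def check_stock(item: str, current_stock: int) -> None:
        # The system decides only whether and when to act; the content of the
        # declaration (item and quantity) follows entirely from the pre-set parameters.
        if current_stock < REORDER_THRESHOLD:
            place_order(item, REORDER_QUANTITY)

    check_stock("milk", current_stock=1)  # criterion met: triggers an order of 6 units

The user fixes every parameter of the eventual declaration in advance; the system merely determines when the declaration is released.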

In these cases, the result is regarded as a computer declaration which, through the user's input of the data, meets all the requirements of a ‘normal’ declaration of intent made in person: an intention to act, awareness of making a declaration and an intention to carry out a transaction. As long as the user, at the time of commissioning and configuring the system, has the general intention to bring about a legal consequence and simply leaves the system to determine the moment and the specific circumstances, such as the number of products to be ordered, the corresponding declaration can be attributed to the user, since the declaration process is merely extended by the division of labour. The declaration also appears attributable where the user, upon commissioning the system, was merely able to recognise and avoid that their action would lead to a subsequent declaration of intent.

Overall, automated declarations can be dealt with under existing legal principles.

Conclusion of contracts in autonomous systems

In contrast, autonomous systems are ‘intelligent’ in that they act in a targeted manner and are capable of learning, and can interact with other intelligent systems. They are able to expand their knowledge and thus change existing rules independently.

The content of a contract concluded via an autonomous system can thus be the result of a complex calculation process. While the user of an automated system knows, or in any case can know, the specific content of his declaration, or at least the key parameters of it, this is specifically not the case with autonomous systems. This applies especially when the range of possible declarations is increased by interaction with other systems and other extrinsic factors, and the content of the specific declarations is therefore not foreseeable. 
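By way of contrast with the automated sketch above, the following purely hypothetical sketch illustrates why the content of an autonomous declaration is not foreseeable: the quantity ordered is derived from a parameter the system itself adjusts on the basis of its interactions, so the user cannot fix the content in advance. The class, the update rule and the figures are invented for illustration only.

    # Hypothetical sketch of a learning ordering agent; the update rule and all
    # figures are invented for illustration.
    class AutonomousBuyer:
        def __init__(self) -> None:
            self.expected_daily_demand = 1.0  # learned parameter, changes with experience

        def observe_consumption(self, units_used: float) -> None:
            # Simple exponential update: the system revises its own decision basis.
            self.expected_daily_demand = 0.8 * self.expected_daily_demand + 0.2 * units_used

        def decide_order(self, days_to_cover: int) -> int:
            # The content of the declaration emerges from the learned state,
            # so the user cannot specify it when commissioning the system.
            return round(self.expected_daily_demand * days_to_cover)

    buyer = AutonomousBuyer()
    for usage in (2.0, 3.5, 5.0):   # interaction with the environment
        buyer.observe_consumption(usage)
    print(buyer.decide_order(days_to_cover=7))   # quantity depends on the learned history

Even in this deliberately simple form, the eventual order quantity depends on a history of interactions rather than on parameters fixed at commissioning.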

Autonomous declarations cannot therefore easily be attributed to a corresponding act of will by the system user. At the same time, autonomous systems have no legal personality of their own in the current legal framework, and therefore do not have legal capacity and cannot submit their own declarations of intent either. However, for the effective conclusion of a contract, each declaration by such a system must be attributable to a legal or natural person. There is considerable legal uncertainty about the issue of attribution of autonomous declarations.

Possible solutions

Various approaches to solving the attribution issue are being discussed by experts in German jurisprudence on the basis of existing legal concepts.

Agency

One approach is to classify the autonomous system as an agent under section 164 et seq. of the German Civil Code. To do so, the system would have to submit a declaration of intent in the name of the user within the power of agency granted to it. The commissioning of the device could be seen at least as an implied granting of a power of agency.

The prerequisite for agency, however, is that the agent itself submits its own declaration of intent and is legally competent at least in a limited form. The system, however, specifically does not have its own legal personality. Analogous application is unconvincing too: for reasons relating to the protection of the general public, German law provides for personal liability of an unauthorised agent (section 179 German Civil Code). The system itself cannot, however, be made liable because it does not have any assets from which compensation claims could be satisfied.

Capacity as messenger/offerta ad incertas personas

At first glance, the autonomous system might be classified as a messenger, since a messenger is not required to have legal capacity. However, the activity of a messenger is always limited to simply transmitting a declaration of intent made by another party, whereas the autonomous system itself determines the specific content of the declaration. A comparison with the construction applied to vending machines, the offerta ad incertas personas, does not fit either: by setting up and operating the machine, the operator of a vending machine submits an offer to an undefined group of people, limited to the available stock and the listed prices. In an autonomous system, by contrast, no pre-produced declarations of intent are stored.

Blank form declarations

In a blank form declaration, the declaring party consciously hands over an incomplete but signed declaration of intent, which a third party then completes. The declaring party has little or no influence on the final content of the declaration. The declaring party acts consciously and wants to bring about a legal consequence, regardless of the point in time; at the time of his last action, however, he has no intention directed towards a specific legal consequence. Yet such a declaration is still attributed to the declaring party according to the prevailing opinion. The situation is certainly comparable with the use of autonomous systems: in both situations, the declaring party consciously gives up the power to decide on the final content and accepts that the final declaration may not correspond to his wishes.

It must, however, be taken into account that the use of autonomous systems entails a high risk of abuse. Hacking, viruses or Trojans can impair the system and call into question whether the declaration can always be attributed to the user as a general rule. Even if one wants to impose this risk on the user, a further difference remains: where a blank form declaration is filled out contrary to the agreement, the declaring party normally has compensation claims against the party completing it. Such claims fail in the case of autonomous systems, as the systems have no legal personality. Consumer protection also plays a role if autonomous systems are used in a non-commercial environment. Here there is a risk that users are subject to far-reaching responsibility although they have insufficient knowledge of the technical possibilities and risks.

Contract drafting and recommendations

If automated and/or autonomous systems are used in the context of existing legal relationships, the uncertainty about the attribution of declarations can in particular be countered with appropriate contract drafting. The parties should define exactly when and in what circumstances a contract arises between those involved; offer and acceptance in particular must be clearly defined. Doubts about the attributability of declarations can be countered by a precise description of the systems' scope for action.

It is also essential, especially on the provider side, to record all procedures comprehensively in order to prevent difficulties with evidence. How errors are dealt with is also very important, for example if the system orders 1,000 spare parts instead of only one. Contract law provides for contestation on grounds of error, but this is often inappropriate in such cases, as it presupposes that the error was triggered by a mistake on the customer's side. In complex systems, however, the trigger is often not clearly identifiable, because it may be rooted in erroneous data or in the calculation itself, for example. It is therefore crucial to check the validity of processes in the system regularly. From a contractual perspective, a mechanism can also be designed to let the customer exit the contract more easily.
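Purely as an illustration of the kind of regular validity checks and record-keeping recommended here, the following sketch combines a plausibility limit with an audit log; the limit, the log format and the function names are hypothetical choices, not requirements of any particular law or system.

    # Illustrative sketch only: a plausibility limit and an audit log for outgoing
    # orders. MAX_PLAUSIBLE_QUANTITY and the log format are hypothetical choices.
    import logging

    logging.basicConfig(filename="orders.log", level=logging.INFO)

    MAX_PLAUSIBLE_QUANTITY = 10  # orders above this limit are held back for human review

    def submit_order(item: str, quantity: int) -> bool:
        if quantity > MAX_PLAUSIBLE_QUANTITY:
            # Do not transmit the declaration; record the incident as evidence.
            logging.warning("Order blocked for review: %s x %s", quantity, item)
            return False
        logging.info("Order submitted: %s x %s", quantity, item)
        return True

    submit_order("spare part", 1)      # passes the check and is logged
    submit_order("spare part", 1000)   # held back instead of being sent automatically

Such a safeguard does not resolve the legal question of attribution, but it limits the practical consequences of an implausible declaration and preserves evidence of what the system actually did.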

