Chatbots – The Spirits That I Summoned…

Back in 1966, MIT professor Joseph Weizenbaum developed a comparatively simple program called ELIZA, which performed natural language processing. ELIZA was initially published to show the superficiality of communication between man and machine but ended up surprising a considerable number of individuals, who attributed human-like feelings to the computer program.

Half a century later, chatbots are technically advanced enough to appeal to a broad audience and are increasingly used to handle customer communications, operating in the absence of a clear legal framework for their use.

But can a chatbot make a legally binding declaration of intent on behalf of a company, given that declarations under the law can only be made by natural persons or legal entities?

There is broad legal consensus that – at least for automated chatbots – this is practically a non-issue, as the declaration of a chatbot can always be attributed to its operator. With automated chatbots, declarations of intent are generated on the basis of predefined settings; they are so-called computer declarations, which may not be explicitly regulated by law but are nevertheless legally binding.
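To make the notion of a computer declaration more concrete, the following minimal sketch (in Python, with purely hypothetical products and prices) shows an automated chatbot whose every possible reply is fixed in advance by its operator; any declaration it issues can therefore be traced directly back to the operator's predefined settings.

```python
# Hypothetical sketch of an automated chatbot: every "declaration" it can make
# is fixed in advance by the operator's predefined settings.

PREDEFINED_OFFERS = {
    "standard plan": "We offer the standard plan at EUR 10 per month.",
    "premium plan": "We offer the premium plan at EUR 25 per month.",
}

def automated_reply(customer_message: str) -> str:
    """Return a predefined declaration, or a fallback if no rule matches."""
    text = customer_message.lower()
    for keyword, declaration in PREDEFINED_OFFERS.items():
        if keyword in text:
            return declaration  # output fully determined by the operator's settings
    return "Please contact our support team for further assistance."

print(automated_reply("How much is the premium plan?"))
```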

Although the will to act, which is necessary for a legally binding declaration of intent, is not present at the moment a computer declaration is generated, the required intent is supplied by the operator's activation of the chatbot.

Legal scholars have in fact construed all requirements for a legally binding declaration of intent as being present: (a) awareness of intention and (b) the will to engage in a transaction. Due to the automation, both requirements may be absent at the moment a chatbot generates a declaration of intent; ultimately, though, they are both satisfied, since they can be traced back to the human operator.

Liability in the Age of Autonomy

The Sorcerer’s Apprentice, illustration by Ferdinand Barth, 1882.

The above construction of a computer declaration, however, reaches its limits with autonomous chatbots. In contrast to automated chatbots, autonomous chatbots make decisions using self-learning algorithms. Here, artificial intelligence is at work: the operator no longer has any direct influence on the results and, as a rule, cannot even verify the decisions that are made.
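By way of contrast, the following hypothetical sketch (again in Python, with made-up weights and replies) illustrates the autonomous case: the reply is selected on the basis of parameters assumed to have been learned from past conversations rather than rules authored by the operator, and a stochastic element makes individual outputs hard to reproduce or verify.

```python
import random

# Placeholder weights, assumed to come from an earlier training run on past
# conversations; the operator did not hand-craft them.
LEARNED_WEIGHTS = {"discount": 0.8, "refund": -0.3, "upgrade": 0.5}

CANDIDATE_REPLIES = [
    "We can offer you a 20% discount on your next order.",
    "Unfortunately, a refund is not possible in this case.",
    "We recommend an upgrade to the premium plan.",
]

def autonomous_reply(customer_message: str) -> str:
    """Pick the candidate reply with the highest learned score for this message."""
    text = customer_message.lower()

    def score(reply: str) -> float:
        # Reward replies that address topics found in the customer's message,
        # weighted by the learned parameters.
        s = sum(weight for token, weight in LEARNED_WEIGHTS.items()
                if token in text and token in reply.lower())
        return s + random.uniform(0.0, 0.1)  # stochastic tie-breaking

    return max(CANDIDATE_REPLIES, key=score)

print(autonomous_reply("Can I get a refund or at least a discount?"))
```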

Against this background, the correlation between the actions of the system operator and those of the chatbot no longer seems satisfactory, and consequently the principles of the computer declaration no longer apply.

At present, autonomous systems are still in an early phase of development, so this restriction has little practical relevance. However, this is bound to change sooner rather than later and will require legislative adjustments.

One of the main issues to be addressed here is whether a tortious act performed by the chatbot is due to human error, for example the incorrect programming of the chatbot. While with automated chatbots it seems possible to attribute the tortious act to its actual cause, this becomes increasingly difficult to prove as chatbots grow more autonomous.

In questions of liability relating to the use of chatbots and similar systems, the injured party faces the problem of having to prove a possible breach of duty or a system error. With the increasing complexity of such systems, this is a considerable obstacle for an injured party seeking to assert its claims successfully.

For this reason, some argue that the burden of proof should be shifted to the manufacturer or operator of the system: the manufacturer or operator would then have to prove that there was no misconduct on their part and that they exercised proper diligence in programming and operating the system.

A so-called objective (strict) liability is also being considered in connection with automated systems. The complexity of such systems creates a liability gap, as their “actions” can no longer easily be attributed to a natural or legal person; this gap could be closed by holding operators liable for damage caused by their systems, whether or not they are at fault.

Last comes the ground-breaking – yet distant – option of attributing a distinct legal personality to autonomous chatbots. In fact, the more self-learning systems become independent from the originally intended and programmed approach, the louder the demand becomes to grant them their own legal personality, at least with respect to liability issues. As a consequence, any damage caused by such a system would have to be compensated by the system itself, for instance through funds made available by the operator or the manufacturer.

An interim step, broadly contemplated by legal scholars, would be to introduce a compulsory insurance policy covering damage caused by automated or autonomous systems. Such a requirement is already the rule when large market players contract with chatbot manufacturers.

Be that as it may, chatbots are here to stay, to provide an enhanced user experience and give a new soul to daily interactions, or take what’s left of it. Chatbot manufacturers and operators should therefore be well prepared, drafting a comprehensive End User License Agreement and putting all necessary policies in place to ensure that their broom is stopped in time, before the floor is awash with water.