Who takes responsibility for errors made by smart robots?

As those of us interested in robotics know, it represents one of the great technological advances of the 21st century. For this progress to develop properly, however, it must be accompanied by a transparent and dynamic regulatory framework that unifies criteria and clarifies the uncertainties it generates. Today, no such framework exists at the national, European or international level.

That said, two references are worth considering.

Firstly, the recommendation of the European Parliament (Draft Report with recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL)) for the establishment of a set of rules on liability. Faced with a possible “new industrial revolution” in which society enters an age of robots, bots, androids and other more advanced forms of AI, it is imperative that legislators consider the consequences that may result from the use and deployment of these devices in our daily lives.

Secondly, given the absence of regulation for this niche market, ISO 13482 deserves mention, although it is restricted to the field of personal care robots. Moreover, this standard does not regulate any type of liability, nor does it address the impact that the use of personal care robots may have on fundamental rights.

Although neither Europe nor the USA has regulations governing legal liability in the event of robot error or malfunction, there must always be a person responsible for any acts or injuries the robot causes through its actions. Hence, when a fault or error occurs, the same rules as for other defective products apply: that is, the rules establishing the liability of the manufacturer (in Spanish, see What happens when a robot causes damage?).

The problem arises, however, in cases where a manufacturer launches a product whose software and programming were carried out by a third party. Here it is necessary to determine the extent of the manufacturer's liability. If we require the device to perform the tasks for which it was created with complete certainty (i.e. the robot works perfectly and involves absolutely no risk), we discourage technological progress by demanding overly precise results without allowing a minimum margin of error, and innovation will be undermined. Hence, as Bertolini points out in the RoboLaw project (see the RoboLaw Project and this RoboHub blog post; also, in Spanish, Automatons and cyborgs, lawless land: who pays if a robot commits a crime?), we need alternative rules that do not punish the manufacturer of the device too harshly.

As we can see, the legal response to controversies arising from tasks performed by an “intelligent” robot is not as straightforward as with “single-function” robots. Although we can apply existing legislation for the protection of consumers and users, “intelligent” robots can often be customized by the user with functions and applications that were not originally included in the robot's software. Such customization makes it harder to determine responsibility in the event of a failure or error, since it becomes difficult to identify the element that caused the malfunction.

Likewise (even if it is not viable or imminent at the moment), the day will come when robots can learn from the environment in which they operate, interact with it and make decisions not foreseen in their initial configuration; this is what has come to be called the “theory of emergent properties”. In that case, it would be even more complex to delimit responsibility, which leads to the question of whether learning through intelligent algorithms should be limited only to the tasks or functions designed at origin, an option known as “code as law” or “regulation by design”.

From my point of view, if, out of prudence, we restrict the capacities that robots may have in the future and opt for regulation by design (not to be confused with privacy by design), we will be restricting technological evolution and perhaps steering robotics toward a future that is too conservative, depriving us of many of the advances yet to come.
