Why Artificial Intelligence Will Need a Legal Personality

The development of robotics and artificial intelligence (AI) is an exciting, relentless reality which is slowly making its way out of science fiction movies and into our mundane world.

Furthermore, people and technology are increasingly interacting at an individual, daily level. These growing occasions of interaction between humans and AI systems hold great potential not only for economic growth but also for individual empowerment, as explained in the January 2017 McKinsey Global Institute report. Interestingly, the report finds that almost every occupation has partial automation potential, but that it is individual activities, rather than entire occupations, that will be most affected by automation. It consequently concludes that realizing automation’s full potential requires people and technology to work hand in hand.

This interaction, however, triggers a complex set of legal risks and concerns. It also raises ethical issues.

The key legal issues to be addressed with some urgency are human physical safety, liability exposure and privacy/data protection.

Ethical concerns cover the dignity and autonomy of human beings. They include not only the impact of robots on human life but also, conversely, the ability of robotics to repair the human body (such as with bionic limbs and organs), then to enhance it, and ultimately to create it, and the subtle boundaries that these procedures may push over time.

The current legal frameworks are, by definition, not designed to address the complex issues raised by AI. The consequence is the need to find a balanced regulatory approach to robotics and AI developments, one that promotes and supports innovation while at the same time defining boundaries for the protection of individuals and the human community at large.

In this respect, on 31 May 2016 the European Parliament (“EP”) issued a draft report on civil law rules on robotics. The report outlines the European Parliament’s main framework and vision on the topic of robotics and AI.

While the report is still speculative and philosophical, it is very interesting, especially where it defines AI, and hence “smart robots”, as machines having the following characteristics:

  • The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data
  • The capacity to learn through experience and interaction
  • The form of the robot’s physical support
  • The capacity to adapt its behaviours and actions to its environment.

The EP’s report also broadly defines six key regulatory themes which are raised by developments in the area of robotics and AI:

  • rules on ethics;
  • rules on liability;
  • connectivity, intellectual property, and flow of data;
  • standardisation, safety and security;
  • education and employment;
  • institutional coordination and oversight.

The report concludes that the implications of these technologies are necessarily cross-border, and that it would therefore be a waste of resources and time for each individual country to set out its own rules; it accordingly recommends a unified EU regulation.

Truly, the implications are cross-border and require a collaborative effort. It is nonetheless wise to presume that certain countries will be more open-minded and flexible than others in defining the limits of AI autonomy, or more restrictive in setting out its boundaries, and it may also be inevitable that certain countries will lead the way in regulating AI and robotics.

The policy areas where, according to the EP’s position, action is necessary as a matter of priority include: the automotive sector, healthcare, and drones.

The Liability Issue

The increased autonomy of robots raises, first of all, questions regarding their legal responsibility. At this time, robots cannot be held liable per se for acts or omissions that cause damage to other parties, as they are machines; liability therefore rests on the owner or, ultimately, the producer.

When pointing out the automotive sector as an urgent area needing regulation, the committee was certainly thinking of self-driving cars, which are already being tested in California; driverless car trials are set for UK motorways in 2019, and government funding has been dedicated to research on autonomous cars. In September 2016, Germany’s transport minister proposed a bill to provide a legal framework for autonomous vehicles which assigns liability to the manufacturer.

However, in a scenario where a robot can take autonomous decisions, the traditional owner/manufacturer liability chain is insufficient to address the complex issue of a robot’s liability (both contractual and non-contractual), since it would not correctly identify the party that should bear the burden of providing compensation for the damage caused. This civil liability issue is considered “crucial” by the committee.

Data Protection and Intellectual Property Rights

Other key issues in relation to developments in robotics are the rules on connectivity and data protection.  While existing laws on privacy and the use of personal data can be applied to robotics in general, practical applications may require further consideration, e.g. standards for the concepts of “privacy by design” and “privacy by default”, informed consent, and encryption, as well as the use of personal data both of humans and of intelligent robots who interact with humans.

Intellectual property rights also need to be considered if one accepts that there will, at some point, be a need to protect the “own intellectual creation” of advanced autonomous robots.

One proposal to address these issues has been to assign robots an “electronic” personality.

A Proposal

The EP’s report recommends that the EU Commission explore the implications of all possible legal solutions, including the creation of a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as electronic persons with specific rights and obligations, including that of indemnifying any damage they may cause. Electronic personality would apply in cases where robots make smart autonomous decisions or otherwise interact with third parties independently.

While this is a good idea, it may take time before it is applicable to all robots: for a robot to have the status of an “electronic person”, its autonomous capabilities would need to be particularly advanced.

One could imagine a liability regime in which liability is proportionate to the actual level of instructions given to the robot and to its degree of autonomy, so that the greater a robot’s learning capability or autonomy, the lower the responsibility of the other parties should be, taking into account the kind of development the robot has undergone and the kind of instructions or “education” it has received.

However, it would not always be easy to discern skills resulting from the “education” given to a robot from skills depending strictly on its self-learning abilities.  This implies that, when trying to identify responsibility, there would be huge grey areas.
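As a purely illustrative sketch, the proportionality principle described above could be expressed as a toy apportionment rule, in which the share of damages borne by the instructing human parties shrinks as the robot’s autonomy grows. The function, its weights, and the idea of a single “autonomy” score are all invented for illustration, not drawn from the EP report:

```python
def apportion_liability(damages: float, autonomy: float) -> dict:
    """Toy apportionment rule (illustrative only).

    autonomy: 0.0 (fully instructed) .. 1.0 (fully self-learning).
    The more autonomous the robot, the smaller the share borne by
    the owner/manufacturer chain.
    """
    if not 0.0 <= autonomy <= 1.0:
        raise ValueError("autonomy must be between 0 and 1")
    robot_share = damages * autonomy          # e.g. covered by mandatory insurance
    human_share = damages * (1.0 - autonomy)  # owner/manufacturer chain
    return {"robot": robot_share, "human_parties": human_share}
```

Even this simple sketch makes the grey area visible: everything hinges on how the single `autonomy` score is assessed, which is precisely where “education” and self-learning would be hard to disentangle.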

A middle-level solution is needed for those situations where a robot is capable of autonomous learning and decisions but is suited only to specific uses and is not yet sophisticated enough to be endowed with the status of electronic person, as might be the case for an autonomous car.

I believe instead that one possible solution could be to provide each AI with a legal personality akin to that currently afforded to corporations.

The benefit of this would be:

  • registration/incorporation of the robot;
  • a clear locus of responsibility, with specific rules and an identifiable entity for the purposes of liability and insurance;
  • the ability of robots to enter into contracts with each other and with humans, with specific responsibilities arising out of the breach of such contracts.

One downside is that this type of legal status still requires an owner (a “shareholder”) with limited liability, which means that the ultimate responsibility, although limited, would not necessarily be placed on the manufacturer but on the owner, thereby returning to a position of insufficient protection. However, in the case of autonomous cars, for example, the owner of the car could be considered the holder of the legal entity, with limited liability and an obligation to insure the vehicle.
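To make the corporate analogy concrete, the following is a minimal, hypothetical sketch of what a register of “electronic persons” might track: an identifier, an owner of record, an insurance obligation, and a liability cap. All field names and the rule that an insured entity may contract are invented for illustration and do not reflect any existing registry:

```python
from dataclasses import dataclass

@dataclass
class ElectronicPerson:
    """Hypothetical registry entry for a robot with legal personality."""
    registration_id: str
    owner: str            # the "shareholder" of record, with limited liability
    insured: bool         # the owner's obligation to insure the robot
    liability_cap: float  # the limit of the owner's exposure

    def may_contract(self) -> bool:
        # In this sketch, only a registered and insured electronic
        # person may enter into contracts with humans or other robots.
        return self.insured

# Example: an autonomous car registered as an electronic person.
car = ElectronicPerson("EP-2017-0001", "Jane Doe",
                       insured=True, liability_cap=1_000_000.0)
```

The design mirrors the corporate model sketched above: the entity, not the owner personally, is the counterparty for liability and insurance, while the owner’s exposure is bounded by the cap.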

Clearly, the topic still needs to be explored, and possible solutions will evolve with time as practical problems arise and AI develops, but I believe that, at this time, this might be the best solution to put forward to address current concerns related to AI as we know and understand them.  Ultimately, perhaps, it will be AI itself that proposes a solution.

© Stefania Lucchetti 2017.  For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 
