Why Artificial Intelligence Will Need a Legal Personality

The development of robotics and artificial intelligence (AI) is an exciting, relentless reality that is slowly making its way out of science-fiction movies and into our everyday world.

Furthermore, people and technology are increasingly interacting at an individual, daily level. The increased occasions of interaction between humans and AI systems have great potential not only for economic growth but also for individual empowerment, as explained in the January 2017 McKinsey Global Institute report, which interestingly finds that almost every occupation has partial automation potential, but that it is individual activities, rather than entire occupations, that will be most affected by automation. Consequently, it concludes that realizing automation’s full potential requires people and technology to work hand in hand.

This interaction, however, triggers a complex set of legal risks and concerns. Ethical issues are raised as well.

The key legal issues to be addressed with some urgency are human physical safety, liability exposure and privacy/data protection.

Ethical concerns cover the dignity and autonomy of human beings and include not only the impact of robots on human life but also, conversely, the ability of the human body to be repaired (such as with bionic limbs and organs), then enhanced, and ultimately created, by robotics – and the subtle boundaries that these procedures may push over time.

Current legal frameworks are, by definition, not wired to address the complex issues raised by AI. The consequence is the need to find a balanced regulatory approach to robotics and AI developments, one that promotes and supports innovation while at the same time defining boundaries for the protection of individuals and of the human community at large.

In this respect, on 31 May 2016 the European Parliament (“EP”) issued a draft report on civil law rules on robotics. The report outlines the European Parliament’s main framework and vision on the topic of robotics and AI.

While the report is still speculative and philosophical, it is very interesting – especially where it defines AI, and therefore “smart robots”, as machines having the following characteristics:

  • The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data
  • The capacity to learn through experience and interaction
  • The form of the robot’s physical support
  • The capacity to adapt its behaviours and actions to its environment.

The EP’s report also broadly defines six key regulatory themes which are raised by developments in the area of robotics and AI:

  • rules on ethics;
  • rules on liability;
  • connectivity, intellectual property, and flow of data;
  • standardisation, safety and security;
  • education and employment;
  • institutional coordination and oversight.

The report concludes that the implications of these technologies are necessarily cross-border, and that it would therefore be a waste of resources and time for each individual country to set out its own rules; it accordingly recommends unified EU regulation.

Truly, the implications are cross-border and require a collaborative effort. It is wise to presume, however, that certain countries will be more open-minded and flexible than others in defining the limits of AI autonomy, or more restrictive in setting out its boundaries, and it may also be inevitable that certain countries will lead the way in regulating AI and robotics.

The policy areas where, according to the EP’s position, action is necessary as a matter of priority include: the automotive sector, healthcare, and drones.

The Liability Issue

The increased autonomy of robots raises, first of all, questions regarding their legal responsibility. At this time, robots cannot be held liable per se for acts or omissions that cause damage to other parties: they are machines, and liability therefore rests with the owner or, ultimately, the producer.

When pointing to the automotive sector as an urgent area needing regulation, the committee was certainly thinking of self-driving cars, which are already being tested in California; a driverless-car trial is set for UK motorways in 2019, and government funding has been dedicated to research on autonomous cars. In September 2016, Germany’s transport minister proposed a bill providing a legal framework for autonomous vehicles which assigns liability to the manufacturer.

However, in a scenario where a robot can take autonomous decisions, the traditional ownership/manufacturing liability chain is insufficient to address the complex issue of a robot’s liability (both contractual and non-contractual), since it would not correctly identify the party that should bear the burden of providing compensation for the damage caused. This civil liability issue is considered “crucial” by the committee.

Data Protection and Intellectual Property Rights

Other key issues in relation to developments in robotics are the rules on connectivity and data protection. While existing laws on privacy and the use of personal data can be applied to robotics in general, practical applications may require further consideration, e.g. standards for the concepts of “privacy by design” and “privacy by default”, informed consent, and encryption, as well as the use of personal data both of humans and of intelligent robots that interact with humans.

Intellectual property rights must also be considered if one is prepared to accept that at some point there will be a need to protect the “own intellectual creation” of advanced autonomous robots.

One proposal to address these issues has been to assign robots an “electronic” personality.

A Proposal

The EP’s report recommends that the EU Commission explore the implications of all possible legal solutions, including the creation of a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as electronic persons with specific rights and obligations, including that of indemnifying any damage they may cause; electronic personality would apply to cases where robots take smart autonomous decisions or otherwise interact with third parties independently.

While this is a good idea, it might take time until it is applicable to all robots, since for a robot to have the status of an “electronic person” its autonomous capabilities would need to be particularly advanced.

One could imagine a liability regime in which liability is proportionate to the actual level of instructions given to the robot and to its autonomy, so that the greater a robot’s learning capability or autonomy, the lower the other parties’ responsibility, taking into account the kind of development the robot has undergone and the kind of instructions or “education” it has received.
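Purely as an illustration of this proportionality principle – the report prescribes no formula, and the function name, the autonomy score, and the split used below are hypothetical – the apportionment could be sketched as follows:

```python
def apportion_liability(damage: float, autonomy: float) -> dict:
    """Split compensation for `damage` between the parties behind a robot.

    `autonomy` is a hypothetical score in [0, 1]: 0 means the robot merely
    executes explicit instructions, 1 means it acts entirely on self-learned
    behaviour. The greater the autonomy, the smaller the share borne by the
    instructing parties (owner/manufacturer) and the larger the share
    absorbed by the robot's own fund or insurer (the "electronic person").
    """
    if not 0.0 <= autonomy <= 1.0:
        raise ValueError("autonomy must be between 0 and 1")
    return {
        "human_parties": damage * (1.0 - autonomy),
        "electronic_person": damage * autonomy,
    }

# A mostly instruction-driven robot: the human parties bear most of the burden.
shares = apportion_liability(damage=100_000, autonomy=0.2)
```

The hard legal question, of course, is whether anything like a reliable `autonomy` measure can be established at all; the sketch only shows where such a measure would slot into a liability rule.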

However, it would not always be easy to discern skills resulting from the ‘education’ given to a robot from skills depending strictly on its self-learning abilities. This implies that, when trying to allocate responsibility, there would be huge grey areas.

A middle-level solution is needed for those situations where a robot is capable of autonomous learning and decisions but is suited only to specific uses and not yet sophisticated to the point of being endowed with the status of electronic person, as might be the case for an autonomous car.

I believe instead that one possible solution could be to provide each AI with a legal personality akin to that currently afforded to corporations.

The benefits of this would be:

– registration/incorporation of the robot

– a designated bearer of responsibility, with specific rules, and an entity that can be considered for liability and insurance purposes

– the ability to enter into contracts with each other and with humans, with specific responsibilities arising from the breach of such contracts.

One downside is that this type of legal status still requires an owner (a “shareholder”) with limited liability. This means that the ultimate responsibility, although limited, would not necessarily be placed on the manufacturer but on the owner, thereby returning to a position of insufficient protection. However, in the case of autonomous cars, for example, the owner of the car could be considered the holder of the legal entity, with limited liability and an obligation to insure the vehicle.

Clearly, the topic still needs to be explored, and possible solutions will evolve over time as practical problems arise and AI develops. I believe, however, that at this time this might be the best solution to put forward to address current concerns related to AI as we know and understand it. Ultimately, perhaps, it will be AI itself that proposes a solution.

[Stefania Lucchetti was also quoted on her views on AI in https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/]



© Stefania Lucchetti 2017.  For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

Artificial Intelligence, Robotics and Legal Personality


The development of robotics and artificial intelligence (AI) is an exciting, unstoppable reality that is slowly running its course, moving out of science-fiction cinema and into the real world.

Moreover, human beings and technology increasingly interact on a personal, daily level. The growing occasions of interaction between human beings and artificial intelligence systems hold enormous potential not only for economic growth but also for individual empowerment, as well explained in the January 2017 McKinsey Global Institute report, which points out that while almost every occupation has partial automation potential, it is individual activities, rather than entire occupational categories, that will be affected by automation.

Consequently, it concludes that realizing automation’s full potential requires collaboration between humans and technology.

This interaction, however, triggers a complex set of risks and concerns, in addition to inevitable ethical questions.

The main legal issues that must be addressed urgently are the physical safety of human beings, liability exposure, and questions of privacy and data protection.

Ethical concerns regard the dignity and autonomy of human beings and include not only the impact of robots on human life but also, in parallel, the capacity of the human body to be repaired (for example with bionic limbs and organs), enhanced and, ultimately, created by robotics – and the subtle boundaries that these procedures push over time.

The current legal framework is, by definition, not structured to address the complex issues raised by artificial intelligence. The consequence is the need to find a balanced regulatory approach to the development of robotics and artificial intelligence, one that promotes and supports innovation while at the same time defining boundaries for the protection of individuals and of the human community in general.

Along these lines, on 31 May 2016 the European Parliament (“EP”) issued a draft report on civil law rules applicable to robotics. The report outlines the European Parliament’s vision and general frame of reference on the subject of robotics and artificial intelligence.

While the report is still speculative, with philosophical accents, it is also extremely interesting – especially where it defines and classifies artificial intelligence, and therefore “smart robots”, as having the following characteristics:

  • The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (interconnectivity) and the analysis of such data;
  • The capacity to learn through experience and interaction;
  • The form of the robot’s physical support;
  • The capacity to adapt its behaviour and actions to its environment.

The EP’s report also defines six main regulatory themes raised by the development of robotics and artificial intelligence:

  • general and ethical principles;
  • rules on liability;
  • intellectual property rights, data protection and data ownership;
  • standardisation, safety and security;
  • education and employment;
  • institutional coordination and oversight.

The report concludes by stressing that the implications of these technologies are necessarily international, and that if each individual nation defined separate rules this would constitute a waste of resources; it therefore recommends unified European regulation.

Certainly, the implications are cross-border and require a collaborative effort, although it is wise to presume that some jurisdictions will be more open and flexible than others in defining the limits of the autonomy of artificial intelligence, or more restrictive in setting out its boundaries. It is also inevitable that certain nations will lead the way in the evolution of the regulation of artificial intelligence and robotics.

The areas in which, according to the EP’s position, regulatory action is needed as a priority include the automotive sector, the medical sector, and drones.

The Liability Issue

The increasing autonomy of robots raises, first of all, the question of the legal liability deriving from a robot’s harmful action. As things stand, a robot cannot be held liable in its own right for acts or omissions that cause damage to third parties; the existing rules on liability cover cases where the cause of a robot’s act or omission can be traced back to a specific human agent, for example the manufacturer, the operator, the owner or the user, or cases where manufacturers, operators, owners or users could be held strictly liable for a robot’s acts or omissions.

With regard to the automotive sector, considered an area requiring urgent regulatory intervention, the main issue evidently concerns self-driving vehicles, which are already being tested in California and will be tested in the UK in 2019 (note also that in September 2016 Germany’s transport ministry proposed a law providing a regulatory framework for self-driving vehicles which allocates liability to the manufacturer).

However, in a scenario where a robot can take autonomous decisions, the traditional liability chain based on ownership or manufacture is not sufficient to address the complex issues of a robot’s liability (both contractual and non-contractual), since the existing principles would not correctly identify the party that should bear the burden of compensating the damage caused. The civil liability question is considered “crucial” by the committee.

Data Protection and Intellectual Property Rights

Other relevant issues in relation to the development of robotics are the rules on connectivity. While existing laws on privacy and the use of personal data can be applied to robotics in general, practical applications call for further consideration, namely the regulation of standards for the concept of “privacy by design”, informed consent and encryption, as well as the use of personal data both of human beings and of intelligent robots that interact with human beings.

Intellectual property rights must also be considered if one is willing to accept that at some point there will be a need to protect the “own intellectual creation” of advanced robots.

One proposal for addressing these issues has been to confer “electronic personality” on robots.

A Proposal

The EP’s report recommends that the EU Commission explore the implications of all possible legal solutions, including that of creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be granted the status of “electronic person” with specific rights and obligations, including that of compensating any damage they may have caused, and that electronic personality be applied to cases in which robots are capable of taking autonomous intelligent decisions or, in any case, of interacting autonomously with human beings.

While this proposal is certainly a valid idea, it may take time before it is applicable to all robots, since for a robot to have the status of “electronic person” its autonomous capabilities would need to be particularly pronounced and advanced.

Imagining a regime in which liability would be proportionate to the actual level of instructions given to the robot and to its autonomy, one should take into account that a greater learning capability or autonomy of the robot should correspond to a lower liability of the other parties involved, considering the kind of development the robot has undergone and the kind of instructions or “education” it has received.

However, it would not always be easy to distinguish capabilities deriving from the “education” given to a robot from capabilities depending strictly on its self-learning abilities. This implies that, in trying to allocate liability, one would run into huge grey areas.

A middle ground is therefore needed for those cases in which the robot is capable of autonomous learning and decisions but is suited only to specific uses and is not yet sophisticated to the point of being endowed with the status of “electronic person” – for example, an autonomous vehicle.

I believe instead that one possible solution is to attribute to each artificial intelligence a legal personality comparable to that attributed to corporations.

The benefits would be:

– registration/incorporation of the robot

– a legal entity to which liability can be attributed, with specific rules and the possibility of taking out insurance coverage

– the capacity to enter into contracts with one another and with human beings, giving rise to specific responsibilities and consequences for breach of the obligations set out in those contracts.

One downside of this proposal is that this type of legal status requires an owner (a “shareholder”) with limited liability, and this means that the ultimate responsibility, although limited, would not necessarily be placed on the manufacturer but on the owner, returning to a position of insufficient protection.

However, in the case of self-driving vehicles, for example, the owner of the car could be considered the owner of the autonomous-vehicle “legal entity”: as such, he or she would have limited liability and would be obliged to insure the vehicle.

Another issue concerns the possible difficulty, in certain cases, of physically “delimiting” the scope of a specific artificial intelligence.

Clearly, the question still needs to be studied and weighed, and possible solutions will evolve in parallel with the development of artificial intelligence. I believe, however, that at this moment in history this may be a valid solution to put forward to address the current issues of artificial intelligence as we know and understand it today – without excluding that, perhaps in a few years, it may be artificial intelligence itself that proposes the best solutions.


Cross Border M&A – when to opt for a minority stake in a cross border joint venture

Whether your company has engaged in successful joint venture activities for years, or is new to joint ventures, there is always an element of uncertainty when deciding to enter into a cross-border joint venture, whether the objective is to expand the reach and distribution of the company’s products and services in highly developed countries or in emerging markets.

Whatever the key strategic purpose your company wants to achieve, there are two options for entering into a joint venture, each creating different outcomes and specific governance issues. Whether the joint venture is established through the acquisition of an existing company or the set-up of a joint venture vehicle, your company may opt for either a majority or a minority stake.

The decision to opt for a minority stake may be driven by various factors, including the power relationship with the JV partner.

In emerging markets, this choice is often driven by two key considerations:

Regulatory Constraints – Regulatory constraints in specific markets may cause the foreign investment to be restricted to minority investment levels.

Commercial Credibility – Accepting a minority stake may also reflect the need and strategy to enter the market with a credible local JV partner which has already established scale and reputation. This brings along an advantage where the JV partner is operating solidly and effectively in the emerging market environment, and has established government and public policy relations.

Aside from the above, this strategy may be useful or necessary for purely strategic purposes, where the JV partner has the commercial lead in the JV, for example because it has proprietary technology, key products, or a client base/distribution platform on which your company heavily relies.

When deciding to enter into a JV in which your company will hold a minority stake, the key point to consider is that this structure requires your company to be prepared to rely more heavily on the JV partner’s capacity to lead the joint venture and achieve the common objectives.

In this circumstance, one of the key issues to be addressed is the establishment of minority protections both at the shareholders meeting and board level. This needs to be done by carefully negotiating and drafting a shareholders’ agreement and ancillary documents which include such protections, in order to achieve a governance structure that balances the powers of the JV partners to achieve the desired objectives.


Augmented Reality Mirrors – Fashion Meets Digital and Privacy Concerns

If, like me, you have switched to e-commerce because you hate the experience of trying clothes on in fitting rooms (as in fact do 46% of customers, according to a 2016 survey by Body Labs), but end up sending back half of your purchases because they don’t fit, or look and feel different from what you expected from seeing them only in 2D on screen – or if you are a retailer trying to increase sales (shoppers who do use a fitting room are apparently much more likely to make a purchase – see the study by retail analytics company Alert Tech) – you may be thrilled by the new trend in the digital revolution for retail: digital mirrors.

We have already seen them in some fashionable Milan stores, although at this time they are more focused on infotainment and not yet as advanced as they could be, or purport to be.

Retailers already know the benefit of offering interactive, personalised in store experience – a customer is much more likely to walk out with a purchase if s/he receives personalized advice.
Digital mirrors may provide an innovative and efficient method of reinventing the fitting room experience by offering 360-degree views of outfits and touchscreen technology to browse other colours, sizes, and suggested items that can be combined to create an entire outfit.

It won’t be long before the technology offers personalised compliments and adjusts lighting conditions to make clothes look better.

Of course, there is a catch to digital mirrors. While they can provide useful information to the shop about the user experience – including which items are brought into the changing room and which of the selected items the shopper decides to buy – they render the changing room experience no longer private. E-commerce chipped into the privacy of our shopping experience long ago (be that as it may, our shopping history on Amazon or any other e-commerce platform is recorded); now the virtual changing room will remove another layer of privacy.

Is it worth it? It depends, as always, on the personal boundaries of each individual and the perceived benefit of digital shopping against a private changing room. For a number of shops that have already implemented augmented reality mirrors, one of the benefits for the shopper is not having to undress to try on certain garments or to explore new colours. It may not be long, however, until the virtual changing room starts marketing additional services to the shopper, such as a personalised diet plan and similar suggestions.

Ultimately, the key concerns relate to privacy and data protection and to the expanding reach of profiling and of data recording on individual users’ preferences. Stores will have to find a balance between user experience, sales data and compliance with privacy laws. This will create a further segmentation in the market, as mature shoppers will prefer more intimate, private changing room experiences, while young shoppers will probably flock to shops that feature a more public type of augmented reality mirror (and will not be able to resist sharing the experience).

Luckily for European consumers, Art. 17 of the new GDPR (Regulation (EU) 2016/679, adopted on 27 April 2016 and taking effect on 25 May 2018) includes a right to erasure (right to be forgotten), and Art. 21 (right to object) may also prove useful. These provisions, which were adopted following the CJEU decision in the Google Spain case, allow individuals to require the data controller to erase their personal data without undue delay, subject to certain conditions, e.g. where no other legal ground for processing applies.

This will, however, often be difficult to manage in practice, as it requires the controller to inform third parties to which the data has already been disclosed that the data subject has requested erasure of links to, or copies of, that data.


Cybersecurity and board responsibilities

The “WannaCry” ransomware attack that disrupted businesses around the world on 12 May 2017 has made it necessary to consider more carefully the impact of a cyberattack and its implications, not only for the protection of consumer data but also for the company’s financial and sensitive data.

A cyberattack can not only cause the loss of a company’s consumer data; it can also expose confidential information relating to the company, such as ongoing regulatory investigations, or cause the loss of intellectual property in addition to consumer data. Both financial and reputational risks are at stake for a company.

Boards are therefore increasingly coming to the realization that a data leak due to cybercrime is a serious risk management issue.

This is a challenge: while most directors are somewhat informed about cybersecurity, it is often very difficult for them to stay updated with the latest information, and especially to deploy sufficient investment to protect the company from ever-changing cyber risk. Moreover, in most companies cybersecurity has been delegated to an IT manager without sufficient budget or decision-making power.

Accepting that this is a key enterprise risk which needs to be addressed at board level, and not just at IT management level, is an essential shift that boards need to make.

The key reason is that a lack of proper action may expose directors to liability towards the company (e.g. under Art. 2392 of the Italian Civil Code, for failure to take appropriate action to protect the company).


Thoughts – Are They No Longer Private Experiences?

The essence of who we are – we may refer to it with the overused term “consciousness”, perhaps – or at the very least our persona and personality, is created through and contained in our thoughts and experiences. Until recent years, these were for the most part private, or shared with a few select individuals. Most of us remember writing secret diaries while growing up. Experiences were shared through private conversations and, for the most adventurous and articulate, books. Now thoughts and experiences seem to be no longer relevant unless they are shared with the world, through blogs and through Facebook.

One step further, commercial profiling of our purchasing preferences is making even our intentions, preferences and objectives traceable.

The next step is going to be even more intrusive. Brain scanners, it appears, are evolving and moving closer to becoming consumer devices. Consumer-grade scanners would enable all of us to display our own thoughts and access those of others who do the same. Our thoughts may become visible, downloadable, and open to the world.

New privacy issues need to be considered, as well as, perhaps, a new definition of the boundaries within which we will be able to keep some thoughts and experiences private.

See interesting info at: https://www.edge.org/annual-question/2016/response/26632


IoT Enabled Contracts – Dynamic Contracts Are On The Way – What Will Lawyers Do?

As a transactional lawyer, I have countless times assisted clients in renegotiating and drafting amendments, side letters, and updates to signed contracts to address issues that come up from time to time during the contractual relationship. Let’s face it: contracts are a static, binding photograph of a situation, and their purpose is to address issues that may arise in the future, so that if a dispute arises the parties (or a judge) may refer to the parties’ original intention.

However, a binding, static contract often fails to address what is likely to be the key issue between the parties: unless the contract covers a one-shot transaction (such as the sale of a good), the contractual relationship between the parties is by nature a dynamic, evolving one which – whether it relates to the provision of services or to a corporate joint venture – will require innumerable compromises, changes and adaptations to the original ideas.

Good lawyers with sound experience know this and, to the extent possible, draft contracts that reflect principles which can be applied to an evolving relationship; however, this may not be sufficient. Often problems arise which had not been, or could not have been, contemplated at the outset of the relationship, and the key principles set out in the contract are not sufficient to address them. A renegotiation of, and amendment to, the contract then becomes necessary.

This however means that new paper will be produced and ultimately it will be difficult to make sense of the history of the relationship.

IoT-enabled contracts which allow for a dynamic relationship may be just what companies need. This type of contract would respond to external information fed into it and evolve with the parties’ relationship. While this would have huge benefits in tracking the progress of ongoing relationships, the risk is of course that of rendering the original agreement between the parties meaningless.
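As a purely hypothetical sketch of what such a contract term might look like (the class, the field names and the penalty mechanics below are invented for illustration, not drawn from any real platform), a “dynamic” service-level clause could recompute its fee from telemetry fed in by connected devices, while keeping an auditable log of every adaptation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DynamicServiceClause:
    """A service-fee clause that adapts to IoT-reported uptime and logs its history."""
    base_monthly_fee: float
    uptime_target: float = 0.99      # agreed service level (fraction of the month)
    penalty_per_point: float = 0.05  # 5% fee reduction per percentage point of shortfall
    history: list = field(default_factory=list)

    def ingest_uptime(self, measured_uptime: float) -> float:
        """Recompute the fee due from device-reported uptime and record the change."""
        shortfall_points = max(0.0, (self.uptime_target - measured_uptime) * 100)
        fee = max(0.0, self.base_monthly_fee * (1.0 - self.penalty_per_point * shortfall_points))
        self.history.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "measured_uptime": measured_uptime,
            "fee_due": fee,
        })
        return fee

clause = DynamicServiceClause(base_monthly_fee=10_000)
clause.ingest_uptime(0.995)  # target met: full fee
clause.ingest_uptime(0.97)   # two points short: fee reduced accordingly
```

Note that every recomputation is appended to `history`, so the “photograph” of the original bargain (`base_monthly_fee`, `uptime_target`) survives alongside a record of how the relationship actually evolved.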

As always, however, law and the practice of law will need to adapt to the needs of the market, and this may require lawyers to evolve a new way of drafting contracts as well – with clauses, formulas, and principles designed to work in an adaptive relationship.

See interesting information at: https://www.artificiallawyer.com/2017/05/08/guest-post-the-contract-stack-revolution-begins/


The Negotiation Factor

Several years ago, based on my experience of many years, first as a scholar and then as a corporate lawyer, I wrote a book (“The Principle of Relevance”) which discussed the implications of information overload, the idea that the key differentiating factor was the ability to process large quantities of information, and how to switch from a linear processing model to a multi-level processing model. Many things have come to pass since then, and my interest in information has evolved into something different.

Also through the process of writing about information processing, and the many engagements that followed, I have come to the realization that our world is a shifting one. It is a time when information, technology, and ideas are so freely and easily available that access to information – and perhaps also the ability to process that information – while it may remain the key challenge for companies (now met by “big data” solutions), is no longer the key factor for individuals.

The key differentiating factor is, truly, the human factor. The ability to engage in meaningful, productive, lasting relationships and partnerships. The ability to engage in a circle of commitment while still engaging with the world. The ability to understand another individual’s background, interests, and worries. And, ultimately, the ability to negotiate for change, and for a reciprocal meeting of interests.


Negotiation Advice: Time is On Your Side

Negotiation tip of the day: when negotiating a difficult matter or trying to get out of a difficult situation, being in a hurry to close the deal is very rarely on your side. If you want to achieve a better outcome, first find a way to buy yourself time.
