Africa’s Digital Acceleration

I first visited Kenya and Tanzania at the beginning of 2008, and I was surprised to see how countries with no phone lines had bypassed the need for fixed infrastructure with mobile connections, which were at times more widespread than in rural areas of Asia or even Europe.

Several years went by, and on my second trip to Kenya (the first of many more) in 2016 I discovered a country that, rather than merely accepting the deployment of existing technologies from the US or Europe, is striving to be at the forefront of innovation. Take M-Pesa (M, of course, for mobile; pesa meaning money in Swahili). M-Pesa is a mobile-phone-based money exchange platform, launched in 2007 by Safaricom and Vodacom in Kenya and Tanzania. The service allows users to deposit money into an account stored on their mobile phone and to transfer money directly, peer to peer, through text messages, doing away entirely with banks and other intermediaries.
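The peer-to-peer mechanics described above can be sketched as a toy ledger. This is purely illustrative – the account structure, phone numbers and amounts are all hypothetical, and the real M-Pesa runs over SMS/USSD through a network of cash-in/cash-out agents:

```python
# Toy sketch of an M-Pesa-style mobile money ledger: balances live in
# phone-number-keyed accounts, and transfers move value directly between
# users with no bank in the middle. Purely illustrative.
class MobileMoneyLedger:
    def __init__(self):
        self.balances = {}  # phone number -> balance (e.g. in KES)

    def deposit(self, phone, amount):
        # At a real agent kiosk, cash handed over becomes e-money.
        self.balances[phone] = self.balances.get(phone, 0) + amount

    def transfer(self, sender, receiver, amount):
        # Direct peer-to-peer move, triggered by a text message in practice.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = MobileMoneyLedger()
ledger.deposit("+254700000001", 1000)
ledger.transfer("+254700000001", "+254700000002", 400)
print(ledger.balances["+254700000002"])  # 400
```

The point of the sketch is the absence of any intermediary step: value moves account-to-account in a single operation.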

Mobile-based payment is a technology that has not quite taken off in Europe, the US or Asia – though Apple Pay is trying to push it through, with limited success – but it has been transformative for the micro-economy in Kenya, where a large slice of the population still does not have a bank account (and I might add, for those who do, ATMs do not work as often as one might expect or hope).

Since then, M-Pesa has spread its wings to a number of other countries in Africa, the Middle East and Eastern Europe. In the meantime, the appetite for digital services has grown. In May this year, Telkom Kenya launched a free WhatsApp service, much appreciated by its users. The move could steer users away from Telkom Kenya’s competitors, while at the same time allowing time for digital services powered by mobile apps (such as Amazon and Amazon Prime, which are not yet available in Kenya but hopefully soon will be) to grow and spur data use.

Other African countries have joined the bubbling pot: in late 2016 Senegal announced the launch of the eCFA Franc, a digital currency. It is not yet clear what kind of success the eCFA Franc is having, but the idea, and its potential, are certainly a promising start.

Education is not lagging behind: in April this year, for example, the Rwandan government announced a partnership with Microsoft with the intention of digitising Rwandan education through a “smart classroom” project. Nairobi schools offer very advanced programmes that fully integrate solid academics with access to technology.

What is the source of the wind behind Africa’s innovation streak?

Most African countries are developing economies with great potential for growth. In past decades, progress was halted by the lack of traditional old-economy infrastructure (e.g. roads and landlines). However, by doing away with the need to invest capital and labour in building that infrastructure, and jumping straight to digital, there is now space to experiment with new technology without being weighed down by the bottlenecks of existing infrastructure and its regulatory constraints.

Disintermediation is also the key to new possibilities. Where intermediation finds its bottleneck in bureaucracy and corruption, the possibilities offered by peer-to-peer technology (such as M-Pesa) and disintermediating technology such as blockchain are vast.

Urban planning is another area where innovation would be of great benefit to Africa. Take the city of Nairobi, where a very modern lifestyle battles with the absence of modern, well-connected roads, while Jomo Kenyatta International Airport is better connected than our (sadly) decreasingly connected Milan airports, and the small and surprisingly efficient Wilson Airport serves flights to a number of regional tourist and business destinations. A futuristic approach might do away with the need for roads and jump straight to drone transportation for logistics, or even – with a further technological acceleration – experiment with flying cars.

It is to be expected that a further acceleration in innovation will appear when entrepreneurs gain wider access to financing via venture capital funds or non-traditional means, such as crowdfunding.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 


Do the risks of AI derive from AI being more human than we think?

On 12 June, Magic Circle firm Slaughter and May, jointly with ASI Data Science, published a white paper (the “Paper”) titled Superhuman Resources – Responsible Deployment of AI in Business, dedicated to exploring the benefits, risks and potential vulnerabilities of AI.

The Paper is well written, well researched and very interesting, and I strongly recommend reading it.

It offers some elements of the history of AI and clear, simple definitions of much used (and abused) terms such as static v dynamic machine learning, general v narrow AI, and supervised v unsupervised machine learning. It defines AI as a “system that can simulate human cognitive processes” and even goes as far as to define the human brain as a “computer on a biological substrate”.

The Paper then goes on to identify the already well-known potentials of AI – the ability to synthesize large volumes of data, adaptability and scalability, autonomy, consistency and reliability – as well as the less known, or less acknowledged, potential for creativity. Machines have in recent years shown a surprising ability to produce creative works in art, writing and design. News-writing bots such as the Washington Post’s AI journalist Heliograf have already made their appearance, and AI-produced works of visual and expressive art have been widely published.

The key message of the Paper, however, is that while commentary on and investment in AI and machine learning seem so far to have focused on the potential upsides, a critical point (according to the authors) is missing: that AI can “only ever be useful if it can be deployed responsibly and safely”. The Paper then goes on to identify, and analyse in some depth, six categories of risk: 1. failure to perform; 2. social disruption; 3. privacy; 4. discrimination (i.e. data bias); 5. vulnerability to misuse; and 6. malicious re-purposing.

Although the analysis is excellent, I do not agree with the Paper’s claim that the risks of AI have so far been ignored. On the contrary, the risks of AI have often been magnified, not least in innumerable well-constructed science fiction movies, and regulators are not oblivious to AI. As already mentioned in a recent post, the EU Parliament has already set out, albeit without such a rigorous scientific approach and structure, a number of legal issues which AI raises and will raise, and which need to be addressed for reliability and safety (see my earlier post Why Robots Need a Legal Personality).

What struck me most about the Paper, however, was that in providing a structured and detailed list of potential risks and pitfalls of AI, it also highlights that the key issue underlying them all derives from the fact that automated processes, as the report itself says by quoting Ian Bogost [2015, The Cathedral of Computation], “carry an aura of objectivity and infallibility” – yet AI is not infallible. Specifically, it is not infallible if the system fails (failure to perform), if it handles data in disregard of privacy laws, or if it makes decisions having been exposed to data which is limited or biased. AI can also create – like any innovation – social disruption; it could be manipulated [by humans] for malicious purposes, or it could be maliciously re-purposed in the wrong hands.

So we learn that AI is subject to system errors or application errors, may make wrong judgments if it is only exposed to limited data, and may be subject to manipulation.

In essence, the key risks of AI derive from the fact that AI behaves more like a human than humans would think or hope! But then, if AI is a “system that can simulate human cognitive processes”, wouldn’t fallibility be an intrinsic characteristic of AI, albeit perhaps with a lower degree of fallibility than that of humans?

The Paper recommends that businesses “be forward thinking and responsible” in designing and deploying AI which by design has systems that mitigate or restrict potential negative effects, and in particular that they set out procedures such as risk assessments and a risk register, monitoring and alerting, audit systems to determine causal processes, and accountability frameworks for algorithms.

This is all very valuable and very important. I still believe, however, that it misses the key point, exciting and risky at the same time: by allowing a machine to develop cognitive processes (and I deliberately use the word “develop”, as I am not sure “simulate” would correctly represent machine learning or creative expression), a new intelligent entity is created. At some point that entity will, or might, develop to the point of no longer being limited to an instrument used at the hands of humans in accordance with human instructions, but will, or at least might, become self-directed. As I have already pointed out in my earlier post AI legal issues, one of the key issues relating to the next generation of AI is that they will have the ability not just to operate on big data based on algorithms designed and built in by humans, but to create their own algorithms.

And this brings me back to my original point, which is that one of the key legal issues to be addressed is to determine in which cases and at which point of development an AI needs to be provided with a legal personality.




Big Data and the Music Paradox

It is recent news that Spotify has settled a complex licensing dispute involving mechanical licensing rights, allegedly in order to clean up its affairs in view of a prospective IPO.

Mechanical rights under US law must be obtained (in addition to performance rights) when reproducing a piece of music onto a physical or digital support. It is not clear whether Spotify deliberately avoided paying mechanical rights or – as it claimed – lacked the data necessary to sort out which publishers had legitimate claims over songs (there is no central, reliable database covering all music rights to all songs).

Music licensing is a complex and fragmented field. Different countries have different nuances of copyright applicable to music (and different ways and means of collecting royalties). In some countries, particularly the US, the music publishing sector has traditionally licensed performing rights and mechanical rights separately, through different entities. This means that music distributors need licences covering both the song and the recording, and both performing and mechanical rights. In the US this issue is partially addressed through a compulsory licence covering mechanical rights, with a pre-set statutory rate to be paid, so streaming services are not required to negotiate terms and price with each right holder. Often, however, the owner cannot be identified: while there are collecting societies that license performing rights, mechanical rights are not represented by a single society, nor is there a single publicly accessible database providing this information – which, according to Spotify, makes it impossible (or too burdensome) for a streaming music provider to comply with mechanical rights obligations for all songs.
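The licensing grid this paragraph describes – two works (composition and recording) crossed with two right types (performing and mechanical) – can be made concrete with a small illustrative sketch; the data model and the example holdings are hypothetical, not any real service's records:

```python
# Illustrative sketch of the licensing grid: a streaming service needs
# rights along two axes -- the work (composition vs. sound recording)
# and the right type (performing vs. mechanical). Hypothetical data.
REQUIRED = {
    ("composition", "performing"),
    ("composition", "mechanical"),
    ("recording", "performing"),
    ("recording", "mechanical"),
}

def missing_licences(held):
    """Return the (work, right) pairs a distributor still lacks."""
    return REQUIRED - set(held)

# A distributor that cleared everything except composition mechanicals --
# the gap at the centre of the Spotify dispute described above.
held = {
    ("composition", "performing"),   # e.g. via a performing rights society
    ("recording", "performing"),
    ("recording", "mechanical"),
}
print(sorted(missing_licences(held)))  # [('composition', 'mechanical')]
```

The sketch makes the structural point: clearing three of the four cells still leaves a distributor exposed on the fourth.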

Whatever the reasons for Spotify’s legal lapse, it is certainly a fact that digital distribution of music, and streaming in particular, needs to take a further leap forward in the legal catch-me-if-you-can race that has been running for nearly 20 years – since Napster in 1999.

I was a late adopter of Spotify, but their theme-based compilations, and especially their running compilations, which select songs matching your personal running beat, were recently a revolutionary discovery for me. As a lawyer and a mom of two boys, time for listening to music – and especially time for discovering new music and updating my playlists – was one of the first things to disappear from my schedule, with the effect that music slowly started disappearing from my life. Yet discovering and enjoying music had always been one of the most fundamental and joyous artistic experiences for me.

Then I discovered Spotify. Just this morning, while running to one of Spotify’s compilations that match songs to the user’s running beat, I listened to about 15 songs I had never heard before, by artists I had never heard of. This certainly was not possible in the pre-digital age, when buying a tape or a CD was so expensive that you would listen to the same music or playlist over and over again for months on end, until a boyfriend or girlfriend introduced you to a new playlist by copying it onto tape or CD. And you would listen to that for months on end. Nor was it possible in the iTunes years: iTunes made buying music affordable, but in order to listen to a song a user still had to know it, select it and download it. The only way to discover new music was the radio, with advertising and limited choice and customisation, as songs were selected by a human mind – the radio host.

With services like Spotify, music enters the realm of big data: a seemingly infinite number of music pieces are available, and playlists for all tastes, moods, desires and functional needs are created by algorithmic configurations. This is a radical change in the dynamics of the music industry – in particular, the relationship between listeners and music is revolutionized. The magic of the data-driven approach applied to streaming is that music is available in such quantity and variety that, paradoxically, the relationship between user and music becomes disintermediated and direct, because the algorithmic data stream allows for more nuanced experiences and choices.

The consequence of this is that an average user can access music and artists that s/he would perhaps never have considered before – and at the same time artists who would have had no clout are heard by a greater audience. This is certainly a boost for the music industry, and for any musician who wishes to expand the reach of her/his music. At the same time, disintermediation necessarily bypasses those structures that were put in place in previous eras to protect legal interests.

It is certainly necessary to create new structures through which music can be discovered without getting caught up in legal tangles, while at the same time compensating artists for their music.

As data-driven services emerge in the music industry, a data-driven approach needs to be adopted by music societies as well. I am imagining a universal music society with a database to which artists and music labels sign up, where songs and recordings are matched by a Shazam-type service (with conflicts resolved through an online dispute resolution service). All streaming and downloading services would link to this database, and payment to the relevant right-holder would be automatic and immediate. I would go further and imagine different levels of payment, where for example new songs by unknown artists are remunerated based on a rating level given by listeners, so that copyright compliance becomes a boost to music discovery rather than a gatekeeper to its distribution.
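The universal registry imagined above could work roughly as follows. Everything in this sketch is hypothetical: the fingerprint strings stand in for real audio fingerprinting, and the per-stream rate is an arbitrary placeholder, not any actual royalty rate:

```python
# Toy sketch of the imagined universal rights database: recordings are
# registered with a fingerprint, each stream is matched against the
# registry, and payment accrues automatically to the right-holder.
registry = {}   # fingerprint -> right-holder
payouts = {}    # right-holder -> accrued amount

def register(fingerprint, right_holder):
    registry[fingerprint] = right_holder

def report_stream(fingerprint, rate_per_stream=0.004):
    # Hypothetical flat rate; the post imagines tiered, rating-based rates.
    holder = registry.get(fingerprint)
    if holder is None:
        return None  # unmatched: would go to online dispute resolution
    payouts[holder] = payouts.get(holder, 0.0) + rate_per_stream
    return holder

register("fp-1a2b", "Example Artist / Example Label")
for _ in range(1000):
    report_stream("fp-1a2b")
print(round(payouts["Example Artist / Example Label"], 2))  # 4.0
```

The design choice worth noting is that matching and payment happen in the same step, which is precisely what today’s split between performing and mechanical rights societies prevents.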



AI legal issues

At WiredNextFest, held in Milan these past few days, there were conversations about the legal issues arising from the development of AI. A key point was missing, however: the legal problems concerning the next generation of AI, particularly with regard to liability, derive from an AI’s ability to create its own algorithms, and not merely to operate on the basis of those built in by its maker.

A proposal for a Family Business Corporate Governance Code

On 23 May, at Bocconi University’s campus, AIdAF-EY and Bocconi University presented their proposal for a Voluntary Corporate Governance Code for family businesses.

The Code was prepared by Prof. Alessandro Minichilli and Prof. Maria Lucia Passador.

Adherence to the Code would be voluntary. Its purpose is to create a reliable governance structure for non-listed family businesses. In recent years, several European countries, such as Belgium, Spain and Finland, have promoted corporate governance codes for non-listed companies. In 2011, the IFC (World Bank Group) published a Family Business Governance Handbook.

The benefits of adopting a reliable and transparent corporate governance structure are numerous for a non-listed company. Such a structure not only protects the company’s business and assets, but also makes the company more credible to third-party business partners and potential investors, especially international ones. It also attracts outside talent, as outside managers are often reluctant to join a close-knit family business.

This is very good news, especially in Italy, where family businesses contribute 94% of the national GDP (source: FFI Datapoints) yet often struggle with succession planning and expansion – challenges that transparency and clear rules can help address.


Why Artificial Intelligence Will Need a Legal Personality

The development of robotics and artificial intelligence (AI) is an exciting, relentless reality which is slowly making its way out of science fiction movies and into our mundane world.

Furthermore, people and technology are increasingly interacting at an individual, daily level. The increased occasions for interaction between humans and AI systems have great potential not only for economic growth but also for individual empowerment, as explained in the January 2017 McKinsey Global Institute report, which interestingly finds that almost every occupation has partial automation potential, but that it is individual activities, rather than entire occupations, that will be most affected by automation. Consequently, the report concludes that realizing automation’s full potential requires people and technology to work hand in hand.

This interaction, however, triggers a complex set of legal risks and concerns. Ethical issues are raised as well.

The key legal issues to be addressed with some urgency are human physical safety, liability exposure and privacy/data protection.

Ethical concerns cover the dignity and autonomy of human beings and include not only the impact of robots on human life but also, conversely, the ability of a human body to be repaired (such as with bionic limbs and organs), then enhanced, and ultimately created, by robotics – and the subtle boundaries that these procedures may push over time.

The current legal frameworks are by definition not wired to address the complex issues raised by AI. The consequence of this is the need to find a balanced regulatory approach to robotics and AI developments that promotes and supports innovation, while at the same time defining boundaries for the protection of individuals and the human community at large.

In this respect, on 31 May 2016 the European Parliament (“EP”) issued a draft report on civil law rules on robotics. The report outlines the European Parliament’s main framework and vision on the topic of robotics and AI.

While the report is still speculative and philosophical, it is very interesting – especially where it defines AI, and therefore “smart robots” as machines having the following characteristics:

  • The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data
  • The capacity to learn through experience and interaction
  • The form of the robot’s physical support
  • The capacity to adapt its behaviours and actions to its environment.

The EP’s report also broadly defines six key regulatory themes which are raised by developments in the area of robotics and AI:

  • rules on ethics;
  • rules on liability;
  • connectivity, intellectual property, and flow of data;
  • standardisation, safety and security;
  • education and employment;
  • institutional coordination and oversight.

The report concludes that the implications of these technologies are necessarily cross-border, and that it would therefore be a waste of resources and time for each individual country to set out its own rules; it accordingly recommends a unified EU regulation.

Truly, the implications are cross-border and require a collaborative effort. It is wise to presume, though, that certain countries will be more open-minded and flexible than others in defining the limits of AI autonomy, or more restrictive in setting out its boundaries, and it may also be inevitable that certain countries will lead the way in regulating AI and robotics.

The policy areas where, according to the EP’s position, action is necessary as a matter of priority include: the automotive sector, healthcare, and drones.

The Liability Issue

The increased autonomy of robots raises, first of all, questions regarding their legal responsibility. At this time, robots cannot be held liable per se for acts or omissions that cause damage to third parties: existing liability rules cover cases where a robot’s act or omission can be traced back to a specific human agent – the manufacturer, operator, owner or user – on whom liability ultimately rests.

When pointing to the automotive sector as an urgent area needing regulation, the committee was certainly thinking of self-driving cars, which are already being tested in California; a driverless-car trial is set for UK motorways in 2019, and government funding has been dedicated to research on autonomous cars. In September 2016, Germany’s transport minister proposed a bill providing a legal framework for autonomous vehicles which assigns liability to the manufacturer.

However, in a scenario where a robot can take autonomous decisions, the traditional ownership/manufacturing liability chain is insufficient to address the complex issue of a robot’s liability (both contractual and non-contractual), since it would not correctly identify the party which should bear the burden of providing compensation for the damage caused. This civil liability issue is considered “crucial” by the committee.

Data protection and intellectual property rights

Other key issues in relation to developments in robotics are the rules on connectivity and data protection. While existing laws on privacy and the use of personal data can be applied to robotics in general, practical applications may require further consideration – e.g. standards for the concepts of “privacy by design” and “privacy by default”, informed consent, and encryption, as well as the use of personal data both of humans and of intelligent robots that interact with humans.

Intellectual property rights also need to be considered, if one is prepared to accept that there will at some point be a need to protect the “own intellectual creation” of advanced autonomous robots.

One proposal to address these issues has been to assign robots an “electronic” personality.

A Proposal

The EP’s report recommends that the EU Commission explore the implications of all possible legal solutions, including that of creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons, with specific rights and obligations, including that of indemnifying any damage they may cause; electronic personality would apply to cases where robots make smart autonomous decisions or otherwise interact with third parties independently.

While this is a good idea, it might take time until it is applicable to all robots as for a robot to have the status of an “electronic person” its autonomous capabilities would need to be particularly enhanced.

One can imagine a liability regime where liability is proportionate to the actual level of instructions given to the robot and to its degree of autonomy: the greater a robot’s learning capability or autonomy, the lower the other parties’ responsibility should be, taking into account the kind of development the robot has had and the kind of instructions or “education” it has received.

However, it would not always be easy to distinguish skills resulting from the ‘education’ given to a robot from skills depending strictly on its self-learning abilities. This implies that, when trying to identify responsibility, there would be huge grey areas.
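The sliding scale suggested above can be expressed as a toy model. The autonomy score and the owner/manufacturer split are entirely hypothetical illustrations, not a proposed legal formula – and the grey area is everything such a score cannot capture:

```python
# Toy model of liability proportionate to autonomy: the more a robot's
# behaviour stems from self-learning rather than instructions, the smaller
# the share borne by owner and manufacturer. Entirely illustrative numbers.
def liability_split(autonomy, owner_weight=0.5):
    """autonomy: 0.0 (fully instructed) .. 1.0 (fully self-directed).
    Returns shares for (robot's own estate/insurance, owner, manufacturer)."""
    if not 0.0 <= autonomy <= 1.0:
        raise ValueError("autonomy must be in [0, 1]")
    human_share = 1.0 - autonomy
    owner = human_share * owner_weight
    manufacturer = human_share * (1.0 - owner_weight)
    return autonomy, owner, manufacturer

# A highly autonomous robot: most of the burden falls on the robot's
# own legal personality (and its insurance), little on the humans.
robot_share, owner, manufacturer = liability_split(0.8)
print(robot_share, round(owner, 2), round(manufacturer, 2))  # 0.8 0.1 0.1
```

At autonomy 0.0 the model collapses to today’s position, with liability split entirely between owner and manufacturer; the grey areas arise precisely in assigning the score.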

A middle-level solution is needed for situations where a robot is capable of autonomous learning and decisions but apt only for specific uses, and not yet sophisticated enough to be endowed with the status of electronic person – as might be the case for an autonomous car.

I believe instead that one possible solution could be to provide each AI with a legal personality akin to that currently afforded to corporations.

The benefits of this would be:

– registration/incorporation of the robot;

– a clear locus of responsibility, with specific rules and an entity to be considered in terms of liability and insurance;

– the ability for robots to enter into contracts with each other and with humans, with specific responsibilities arising out of the breach of such contracts.

One downside is that this type of legal status still requires an owner (a “shareholder”) with limited liability, which means that ultimate responsibility, although limited, would not necessarily be placed on the manufacturer but on the owner, thereby returning to a position of insufficient protection. However, in the case of autonomous cars, for example, the owner of the car could be considered the holder of the legal entity, with limited liability and an obligation to insure the vehicle.

Clearly, the topic still needs to be explored, and possible solutions will evolve over time as practical problems arise and AI develops, but I believe that at this time this might be the best solution to put forward to address current AI-related concerns as we know and understand them. Ultimately, perhaps, it will be AI itself that proposes a solution.

[Stefania Lucchetti was also quoted on her views on AI in]




Intelligenza Artificiale, Robotica e Personalità Giuridica

[English Version]

Lo sviluppo della robotica e dell’intelligenza artificiale (AI) è una realtà entusiasmante e inarrestabile che sta lentamente facendo il suo corso e spostandosi dal cinema science fiction per trasferirsi nel mondo reale.

Inoltre, gli esseri umani e la tecnologia interagiscono in modo crescente in modo personale e quotidianamente.  Le crescenti occasioni di interazione tra gli esseri umani e i sistemi di intelligenza artificiale hanno enorme potenziale non solo per la crescita economica ma anche per il potenziamento dell’individuo, come ben spiegato nel McKinsey Global Institute report del gennaio 2017, che sottolinea come quasi ogni occupazione ha un parziale potenziale di automazione, e tuttavia saranno singole attività piuttosto che intere tipologie occupazionali ad essere impattate dall’automazione.

Di conseguenza, conclude che la realizzazione del potenziale pieno dell’automazione richiede una collaborazione tra umani e tecnologia.

Questa interazione tuttavia provoca una serie complessa di rischi e preoccupazioni, oltre a inevitabili questioni etiche.

Le questioni giuridiche principali che devono essere affrontate con urgenza sono la sicurezza fisica effettiva degli esseri umani,  l’esposizione a responsabilità e le questioni di privacy e protezione dei dati.

Le preoccupazioni di natura etica riguardano la dignità e autonomia degli esseri umani e includono non solo l’impatto dei robot sulla vita umana, ma anche, parallelamente,, la capacità del corpo umano di essere riparato (per esempio con arti e organi bionici), migliorato e infine creato, dalla robotica – e i sottili confini che queste procedure sorpassano nel tempo.

L’attuale regolamentazione giuridica è per definizione non strutturata per affrontare le questioni complesse sollevate dall’intelligenza artificiale. La conseguenza di questo è la necessità di trovare un approccio regolamentare bilanciato allo sviluppo della robotica e l’intelligenza artificiale che promuova e supporti l’innovazione, e allo stesso tempo definisca i confini per la protezione degli individui e della comunità umana in generale.

Su queste note, il Parlamento Europeo (“PE”) il 31 marzo 2016 ha emesso un progetto di relazione sulle norme di diritto civile applicabili alla robotica. La relazione delinea la visione e generale quadro di riferimento del Parlamento Europeo sul tema della robotica e intelligenza artificiale.

Se da una parte la relazione è ancora speculativa con accenti filosofici, è anche estremamente interessante – specialmente laddove definisce e classifica l’intelligenza artificiale, e quindi i “robot intelligenti” quali aventi le seguenti caratteristiche:

  • La capacità di acquisire autonomia grazie a sensori e/o mediante lo scambio di dati con il proprio ambiente (interconnettività) e l’analisi di tali dati
  • La capacità di apprendimento attraverso l’esperienza e l’interazione;
  • La forma del supporto fisico del robot;
  • La capacità di adeguare il suo comportamento e le sue azioni al proprio ambiente

La relazione del PE inoltre definisce sei principali temi regolatori sollevati dallo sviluppo della robotica e dell’intelligenza artificiale:

  • Principi generali ed etici;
  • Norme in materia di responsabilità;
  • Diritti di proprietà intellettuale, protezione dei dati e proprietà dei dati;
  • Normazione, sicurezza e protezione;
  • Istruzione e occupazione;
  • Coordinamento istituzionale e monitoraggio

La relazione conclude sottolineando che le implicazioni di queste tecnologie sono necessariamente internazionali e quindi se ogni singola nazione definisse delle regole separate questo costituirebbe una perdita di risorse – raccomandando quindi una regolamentazione europea unificata.

Certamente, le implicazioni sono internazionali/cross border e richiedono uno sforzo collaborativo, sebbene sia saggio presumere che alcune giurisdizioni saranno più aperte e flessibili di altre nel definire i limiti dell’autonomia dell’intelligenza artificiale, o più restrittive nel definirne i confini. E’ inoltre inevitabile che alcune nazioni siano alla guida nell’evoluzione della regolamentazione dell’intelligenza artificiale e della robotica.

Le aree nelle quali, secondo la posizione del PE, è necessaria un’azione regolamentare con priorità includono: il settore automotive, il settore medicale, e i droni.

La Questione della Responsabilità

The growing autonomy of robots raises, first of all, the question of legal liability arising from a robot's harmful actions. As matters stand, a robot cannot be held liable in its own right for acts or omissions that cause damage to third parties. Existing liability rules cover cases where the cause of a robot's act or omission can be traced back to a specific human agent, such as the manufacturer, the operator, the owner or the user, or cases where manufacturers, operators, owners or users could be held strictly liable for a robot's acts or omissions.

As regards the automotive sector, which is considered an area requiring urgent regulatory intervention, the main issue obviously concerns self-driving vehicles, which are already being tested in California and will be tested in the UK in 2019 (note also that in September 2016 the German transport ministry proposed a law to regulate self-driving vehicles which places liability on the manufacturer).

However, in a scenario where a robot can take autonomous decisions, the traditional chain of liability based on ownership or manufacture is not sufficient to address the complex issues of a robot's liability (both contractual and non-contractual), since the existing principles would not be capable of correctly identifying the party that should bear the burden of compensating the damage caused. The question of civil liability is considered "crucial" by the committee.

Data Protection and Intellectual Property Rights

Other relevant issues in relation to the development of robotics concern connectivity rules. While existing laws on privacy and the use of personal data can be applied to robotics in general, practical applications raise further considerations, namely the regulation of standards for "privacy by design", informed consent and encryption, as well as the use of the personal data both of human beings and of intelligent robots interacting with human beings.

Intellectual property rights must also be considered if one accepts that at some point there will be a need to protect the "own intellectual creation" of advanced robots.

One proposal put forward to address these issues has been to confer "electronic personality" on robots.

A Proposal

The EP report recommends that the EU Commission explore the implications of all possible legal solutions, including that of creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be granted the status of "electronic persons" with specific rights and obligations, including that of making good any damage they may have caused, and applying electronic personality to cases where robots take autonomous intelligent decisions or otherwise interact autonomously with human beings.

While this proposal is certainly a valid idea, it may take some time before it becomes applicable to all robots, since for a robot to have the status of "electronic person" its autonomous capabilities would have to be particularly pronounced and advanced.

Envisaging a regime in which liability is proportionate to the actual level of instructions given to the robot and to its autonomy, one should take into account that as a robot's learning capability or autonomy increases, the liability of the other parties involved should decrease accordingly, considering what kind of development the robot has undergone and what kind of instructions or "education" it has received.

However, it would not always be easy to distinguish capabilities deriving from the "education" given to a robot from capabilities that depend strictly on its self-learning abilities. This implies that, in trying to allocate liability, one would run into vast grey areas.
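
The proportional regime described above can be illustrated with a deliberately simple toy model (purely illustrative; the linear split and the party categories are my own assumptions, not taken from the EP report):

```python
def apportion_liability(autonomy: float, damages: float) -> dict:
    """Toy apportionment of liability for damage caused by a robot.

    The more autonomous the robot (score in [0, 1]), the larger the
    share borne by the robot's own (insured) legal entity and the
    smaller the share borne by the human parties (manufacturer,
    operator, owner, user). The linear split is illustrative only;
    a real regime would also weigh the robot's "education" and the
    grey areas discussed above.
    """
    if not 0.0 <= autonomy <= 1.0:
        raise ValueError("autonomy must be in [0, 1]")
    return {
        "robot_entity": autonomy * damages,
        "human_parties": (1.0 - autonomy) * damages,
    }
```

For a highly autonomous robot (say a 0.75 autonomy score) causing 100,000 in damages, the robot's insured entity would bear 75,000 and the human parties 25,000 under this sketch; the grey areas noted above are exactly what makes assigning the autonomy score itself contentious.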

A middle ground is therefore needed for those cases in which the robot is capable of learning and autonomous decision-making but is suited only to specific uses and not yet sophisticated enough to be granted the status of "electronic person", such as, for example, an autonomous vehicle.

I believe instead that a possible solution is to attribute to each artificial intelligence a legal personality comparable to that attributed to legal entities.

The benefits would be:

– registration/incorporation of the robot

– a legal entity to which liability can be attributed, with specific rules and the possibility of taking out insurance coverage

– the capacity to enter into contracts with one another and with human beings, from which specific liabilities, and consequences for breach of the obligations set out therein, would derive.

A downside of this proposal is that this type of legal status requires an owner (a "shareholder") with limited liability, which means that ultimate liability, although limited, would not necessarily be placed on the manufacturer but on the owner, returning us to a position of insufficient protection.

However, in the case of self-driving vehicles for example, the owner of the car could be considered the owner of the autonomous vehicle "legal entity": as such, he or she would have limited liability and would be under an obligation to insure the vehicle.

Another issue concerns the possible difficulty, in certain cases, of physically "delimiting" the scope of a specific artificial intelligence.

Clearly, the question still needs to be studied and weighed, and possible solutions will evolve in parallel with the development of artificial intelligence. I believe, however, that at this point in history this may be a valid solution to propose for addressing the current issues of artificial intelligence as we know and understand it today. Nor can it be ruled out that, perhaps in a few years' time, it will be artificial intelligence itself that proposes the best solutions.




© Stefania Lucchetti 2017.  For further information contact the author.

Articles may be shared or reproduced only in their entirety and with full credit and citation of the author.

Cross Border M&A – when to opt for a minority stake in a cross border joint venture

Whether your company has engaged in successful joint venture activities for years or is new to joint ventures, there is always an element of uncertainty in the decision to enter into a cross-border joint venture, whether the objective is to expand the reach and distribution of the company's products and services in highly developed countries or in emerging markets.

Whatever the key strategic purpose your company wants to achieve, there are two options for entering into a joint venture, each creating different outcomes and specific governance issues. Whether the joint venture is established through the acquisition of an existing company or the set-up of a joint venture vehicle, your company may opt for either a majority stake or a minority stake.

The decision to opt for a minority stake may be driven by various factors, including the power relationship with the JV partner.

In emerging markets, this choice is often driven by two key considerations:

Regulatory Constraints – Regulatory constraints in specific markets may cause the foreign investment to be restricted to minority investment levels.

Commercial Credibility – Accepting a minority stake may also reflect the need and strategy to enter the market with a credible local JV partner which has already established scale and reputation. This brings along an advantage where the JV partner is operating solidly and effectively in the emerging market environment, and has established government and public policy relations.

Aside from the above, this strategy may be useful or necessary for pure strategic purposes, where the JV partner has the commercial lead in the JV for example because it has proprietary technology, key products or client base/distribution platform which your company heavily relies on.

When deciding to enter into a JV where your company will hold a minority stake, the key point to consider is that this structure requires your company to be prepared to rely more heavily on the JV partner's capacity to lead the joint venture and achieve common objectives.

In this circumstance, one of the key issues to be addressed is the establishment of minority protections both at the shareholders meeting and board level. This needs to be done by carefully negotiating and drafting a shareholders’ agreement and ancillary documents which include such protections, in order to achieve a governance structure that balances the powers of the JV partners to achieve the desired objectives.


Augmented Reality Mirrors – Fashion Meets Digital and Privacy Concerns

If, like me, you have switched to e-commerce because you hate the experience of trying clothes on in fitting rooms (as do 46% of customers, according to a 2016 survey by Body Labs), but end up sending back half of your purchases because they don't fit, or look and feel different from what you expected from seeing them in 2D on a screen; or if you are a retailer trying to increase sales (shoppers who do use a fitting room are apparently much more likely to make a purchase, according to a study by retail analytics company Alert Tech), you may be thrilled by the new trend in the digital revolution for retail: digital mirrors.

We have already seen them in some fashionable Milan stores, although at this time they are more focused on infotainment and not yet as advanced as they could be, or purport to be.

Retailers already know the benefit of offering interactive, personalised in store experience – a customer is much more likely to walk out with a purchase if s/he receives personalized advice.
Digital mirrors may provide an innovative and efficient method of reinventing the fitting room experience by offering 360-degree views of outfits; touchscreen technology to browse other colours, sizes and suggested items that can be put together to create an entire outfit.

It won’t be long before the technology will offer personalised compliments and changing lighting conditions to make clothes look better.

Of course, there is a catch to digital mirrors: while they can provide useful information to the shop about the user experience, including which items are brought into the changing room and which items the shopper decides to buy out of the ones s/he has selected, they mean the changing room experience is no longer private. E-commerce long ago chipped away at the privacy of our shopping experience (be that as it may, our shopping history on Amazon or any other e-commerce platform is recorded); now the virtual changing room experience will remove another layer of privacy.

Is it worth it? It depends as always on the personal boundaries of each individual and the perceived benefit of digital shopping against private changing room.  For a number of shops which have already implemented augmented reality mirrors, one of the benefits for the shopper is not having to undress to try on certain garments, or explore new colors. It may not be long however until the virtual changing room will start marketing additional services to the shopper, such as a personalised diet plan and other similar suggestions.

Ultimately, the key concerns relate to privacy and data protection and the expanding reach of profiling and data recording of the single user's preferences.  Stores will have to find a balance between user experience, sales data and compliance with privacy laws. Ultimately, this will create a further segmentation in the market, as mature shoppers will prefer more intimate, private changing room experiences, while young shoppers will probably flock to shops that feature a more public type of augmented reality mirror (and will not be able to resist sharing the experience).

Luckily for European consumers, Art. 17 of the new GDPR (Regulation (EU) 2016/679 – adopted on 8 April 2016, taking effect on 25 May 2018) includes a right to erasure (the "right to be forgotten"), and Art. 21 (right to object) may also prove useful.  These new provisions, which were adopted following the CJEU decision in the Google Spain case, allow individuals to require the data controller to erase their personal data without undue delay, subject to certain conditions, e.g. where no other legal ground for processing applies.

This will, however, often be difficult to manage in practice, as it requires the controller to inform third parties to which the data has already been disclosed that the data subject has requested erasure of any links to, or copies of, that data.
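
To see why that notification duty is operationally hard, consider that the controller must track every downstream disclosure per data subject and then fan out erasure requests to each recipient. A minimal sketch of such a flow (all function names and the in-memory registry are hypothetical illustrations, not any particular vendor's API):

```python
# Hypothetical registry: for each data subject, the third parties to
# which their personal data has been disclosed. Art. 17(2) GDPR requires
# the controller to inform those recipients of an erasure request.
disclosures: dict[str, set[str]] = {}

def record_disclosure(subject_id: str, recipient: str) -> None:
    """Log every disclosure at the time it happens; without this
    record, the later notification duty cannot be fulfilled."""
    disclosures.setdefault(subject_id, set()).add(recipient)

def handle_erasure_request(subject_id: str, erase, notify) -> list[str]:
    """Erase the subject's data locally, then notify each recipient
    that links to or copies of the data should also be erased.
    `erase` and `notify` are caller-supplied callables (hypothetical).
    Returns the list of recipients notified."""
    erase(subject_id)
    notified = []
    for recipient in sorted(disclosures.get(subject_id, ())):
        notify(recipient, subject_id)  # ask recipient to erase links/copies
        notified.append(recipient)
    disclosures.pop(subject_id, None)
    return notified
```

The sketch makes the practical burden visible: the disclosure log must be complete from day one, and the controller has no technical guarantee that a notified recipient actually erases its copies.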


Cybersecurity and board responsibilities

The "WannaCry" ransomware attack that disrupted businesses around the world on 12 May has made it necessary to consider more carefully the impact of a cyberattack and its implications, not only for the protection of consumer data but also for the company's financial and sensitive data.

A cyberattack can not only cause the loss of a company's consumer data; it can also expose confidential information relating to the company, such as ongoing regulatory investigations, or cause the loss of intellectual property in addition to consumer data.  Both financial and reputational risks are at stake for a company.

Boards are therefore increasingly coming to the realization that a data leak due to cybercrime is a serious risk management issue.

This is a challenge: while most directors are somewhat informed about cybersecurity, it is often very difficult for them to stay updated with the latest information, and especially to deploy sufficient investment to protect the company from ever-changing cyber risk. Moreover, in most companies cybersecurity has been delegated to an IT manager without a sufficient budget or decision-making power.

Accepting that this is a key enterprise risk which needs to be addressed at board level, and not just at IT management level, is an essential shift that boards need to make.

The key reason is that a lack of proper action may lead to board liability towards the company (e.g. under Art. 2392 of the Italian Civil Code, for failure to take appropriate action to protect the company).
