Why Artificial Intelligence Needs to be on Your Board’s Corporate Governance Agenda

Artificial Intelligence means many things at many levels. The most advanced form of Artificial Intelligence – or AGI (Artificial General Intelligence) – may not materialize for years (or decades). However, entrepreneurs, investors and board members need to be aware of what it is, what it could become, which changes it could bring about and what it could mean for their business.

At a more everyday level, Artificial Intelligence is already part of our lives, and more specifically of business. Artificial Intelligence at its most basic level – Artificial Narrow Intelligence (ANI) – is software which can process huge amounts of data (“big data”) based on a set of rules or instructions (“algorithms”) and turn it into meaningful information and problem solutions.
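
By way of illustration only – the data and the rule below are invented examples, not any real product – an ANI-style system can be as simple as a fixed rule (an “algorithm”) that turns raw purchase data into a meaningful recommendation:

```python
# Toy illustration of ANI: a fixed rule turning raw data into a
# recommendation. The data and the rule are invented examples.

purchases = [
    {"customer": "A", "item": "running shoes"},
    {"customer": "A", "item": "running shorts"},
    {"customer": "B", "item": "espresso machine"},
]

def recommend(customer: str) -> str:
    """Rule: a customer who bought two or more running items gets a watch."""
    running = [p for p in purchases
               if p["customer"] == customer and "running" in p["item"]]
    return "GPS running watch" if len(running) >= 2 else "no suggestion"

print(recommend("A"))  # GPS running watch
print(recommend("B"))  # no suggestion
```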

Artificial Intelligence is everywhere, most notably in smartphones, and on a daily basis we interact with algorithm-based services such as Spotify, Amazon, Facebook and Netflix.

AI-driven organizations like Facebook, Google (Alphabet), Amazon, IBM and Microsoft are investing heavily in AI development: algorithms drive their business. However, all other “traditional” industries are also being greatly impacted by AI: the automotive industry is facing a revolution with AI-powered self-driving cars (see my previous post Why Artificial Intelligence Will Need a Legal Personality), and the retail supply chain is becoming increasingly efficient thanks to data and AI.

Not everyone is enthusiastic about AI’s development: while scientists such as Ray Kurzweil are eager to push AI to the next level, some influential personalities in the field (notably Stephen Hawking and Elon Musk) have warned about the need to tread carefully in the rush to develop and deploy AI. Whatever your personal position on the matter, it is undoubtedly true that all companies will adopt, or continue to adopt, increasingly sophisticated AI technologies at some level in the coming months and years to stay abreast of the market, be it to implement Industry 4.0 production and logistics solutions or to meet their customers’ needs.

This is why boards in any industry cannot at this stage ignore the impact of Artificial Intelligence: what it means for their business and their competitors’ business, what opportunities it may bring, and what challenges and risks it poses. Boards need to include a discussion of it in their corporate governance agenda.

What should a board be talking about when discussing Artificial Intelligence?

First of all, cybersecurity – which I have already discussed in a previous post (see Cybersecurity and board responsibilities). I will reiterate that data is one of the most valuable assets a company has – be it its customers’ data, its know-how and IP, its historical records, data about its business operations or any other data that flows through the company’s servers.

Secondly, implementation of AI technology in the company’s core business – what kind of technology to purchase and what to use it for. This involves all industries (including the very traditional legal industry, which is now being targeted with increasing demands to purchase expensive AI due diligence and disclosure technology).

Purchasing AI-based technology often involves processing and sharing data with the technology provider, which again goes back to the point about cybersecurity and a solid data infrastructure.

Finally, the board needs to update its language skills to understand the language of AI. I discussed in a previous post (Self Aware Contracts) the language gap between traditional industries powered by natural language and the new developments brought about by AI-powered enhancements, which create the need for communication traditionally carried out in natural language (eg, contracts) to be “translated” into machine language.

This means that developing those language skills, much like learning a second language – or bringing to the board table someone who already has them – can and should be an important corporate governance priority for a company’s board of directors.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

Africa’s Digital Acceleration

The first time I visited Kenya and Tanzania was at the beginning of 2008, and I was surprised to see how countries with no phone lines had bypassed the need for that infrastructure with mobile connections, at times more widespread than in rural areas of Asia or even Europe.

Several years went by, and on my second trip to Kenya (the first of many) in 2016 I discovered a country that, rather than merely accepting the deployment of existing technologies from the US or Europe, is striving to be at the forefront of innovation. Take M-Pesa (M – of course – for mobile; pesa meaning money in Swahili). M-Pesa is a mobile phone based money exchange platform, launched at the end of the last decade by Safaricom and Vodacom in Kenya and Tanzania. The service allows users to deposit money into an account stored on their mobile phones and to transfer money in a direct peer-to-peer exchange through text messages, completely doing away with bank services or other intermediaries.
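
To make the disintermediation concrete, here is a minimal sketch of how such a mobile-money ledger might work conceptually. The class, rules and phone numbers are purely illustrative assumptions, not Safaricom’s or Vodacom’s actual system:

```python
# Minimal conceptual sketch of a mobile-money ledger in the spirit of
# M-Pesa. All names and rules are illustrative assumptions.

class MobileMoneyLedger:
    def __init__(self):
        self.accounts = {}  # phone number -> balance (eg in KES)

    def deposit(self, phone: str, amount: int) -> None:
        """Cash handed to an agent is credited to the phone account."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.accounts[phone] = self.accounts.get(phone, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        """Peer-to-peer transfer, conceptually triggered by a text message,
        with no bank or other intermediary in between."""
        if self.accounts.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.accounts[sender] -= amount
        self.accounts[receiver] = self.accounts.get(receiver, 0) + amount

ledger = MobileMoneyLedger()
ledger.deposit("+254700000001", 5_000)
ledger.transfer("+254700000001", "+254700000002", 1_500)
print(ledger.accounts)  # {'+254700000001': 3500, '+254700000002': 1500}
```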

Mobile-based payment is a technology that has not quite taken off in Europe, the US or Asia – though Apple Pay is trying to push it through, with limited success – but it has been transformative for the microeconomy in Kenya, where a large slice of the population still does not have a bank account (and, I might add, for those who do, ATM machines do not work as often as one might expect or hope).

Since then, M-Pesa has spread its wings to a number of other countries in Africa, the Middle East and Eastern Europe. In the meantime, the appetite for digital services has grown. In May this year, Telkom Kenya launched a free WhatsApp service, much appreciated by its users. The move could steer users away from Telkom Kenya’s competitors, while allowing time for digital services powered by mobile apps (such as Amazon and Amazon Prime, which are not yet available in Kenya but hopefully soon will be) to grow and spur data use.

Other African countries have joined the bubbling pot: in late 2016 Senegal announced the launch of the eCFA Franc, a digital currency. It is not yet clear how successful the eCFA Franc is proving, but the idea and its potential are certainly a promising start.

Education is not lagging behind: in April this year, for example, the Rwandan government announced a partnership with Microsoft to digitise Rwandan education through a “smart-classroom” project. Nairobi schools offer very advanced programs, fully integrating solid academics with access to technology.

What is the source of the wind behind Africa’s innovation streak?

Most African countries are developing economies with great potential for growth. In past decades, progress was held back by the lack of traditional old-economy infrastructure (eg roads and landlines). However, by doing away with the need to invest capital and labour in building that infrastructure, and jumping straight to digital, there is now space to experiment with new technology without being weighed down by the bottlenecks of existing infrastructure and its regulatory constraints.

Disintermediation is also key to new possibilities. Where intermediation finds its bottleneck in bureaucracy and corruption, the possibilities offered by peer-to-peer technology (such as M-Pesa) and disintermediating technology such as blockchain are infinite.

Urban planning is another area where innovation would be of great benefit to Africa. Take the city of Nairobi, where a very modern lifestyle battles with the absence of modern, well-connected roads, while Jomo Kenyatta International Airport is better connected than our (sadly) decreasingly connected Milan airports and the small, surprisingly efficient Wilson Airport serves flights to a number of regional tourist and business destinations. A futuristic approach might do away with the need for roads and jump straight to drone transportation for logistics, or even – with a further technological acceleration – experiment with flying cars.

A further acceleration in innovation can be expected when entrepreneurs gain wider access to financing via venture capital funds or non-traditional means such as crowdfunding.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

 

Why Artificial Intelligence Will Need a Legal Personality

The development of robotics and artificial intelligence (AI) is an exciting, relentless reality which is slowly making its way out of science fiction movies and into our mundane world.

Furthermore, people and technology are increasingly interacting at an individual, daily level. The increased interaction between humans and AI systems has great potential not only for economic growth but also for individual empowerment, as explained in the January 2017 McKinsey Global Institute report, which interestingly finds that while almost every occupation has partial automation potential, it is individual activities rather than entire occupations that will be most affected by automation. Consequently, it concludes that realizing automation’s full potential requires people and technology to work hand in hand.

This interaction, however, triggers a complex set of legal risks and concerns. Ethical issues are raised as well.

The key legal issues to be addressed with some urgency are human physical safety, liability exposure and privacy/data protection.

Ethical concerns cover the dignity and autonomy of human beings and include not only the impact of robots on human life but also, conversely, the impact of the ability of a human body to be repaired (such as with bionic limbs and organs), then enhanced, and ultimately created, by robotics – and the subtle boundaries that these procedures may push over time.

The current legal frameworks are by definition not wired to address the complex issues raised by AI. The consequence of this is the need to find a balanced regulatory approach to robotics and AI developments that promotes and supports innovation, while at the same time defining boundaries for the protection of individuals and the human community at large.

In this respect, the European Parliament (“EP”) issued a draft report on civil law rules on robotics on 31 May 2016. The report outlines the European Parliament’s main framework and vision on the topic of robotics and AI.

While the report is still speculative and philosophical, it is very interesting – especially where it defines AI, and therefore “smart robots” as machines having the following characteristics:

  • The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data
  • The capacity to learn through experience and interaction
  • The form of the robot’s physical support
  • The capacity to adapt its behaviours and actions to its environment.

The EP’s report also broadly defines six key regulatory themes which are raised by developments in the area of robotics and AI:

  • rules on ethics;
  • rules on liability;
  • connectivity, intellectual property, and flow of data;
  • standardisation, safety and security;
  • education and employment;
  • institutional coordination and oversight.

The report concludes that the implications of these technologies are necessarily cross-border, and that it would therefore be a waste of resources and time for each individual country to set out its own rules; it recommends a unified EU regulation.

Truly, the implications are cross-border and require a collaborative effort, although it is wise to assume that certain countries will be more open-minded and flexible than others in defining the limits of AI autonomy, or more restrictive in setting out its boundaries; it might also be inevitable that certain countries will lead the way in regulating AI and robotics.

The policy areas where, according to the EP’s position, action is necessary as a matter of priority include: the automotive sector, healthcare, and drones.

The Liability Issue

The increased autonomy of robots raises, first of all, questions regarding their legal responsibility. At this time, robots cannot be held liable per se for acts or omissions that cause damage to other parties: they are machines, and liability therefore rests on the owner or, ultimately, the producer.

When pointing to the automotive sector as an urgent area needing regulation, the committee was certainly thinking of self-driving cars, which are already being tested in California; a driverless car trial is set for UK motorways in 2019, and government funding has been dedicated to research on autonomous cars. In September 2016, Germany’s transport minister proposed a bill providing a legal framework for autonomous vehicles which assigns liability to the manufacturer.

However, in a scenario where a robot can take autonomous decisions, the traditional owner/manufacturer liability chain is insufficient to address the complex issue of a robot’s liability (both contractual and non-contractual), since it would not correctly identify the party which should bear the burden of providing compensation for the damage caused. This civil liability issue is considered “crucial” by the committee.

Data protection and intellectual property rights

Other key issues in relation to developments in robotics are the rules on connectivity and data protection. While existing laws on privacy and the use of personal data can be applied to robotics in general, practical applications may require further consideration, eg standards for the concepts of “privacy by design” and “privacy by default”, informed consent and encryption, as well as the use of personal data both of humans and of intelligent robots who interact with humans.

Intellectual property rights also need to be considered, if one is willing to accept that at some point there will be a need to protect the “own intellectual creation” of advanced autonomous robots.

Proposals to address these issues have included assigning robots an “electronic” personality.

A Proposal

The EP’s report recommends that the EU Commission explore the implications of all possible legal solutions, including the creation of a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of indemnifying any damage they may cause, with electronic personality applying in cases where robots make smart autonomous decisions or otherwise interact with third parties independently.

While this is a good idea, it might take time before it is applicable to all robots, since for a robot to have the status of an “electronic person” its autonomous capabilities would need to be particularly advanced.

One can imagine a liability regime where liability is proportionate to the actual level of instructions given to the robot and to its autonomy, so that the greater a robot’s learning capability or autonomy, the lower the other parties’ responsibility should be, taking into account the kind of development the robot has had and the kind of instructions or “education” it has received.
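
As a purely illustrative sketch – the apportionment rule and the numbers are my own assumptions, not anything proposed by the EP – such a regime could be modelled as a simple function of an autonomy score:

```python
def apportion_liability(autonomy: float, damages: float) -> dict:
    """Illustrative only: split damages between the human parties
    (owner/manufacturer) and the robot itself, in proportion to how
    autonomous the robot was when it acted.

    autonomy: 0.0 = acting fully on instructions, 1.0 = fully self-directed.
    """
    if not 0.0 <= autonomy <= 1.0:
        raise ValueError("autonomy must be between 0 and 1")
    return {
        "human_parties_share": round(damages * (1.0 - autonomy), 2),
        "robot_or_its_fund_share": round(damages * autonomy, 2),
    }

# A robot acting mostly on self-learned behaviour (autonomy score 0.8):
print(apportion_liability(0.8, 100_000.0))
# {'human_parties_share': 20000.0, 'robot_or_its_fund_share': 80000.0}
```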

However, it would not always be easy to discern skills resulting from ‘education’ given to a robot from skills depending strictly on its self-learning abilities. This implies that, when trying to identify responsibility, there would be huge grey areas.

A middle-level solution is needed for those situations where a robot is capable of autonomous learning and decisions but is suited only to specific uses and not yet sophisticated enough to be endowed with the status of electronic person – such as, perhaps, an autonomous car.

I believe instead that one possible solution could be to provide each AI with a legal personality akin to that currently afforded to corporations.

The benefits of this would be:

– registration/incorporation of the robot

– a head of responsibility, with specific rules and an entity to be considered in terms of liability and insurance

– the ability to enter into contracts with each other and with humans, with specific responsibilities arising out of the breach of such contracts.

One downside is that this type of legal status still requires an owner (a “shareholder”) with limited liability, which means that ultimate responsibility, although limited, would not necessarily be placed on the manufacturer but on the owner, thereby returning to a position of insufficient protection. However, in the case of autonomous cars, for example, the owner of the car could be considered the holder of the legal entity, with limited liability and an obligation to insure the vehicle.

Clearly, the topic still needs to be explored, and possible solutions will evolve with time as practical problems arise and AI develops, but I believe that at this time this might be the best solution to put forward to address current concerns related to AI as we know and understand it. Ultimately, perhaps, it will be AI itself that proposes a solution.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

Cybersecurity and board responsibilities

The “Wannacry” ransomware attack that disrupted businesses around the world on 12 May has led to the need to consider more carefully the impact of a cyberattack and its implications, not only for the protection of consumer data but also for the company’s financial and sensitive data.

A cyberattack can not only cause the loss of a company’s consumer data; it can also expose confidential information relating to the company, such as ongoing regulatory investigations, or cause the loss of intellectual property. Both financial and reputational risks are at stake for a company.

Boards are therefore increasingly coming to the realization that a data leak due to cybercrime is a serious risk management issue.

This is a challenge: while most directors are somewhat informed about cybersecurity, it is often very difficult for them to stay updated with the latest information, and especially to deploy sufficient investment to protect the company from ever-changing cyber risk. Moreover, in most companies cybersecurity has been delegated to an IT manager without a sufficient budget or decision-making power.

Accepting that this is a key enterprise risk which needs to be addressed at board level, and not just at IT management level, is an essential switch that boards need to make.

The key reason is that a lack of proper action may expose the board to liability towards the company (eg under Art. 2392 of the Italian Civil Code, for failure to take appropriate action to protect the company).

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

Token Sales and ICOs: why and when you need legal advice

Digital token sales are creating a gold rush on the web, with a token sale advertised every day in digital circles and on social media and – in some cases – huge sums of money on the move.

Token sales still move in international waters from a legal and regulatory point of view; however, international regulators are paying increasing attention to the issue and considering how to regulate certain specific risks associated with digital tokens, such as money laundering.

What is a token sale?

A digital token (“token”) is an intangible, cryptographically secured asset (typically based on blockchain technology). It usually has a monetary value (based on a virtual currency exchange or cryptocurrency) and may entitle the token holder to certain rights (and potentially obligations and liabilities). Such rights and obligations may be set out in “normal” paper documents (such as an offering document or whitepaper) or may be included in a smart contract.
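
As a conceptual sketch only – the fields below are my own illustrative assumptions, not any existing token standard – a token’s entry on a ledger might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalToken:
    """Illustrative model of a digital token; the fields are assumptions
    for exposition, not a real token standard."""
    token_id: str
    issuer: str
    holder: str                                     # current owner's address
    rights: list = field(default_factory=list)      # eg access to a service
    obligations: list = field(default_factory=list)
    transferable: bool = True    # tradeable on a secondary market?
    redeemable: bool = False     # exchangeable back for money?

token = DigitalToken(
    token_id="TKN-0001",
    issuer="ExampleProjectLtd",
    holder="0xBUYER",
    rights=["access to the platform", "vote on project decisions"],
)
print(token)
```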

Tokens may be offered to raise funds for a project, in which case the token sale is labelled “crowdfunding”, or may give access to permanent rights and obligations, or even shares (or less regulated “units”) of a company, in which case the offering may be labelled an “ICO” (Initial Coin Offering).

Once purchased, the token may or may not be tradeable or – sometimes with limitations – exchangeable back for money.

As token sales gain momentum, aside from obvious financial considerations (extreme volatility) and common-sense assessment (whoever trades on the web needs to be able to recognize bogus offers such as Ponzi schemes), participants also need to be aware of a number of legal issues to consider when participating in a sale – including when to seek legal advice.

I set out below a few important points to consider, keeping in mind that the legal and regulatory landscape for cryptocurrencies and token sales is rapidly evolving, and practice is evolving with it. Recent token sales, for example, have restricted participation from certain jurisdictions which raise legal/regulatory issues, eg the US and Singapore.

Do your due diligence

What is the underlying project for which the token is sold? What value does it propose to bring, and who are or would be its customers? Is it technically sound, has it been economically analysed, does it have a specific timeline, and how is the issuer accountable for that timeline?

Is there an actual organisation behind the project? What kind of organisation is it – a company, a fund, a trust, a DAO? Who is the team behind the project? Do the individuals have a track record of successful projects? Is it a dedicated team or a “borrowed” team? Is the project seeking its first funds or does it have some institutional “real-life” investors?

Is the project legal? Is it based in a specific country (and is it legal in that country) or is it completely virtual? Even if it is completely virtual, where do the key participants in the project reside?

Is the project legal in your country? Is your participation in the project subject to approval, registration or a license?

And finally, what does the token do? What kind of rights does the token give access to? What kind of activities does it enable? Is it genuinely attached to a project or does it look like a Ponzi scheme?

Assess the documentation

The token sale will be described and offered through a document (whether an offer document, a white paper, or a descriptive section of the website). Representations and warranties will be asked of the buyer, eg as to his/her capacity to participate in the sale.

Some information included in the documents needs to be assessed carefully, in particular:

  • Whether and how you will be able to sell back the tokens
  • The existence of a lock-up period and what parameters it is tied to
  • Whether and how you will be able to trade the tokens to another investor (“secondary market”), always keeping in mind that this may attract other regulatory and legal issues
  • The issuer’s policies about data protection
  • The issuer’s cybersecurity policy
  • Termination events, or what happens if the project is interrupted.

Note that the more reputable issuers conduct anti-money laundering (AML) and know-your-customer (KYC) checks. It is always a good sign when AML/KYC procedures are set out, as it means the issuer is concerned about regulation.

Be aware of applicable (and evolving) regulations

The legal and regulatory landscape regarding token sales is uncertain and still evolving. A key concern so far has been money laundering and terrorist financing risks when transactions are anonymous (less so when the issuer carries out know-your-client procedures, see above), given the large quantities of funds raised and moved internationally within a short time.

Other regulations also apply: a number of regulations will apply to the issuer based on where the issuer’s organization is incorporated, and certain regulations, such as consumer protection and data protection laws, will apply in the jurisdictions where the project or the buyer is based.

Most jurisdictions do not (at this stage) regulate virtual currencies per se; however, a number of international securities authorities are studying how to regulate activities involving digital tokens which do not function exclusively as virtual currencies, such as when they represent ownership of, or a security interest in, an issuer’s assets or property, or represent a debt owed by an issuer, so that they may be considered a debenture under certain jurisdictions’ laws.

The Monetary Authority of Singapore (MAS), for example, clarified on 1 August that the offer or issue of digital tokens in Singapore will be regulated by MAS if the tokens constitute products regulated under the Securities and Futures Act (SFA).

A lack of compliance by the issuer with applicable regulations may be unsafe for the buyer as well: while liability for compliance may fall on the issuer, a lack of compliance may have consequences at best for the value of the token and at worst for the legality of the project.

A token sale may be suspended if it should have been approved or registered, and any token you have bought may become worthless. If the issuer is investigated, the project may be interrupted. The buyer may be subject to additional obligations that were not set out in the initial documents.

A final note about tax: tax advice should be obtained when trading tokens, as some jurisdictions have stringent capital controls that may apply to cryptocurrencies, and tax may be due on any capital gains arising from the tokens.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. This post is for information only and is not to be considered legal advice.

 

Self Aware Contracts enabled by Ledger Technology (or Smart Contracts on Blockchain)

On 17 July, Estonian legal tech Agrello went public with a cryptocurrency-based crowdfunding campaign structured as a token sale. Leaving aside the issues raised by token sales, which will be the subject of a separate post, it is interesting to explore the value proposition of Agrello’s business: a legal tech aiming to change the market for commercial contracts.

Agrello was founded by a team of Estonian lawyers, academics, and information technology experts, with the vision of creating digital contracts that will change the way contractual parties interact with each other and interface with legal authorities.

The Agrello framework proposes what it defines as “blockchain-driven, self-aware, agent-assisted contracts for a decentralized peer-to-peer economy”.

In plain words, smart contracts.

Agrello’s proposition is this: the traditional understanding of a conventional contract is an exchange of commitments by identified parties, enforceable by law and formalized by a written document as evidence. However, as the commitments formalized under the contract are performed, their status changes over time, and the agreement needs to be constantly updated to keep track of the evolving relationship between the parties – in particular, whether or not the parties have complied with their obligations under the contract.

A blockchain-based system would instead allow for an intelligent contract, which can keep track of the parties’ commitments and evolve over time. The blockchain is a ledger-based technology which enables a trustworthy collaborative process because no single entity is in control: information is recorded through a programming language in an irreversible manner and is confirmed once it is recorded in a number of different locations.
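
To illustrate why such records are effectively irreversible, here is a minimal sketch – my own simplification, not Agrello’s implementation – of a hash-chained ledger, where altering any past entry breaks every hash that follows it:

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash the record together with the previous entry's hash,
    chaining the entries together."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class MiniLedger:
    def __init__(self):
        self.chain = []  # list of (hash, record) pairs

    def append(self, record: dict) -> None:
        prev = self.chain[-1][0] if self.chain else "genesis"
        self.chain.append((entry_hash(prev, record), record))

    def verify(self) -> bool:
        """Recompute every hash; any tampered record invalidates the chain."""
        prev = "genesis"
        for h, record in self.chain:
            if entry_hash(prev, record) != h:
                return False
            prev = h
        return True

ledger = MiniLedger()
ledger.append({"event": "contract signed", "parties": ["A", "B"]})
ledger.append({"event": "rent paid", "month": "2017-08"})
print(ledger.verify())                        # True
ledger.chain[0][1]["parties"] = ["A", "C"]    # tamper with history
print(ledger.verify())                        # False
```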

The parties would record their interactions as they progress through the phases of negotiation and conclusion of the contract, performance and, eventually, termination – for example in the case of a tenancy agreement or a services agreement.

The idea is enticing, and a few law firms have already signed up for the beta version of the technology. The project is ambitious not only because it is new from both a technological and a cultural point of view, but especially because it aims to bridge a large and perilous language gap: that between the traditional legal industry and cutting-edge blockchain technology.

The technology would certainly be beneficial from certain points of view, but it will also need to address a number of difficult issues.

Benefits:

  • proof of action, in that the ledger can keep proof of payments made and actions taken – though only if they are digitally recordable
  • in traditional contracts, lawyers need to review the contract to check whether an obligation was performed, eg whether a deadline was missed. With a smart contract, the contracting parties and their lawyers no longer need to read and interpret the contract, as the software agent transforms the contract obligations into logical, machine-readable obligations (see the sketch after this list)
  • permanent archive accessible by the parties without the need to refer to physical archives or an individual’s memory
  • information about payments can be cross referenced directly into other relevant ledgers, such as the company’s financial records
  • no need for intermediaries in the management of the contract (provided that the contracting parties can use the technology)
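
As a minimal sketch of what a machine-readable obligation might look like – the structure and field names are my own assumptions, not Agrello’s data model – consider:

```python
from datetime import date

# Illustrative machine-readable obligations; field names are assumptions.
obligations = [
    {"duty": "pay rent for August", "obligor": "tenant",
     "due": date(2017, 8, 5), "performed_on": date(2017, 8, 3)},
    {"duty": "repair heating", "obligor": "landlord",
     "due": date(2017, 8, 20), "performed_on": None},
]

def status(obligation: dict, today: date) -> str:
    """What a software agent reports instead of a lawyer re-reading
    the contract: performed, pending, or in breach."""
    done = obligation["performed_on"]
    if done is not None:
        return "performed late" if done > obligation["due"] else "performed"
    return "in breach" if today > obligation["due"] else "pending"

today = date(2017, 8, 25)
for o in obligations:
    print(o["duty"], "->", status(o, today))
# pay rent for August -> performed
# repair heating -> in breach
```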

Issues:

  • monitoring communications and keeping a ledger of interactions might crystallize the relationship and create further problems in contexts where a fluid relationship would be preferable
  • the creation of smart contracts involves the use of a programming language (at this time, the language used for programming contracts is Solidity) which legal professionals do not understand, while programmers do not understand legal language. It will therefore be very difficult to translate legal concepts into a programming language; it will also be difficult to litigate these contracts before a court, or even an arbitrator, as the programming language used does not allow for articulate language or nuances of expression
  • the program risks becoming the judge of the contract and not only the keeper of the contract
  • the absence of nuances creates a crystallized relationship with no scope for human intervention in facilitating a soft resolution of problems

Other benefits and issues will surely arise with adoption of the technology. Certainly, blockchain ledgers applied to contracts have the potential to lower the costs and time spent on creating, updating and archiving relevant information. Automation also has a huge benefit in facilitating legal interactions and transparency, as information would be more easily available to interested parties, eg tax authorities. The risks are those generally explored around the limits of artificial intelligence, and the boundaries beyond which interactions facilitated exclusively by artificial intelligence can replace human judgment and human negotiation. The UK experiment of establishing online courts will run concurrently with smart contract technology in testing the limits of artificial intelligence applied to a legal context.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

 

Do the risks of AI derive from AI being more human than we think?

On 12 June, Magic Circle firm Slaughter and May, jointly with ASI Data Science, published a white paper (the “Paper”) titled Superhuman Resources – Responsible Deployment of AI in Business, dedicated to exploring the benefits, risks and potential vulnerabilities of AI.

The Paper is well written, well researched and very interesting, and I strongly recommend reading it.

It offers some history of AI and clear, simple definitions of much used (and abused) terms such as static v dynamic machine learning, general v narrow AI, and supervised v unsupervised machine learning. It defines AI as a “system that can simulate human cognitive processes” and even goes as far as to define the human brain as a “computer on a biological substrate”.

The Paper then goes on to identify the already well-known potentials of AI – the ability to synthesize large volumes of data, adaptability and scalability, autonomy, consistency and reliability – as well as the less acknowledged potential for creativity. Machines have in recent years shown a surprising ability to produce creative works in art, writing and design. News-writing bots such as the Washington Post’s AI journalist Heliograf have already made their appearance, and works of AI-produced visual and expressive art have been widely published.

The key message of the Paper, however, is that while commentary on and investment in AI and machine learning have so far focused on the potential upsides, a critical point (according to the authors) is missing: AI can “only ever be useful if it can be deployed responsibly and safely”. The Paper then identifies, and analyses in some depth, six categories of risk: 1. failure to perform; 2. social disruption; 3. privacy; 4. discrimination (ie data bias); 5. vulnerability to misuse; and 6. malicious re-purposing.

Although the analysis is excellent, I do not agree with the Paper’s claim that the risks of AI have so far been ignored. On the contrary, the risks of AI have often been magnified, not least in innumerable well-constructed science fiction movies, and regulators are not oblivious to AI. As mentioned in a recent post, the EU Parliament has already set out, albeit not with such a rigorous scientific approach and structure, a number of legal issues which AI raises and will raise, and which need to be addressed for reliability and safety (see my earlier post Why Robots Need a Legal Personality).

What struck me most about the Paper, however, was that in providing a structured and detailed list of potential risks and pitfalls of AI, it also highlights that the key issue underlying them all derives from the fact that automated processes, as the report itself says quoting Ian Bogost [2015, The Cathedral of Computation], “carry an aura of objectivity and infallibility” – yet AI is not infallible. Specifically, it is not infallible if the system fails (failure to perform), if it handles data disregarding privacy laws, or if it makes decisions having been exposed to data which is limited or biased. AI can also create – like any innovation – social disruption; it could be manipulated [by humans] for malicious purposes, or it could be maliciously re-purposed in the wrong hands.

So we learn that AI is subject to system errors and application errors, may make wrong judgments if it is exposed only to limited data, and may be subject to manipulation.

In essence, the key risks of AI derive from the fact that AI behaves more like a human than humans would think or hope! But then, if AI is a “system that can simulate human cognitive processes”, wouldn’t fallibility be an intrinsic characteristic of AI, only perhaps with a degree of fallibility lower than that of humans?

The Paper recommends that businesses “be forward thinking and responsible” in designing and deploying AI which by design has systems that mitigate or restrict potential negative effects, and in particular that they set out procedures such as risk assessments and a risk register, monitoring and alerting, audit systems to determine causal processes, and accountability frameworks for algorithms.

This is all very valuable and very important. I still believe, however, that it misses the key point, exciting and risky at the same time: by allowing a machine to develop cognitive processes (and I deliberately use the word “develop”, as I am not sure “simulate” would correctly represent machine learning or creative expression), a new intelligent entity is created, which at some point will, or might, develop to the point of no longer being limited to an instrument in the hands of humans acting on human instructions, but will, or at least might, be self-directed. As I pointed out in my earlier post AI legal issues, one of the key issues relating to the next AI generation is that it will have the ability not just to operate on big data using algorithms designed and built in by humans, but to create its own algorithms.

And this brings me back to my original point: one of the key legal issues to be addressed is to determine in which cases, and at which point of development, an AI needs to be provided with a legal personality.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

 

 

Big Data and the Music Paradox

It is recent news that Spotify has settled a complex licensing dispute involving mechanical licensing rights, allegedly in order to clean up its affairs in view of a prospective IPO.

Under US law, mechanical rights must be obtained (in addition to performance rights) when reproducing a piece of music onto a physical or digital support. It is not clear whether Spotify willingly avoided paying mechanical rights or – as it claimed – it lacked the data necessary to sort out which publishers had legitimate claims over songs (there is no central, reliable database covering all music rights to all songs).

Music licensing is complex and disarticulated. Different countries have different nuances of copyright applicable to music (and different ways and means of collecting royalties). In some countries, particularly the US, the music publishing sector has traditionally licensed performing rights and mechanical rights separately, through different entities. This means that music distributors need to have licences covering both the song and the recording, and both performing and mechanical rights. In the US this issue is partially addressed through a compulsory licence covering mechanical rights, with a pre-set statutory rate to be paid, so streaming services are not required to negotiate terms and price with each right holder. However, the owner often cannot be identified: while there are collecting societies that license performing rights, mechanical rights are not represented by a single society, nor is there a single publicly accessible database providing this information – therefore, according to Spotify, making it impossible (or too burdensome) for a streaming music provider to comply with mechanical rights obligations for all songs.

Whatever the reasons for Spotify’s legal lapse, it is certainly a fact that digital distribution of music, and particularly streaming, needs to take a further leap forward in the ongoing legal catch-me-if-you-can race which has been running for the past 20 years – since the days of Napster in 1999.

I have been a late adopter of Spotify, but its theme-based compilations, and especially its running compilations which select songs matching your personal running beat, were recently a revolutionary discovery for me. As a lawyer and a mom of two boys, time for listening to music – and especially time for discovering new music and updating my playlists – was one of the first things to disappear from my schedule, with the effect that music slowly started disappearing from my life. Yet discovering and enjoying new music had always been one of the most fundamental and joyous artistic experiences for me.

Then I discovered Spotify. Just this morning, while running to one of Spotify’s compilations which offers songs matching the user’s running beat, I listened to about 15 songs that I had never heard, by artists I had never heard of. This certainly was not possible in pre-digital times, when buying a tape or a CD was so expensive that you would listen to the same music or playlist over and over again for months on end, until a boyfriend or girlfriend introduced you to some new playlist of theirs by copying it onto tape or CD. And you would listen to that for months on end. Nor was it possible in the iTunes years: iTunes made buying music affordable, but in order to listen to a song a user still had to know it, select it and download it. The only way to discover new music was the radio, with advertising and limited choice and customisation, as songs were selected by a human mind – the radio host.

With services like Spotify, music enters the realm of big data: a seemingly infinite number of music pieces are available, and playlists for all tastes, moods, desires and functional needs are created by algorithmic configurations. This is a radical change in the dynamics of the music industry – in particular, the relationship between listeners and music is revolutionized. The magic of the data-driven approach applied to streaming is that music is available in such quantity and variety that, paradoxically, the relationship between user and music becomes disintermediated and direct, because the algorithmic data stream allows for more nuanced experiences and choices.

The consequence is that an average user can access music and artists that s/he would perhaps never have considered before – and at the same time artists that would have had no clout are heard by a greater audience. This is certainly a boost for the music industry, and for any musician who wishes to expand the reach of her/his music. At the same time, disintermediation necessarily bypasses those structures that were put in place in previous eras to protect legal interests.

It is certainly necessary to create new structures where music can be discovered without getting caught up in legal tangles, while at the same time compensating artists for beautiful music.

As data-driven services emerge in the music industry, a data-driven approach needs to be adopted by music societies as well. I imagine a universal music society with a database to which artists and music labels sign up, where songs and recordings are matched by a Shazam-type service (with conflicts resolved through an online dispute resolution service). All streaming and downloading services would link to this database, and payment to the relevant right holder would be automatic and immediate. I would go further and imagine different levels of payment, where for example new songs by unknown artists are remunerated based on a rating by listeners, so that copyright compliance can be a boost to music discovery rather than a gateway to its distribution.
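
As a sketch of this imagined system – every name, share and matching step below is hypothetical, since no such universal service exists – the automatic payment could work roughly like this:

```python
# Hypothetical universal rights database: fingerprint -> rights holders
# and their revenue shares. All entries are invented for illustration.
RIGHTS_DB = {
    "fp:9a3e": {"song": "Example Song",
                "holders": {"writer": 0.5, "publisher": 0.25, "label": 0.25}},
}

def fingerprint(audio: bytes) -> str:
    """Stand-in for a Shazam-type acoustic fingerprinting service."""
    return "fp:9a3e"  # hypothetical match

def pay_per_stream(audio: bytes, royalty: float) -> dict:
    """Split one stream's royalty automatically among registered holders."""
    entry = RIGHTS_DB[fingerprint(audio)]
    return {holder: round(royalty * share, 6)
            for holder, share in entry["holders"].items()}

print(pay_per_stream(b"...", royalty=0.004))
# {'writer': 0.002, 'publisher': 0.001, 'label': 0.001}
```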

© Stefania Lucchetti 2017.  For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

AI legal issues

At WiredNextFest, held in Milan these past few days, there were conversations about the legal issues arising from the development of AI. A key point was missing, however: one of the key issues relating to the next generation of AI, particularly as regards liability, is that it will have the ability not just to operate on big data based on algorithms built in by the manufacturer, but to create its own algorithms.

https://lawcrossborder.com/2017/05/22/why-robots-need-a-legal-personality/

A proposal for a Family Business Corporate Governance Code

On 23 May, at Bocconi University’s campus, AIdAF-EY and Bocconi University presented their proposal for a Family Business Voluntary Corporate Governance Code.

The Code was prepared by prof. Alessandro Minichilli and prof. Maria Lucia Passador.

Adherence to the Code would be voluntary. Its purpose is to create a reliable governance structure for non-listed family businesses. In recent years, several European countries, such as Belgium, Spain and Finland, have promoted corporate governance codes for non-listed companies. The IFC (World Bank Group) published a Family Business Governance Handbook in 2011.

The benefits of adopting a reliable and transparent corporate governance structure are numerous for a non-listed company. Such a structure not only protects the company’s business and assets; it also makes the company more reliable to third-party business partners and potential investors, especially international ones. It also attracts outside talent, as outside managers are often reluctant to join a close-knit family business.

This is very good news, especially in Italy, where family businesses contribute 94% of national GDP (source: FFI Datapoints) yet often struggle with succession planning and expansion – challenges which transparency and clear rules can help address.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

Cross Border M&A – when to opt for a minority stake in a cross border joint venture

Whether your company has engaged in successful joint venture activities for years or is new to joint ventures, there is always an element of uncertainty when deciding to enter into a cross-border joint venture, whether the objective is to expand the reach and distribution of the company’s products and services in highly developed countries or in emerging markets.

Whatever the key strategic purpose your company wants to achieve, there are two options for entering into a joint venture, which create different outcomes and specific governance issues: whether the joint venture is established through the acquisition of an existing company or the set-up of a joint venture vehicle, your company may opt either for a majority stake or for a minority stake.

The decision to opt for a minority stake may be driven by various factors, including the power relationship with the JV partner.

In emerging markets, this choice is often driven by two key considerations:

Regulatory Constraints – Regulatory constraints in specific markets may cause the foreign investment to be restricted to minority investment levels.

Commercial Credibility – Accepting a minority stake may also reflect the need and strategy to enter the market with a credible local JV partner which has already established scale and reputation. This is an advantage where the JV partner operates solidly and effectively in the emerging market environment and has established government and public policy relations.

Aside from the above, this strategy may be useful or necessary for purely strategic purposes, where the JV partner has the commercial lead in the JV, for example because it has proprietary technology, key products, or a client base/distribution platform on which your company heavily relies.

When deciding to enter into a JV where your company will hold a minority stake, the key point to consider is that this structure requires your company to be prepared to rely more heavily on the JV partner’s capacity to lead the joint venture and achieve common objectives.

In this circumstance, one of the key issues to be addressed is the establishment of minority protections both at the shareholders meeting and board level. This needs to be done by carefully negotiating and drafting a shareholders’ agreement and ancillary documents which include such protections, in order to achieve a governance structure that balances the powers of the JV partners to achieve the desired objectives.

© Stefania Lucchetti 2017. For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation.