Why Artificial Intelligence Will Need a Legal Personality

The development of robotics and artificial intelligence (AI) is an exciting, relentless reality that is slowly making its way out of science fiction and into our everyday world.

People and technology are also increasingly interacting at an individual, daily level.  These growing occasions of interaction between humans and AI systems hold great potential not only for economic growth but also for individual empowerment, as explained in the January 2017 McKinsey Global Institute report, which interestingly finds that almost every occupation has partial automation potential, but that it is individual activities, rather than entire occupations, that will be most affected by automation.  Consequently, it concludes that realizing automation’s full potential requires people and technology to work hand in hand.

This interaction, however, triggers a complex set of legal risks and concerns, as well as ethical issues.

The key legal issues to be addressed with some urgency are human physical safety, liability exposure and privacy/data protection.

Ethical concerns cover the dignity and autonomy of human beings and include not only the impact of robots on human life but also, conversely, the ability of the human body to be repaired (such as with bionic limbs and organs), then enhanced, and ultimately created by robotics, and the subtle boundaries that these procedures may push over time.

The current legal frameworks are by definition not wired to address the complex issues raised by AI. The consequence of this is the need to find a balanced regulatory approach to robotics and AI developments that promotes and supports innovation, while at the same time defining boundaries for the protection of individuals and the human community at large.

In this respect, on 31 May 2016 the European Parliament (“EP”) issued a draft report on civil law rules on robotics. The report outlines the EP’s main framework and vision on the topic of robotics and AI.

While the report is still speculative and philosophical, it is very interesting – especially where it defines AI, and therefore “smart robots” as machines having the following characteristics:

  • The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data
  • The capacity to learn through experience and interaction
  • The form of the robot’s physical support
  • The capacity to adapt its behaviours and actions to its environment.

The EP’s report also broadly defines six key regulatory themes which are raised by developments in the area of robotics and AI:

  • rules on ethics;
  • rules on liability;
  • connectivity, intellectual property, and flow of data;
  • standardisation, safety and security;
  • education and employment;
  • institutional coordination and oversight.

The report concludes that the implications of these technologies are necessarily cross-border, and that it would therefore be a waste of resources and time for each individual country to set out its own rules; it accordingly recommends a unified EU regulation.

Truly, the implications are cross-border and require a collaborative effort. It is wise to presume, however, that certain countries will be more open-minded and flexible than others in defining the limits of AI autonomy, or more restrictive in setting out its boundaries, and it may also be inevitable that certain countries will lead the way in regulating AI and robotics.

The policy areas where, according to the EP’s position, action is necessary as a matter of priority include: the automotive sector, healthcare, and drones.

The Liability Issue

The increased autonomy of robots raises, first of all, questions regarding their legal responsibility. At this time, robots cannot be held liable per se for acts or omissions that cause damage to other parties: they are machines, and liability therefore rests with the owner or, ultimately, the producer.

When pointing out the automotive sector as an urgent area needing regulation, the committee was certainly thinking of self-driving cars, which are already being tested in California; a driverless car trial is set for UK motorways in 2019, and government funding has been dedicated to research on autonomous cars. In September 2016, Germany’s transport minister proposed a bill to provide a legal framework for autonomous vehicles which assigns liability to the manufacturer.

However, in a scenario where a robot can take autonomous decisions, the traditional owner/manufacturer liability chain is insufficient to address the complex issue of a robot’s liability (both contractual and non-contractual), since it would not correctly identify the party that should bear the burden of providing compensation for the damage caused. This civil liability issue is considered “crucial” by the committee.

Data protection and intellectual property rights

Other key issues in relation to developments in robotics are the rules on connectivity and data protection.  While existing laws on privacy and the use of personal data can be applied to robotics in general, practical applications may require further consideration, e.g. standards for the concepts of “privacy by design” and “privacy by default”, informed consent and encryption, as well as the use of personal data both of humans and of intelligent robots that interact with humans.

Intellectual property rights are also to be considered if one wants to go as far as to accept that there will be at some point a need to protect the “own intellectual creation” of advanced autonomous robots.

One proposal to address these issues has been to assign robots an “electronic” personality.

A Proposal

The EP’s report recommends that the EU Commission explore the implications of all possible legal solutions, including the creation of a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as electronic persons with specific rights and obligations, including that of indemnifying any damage they may cause. Electronic personality would apply to cases where robots make smart autonomous decisions or otherwise interact with third parties independently.

While this is a good idea, it might take time before it is applicable to all robots: for a robot to have the status of an “electronic person”, its autonomous capabilities would need to be particularly advanced.

One could imagine a liability regime where liability is proportionate to the actual level of instructions given to the robot and to its degree of autonomy, so that the greater a robot’s learning capability or autonomy, the lower the other parties’ responsibility should be, taking into account the kind of development the robot has had and the kind of instructions or “education” it has received.

However, it would not always be easy to discern skills resulting from the ‘education’ given to a robot from skills depending strictly on its self-learning abilities.  This implies that, when trying to identify responsibility, there would be huge grey areas.

A middle-level solution is needed for those situations where a robot is capable of autonomous learning and decisions but is apt only for specific uses and not yet sophisticated enough to be endowed with the status of electronic person, such as, perhaps, an autonomous car.

I believe instead that one possible solution could be to provide each AI with a legal personality akin to that currently afforded to corporations.

The benefits of this would be:

– registration/incorporation of the robot;

– a head of responsibility, with specific rules, and an entity to be considered for liability and insurance purposes;

– the ability to enter into contracts with each other and with humans, with specific responsibilities arising out of the breach of such contracts.

One downside is that this type of legal status still requires an owner (a “shareholder”) with limited liability, which means that ultimate responsibility, although limited, would not necessarily be placed on the manufacturer but on the owner, thereby returning to a position of insufficient protection. However, in the case of autonomous cars, for example, the owner of the car could be considered the holder of the legal entity, with limited liability and an obligation to insure the vehicle.

Clearly, the topic still needs to be explored, and possible solutions will evolve with time as practical problems arise and AI develops, but I believe that, at this time, this might be the best solution to put forward to address current concerns related to AI as we know and understand it.  Ultimately, perhaps, it will be AI itself that proposes a solution.

[Stefania Lucchetti was also quoted on her views on AI in https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/]


© Stefania Lucchetti 2017.  For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation. 

Cross Border M&A: when to opt for a minority stake in a cross border joint venture

Whether your company has engaged in successful joint venture activities for years, or it is new to joint ventures, there is always an element of uncertainty when deciding to enter into a cross border joint venture, whether the objective is to expand reach and distribution of the company’s products and services in highly developed countries, or in emerging markets.

Whatever the key strategic purpose your company wants to achieve, there are two options for entering into a joint venture – which create different outcomes and specific governance issues. Whether the joint venture is established through an acquisition of an existing company or the set up of a joint venture vehicle, your company may either opt for a majority stake or a minority stake.

The decision to opt for a minority stake may be driven by various factors, including the power relationship with the JV partner.

In emerging markets, this choice is often driven by two key considerations:

Regulatory Constraints – Regulatory constraints in specific markets may cause the foreign investment to be restricted to minority investment levels.

Commercial Credibility – Accepting a minority stake may also reflect the need and strategy to enter the market with a credible local JV partner which has already established scale and reputation. This brings along an advantage where the JV partner is operating solidly and effectively in the emerging market environment, and has established government and public policy relations.

Aside from the above, this strategy may be useful or necessary for pure strategic purposes, where the JV partner has the commercial lead in the JV for example because it has proprietary technology, key products or client base/distribution platform which your company heavily relies on.

When deciding to enter into a JV where your company will hold a minority stake, the key point to consider is that this structure requires your company to be prepared to rely more heavily on the JV partner’s capacity to lead the joint venture and achieve common objectives.

In this circumstance, one of the key issues to be addressed is the establishment of minority protections both at the shareholders meeting and board level. This needs to be done by carefully negotiating and drafting a shareholders’ agreement and ancillary documents which include such protections, in order to achieve a governance structure that balances the powers of the JV partners to achieve the desired objectives.

© Stefania Lucchetti.  This note does not purport to give legal advice. For further information or advice tailored to your situation, contact the author.


Italian law gives legal value to blockchain and smart contracts


A leap forward has been made by Italian Law No 12/2019 (the “Law”), published on 11 February 2019, which completed the conversion procedure of Law Decree No 135/2018, better known as the Decreto Semplificazioni. The Law introduces a definition of distributed ledger technologies and smart contracts and sets out the legal effects deriving from the adoption of such technologies.

Distributed ledger technologies (DLT), including blockchain, are defined by the Law as “technologies and information protocols that use a shared, distributed, replicable, simultaneously accessible, architecturally decentralized registry on a cryptographic basis, such as to allow registration, validation, updating and archiving of data, both in clear and further protected by cryptography, that are verifiable by each participant, are not alterable and not modifiable”.

The Law further sets out the legal effects arising from the adoption of such technologies, stating that storing a digital document in a DLT produces the legal effects of an “electronic time stamp” under Article 41 of Regulation (EU) No 910/2014 on electronic identification (the so-called eIDAS Regulation), which reads that “an electronic time stamp shall not be denied legal effect and admissibility as evidence in legal proceedings solely on the grounds that it is in an electronic form or that it does not meet the requirements of the qualified electronic time stamp”.

Through this reference to the eIDAS Regulation, digital documents stored in a distributed ledger may be more widely used as evidence in legal proceedings, allowing the technology to be put to use as proof in those circumstances where it is fundamental to have proof of the date and time of a certain activity.
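As a purely illustrative sketch of the mechanics behind such evidence (not a statement of how any specific ledger works), a document is typically reduced to a fixed-size cryptographic fingerprint before being anchored on a DLT; the function name and record structure below are hypothetical:

```python
import datetime
import hashlib

def fingerprint_document(path: str) -> dict:
    """Compute a SHA-256 digest of a document: the kind of fixed-size
    fingerprint that would be recorded on a distributed ledger, together
    with a timestamp, to evidence the document's existence at that time."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "sha256": digest,
        # In real DLT anchoring the timestamp would derive from the
        # ledger's consensus, not from the local clock (assumption here).
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```

Verifying the document later simply means recomputing the digest and comparing it with the one recorded on the ledger; any alteration of the document changes the digest.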

The Law goes on to define smart contracts as “computer programs that operate on distributed registers-based technologies and whose execution automatically binds two or more parties according to the effects predefined by said parties”.  It also establishes that smart contracts satisfy the requirement of written form, which is set out by Italian law for certain types of transactions and contracts.

Within 90 days from the entry into force of the Law, the public agency Agenzia per l’Italia Digitale (AgID) will have to lay down the technical standards that distributed ledger technologies must meet in order to produce the legal effects described above.


Stefania Lucchetti


Data Protection: the EU Commission recognizes Japan as having adequate levels of data protection

On 23 January 2019, the EU Commission announced its decision that Japan ensures an adequate level of data protection; as a consequence, it is now possible to freely transfer personal data from the EU (and the EEA) to Japan without any further requirements.

Transfers of personal data originating from the European Economic Area (“EEA”) to third countries are regulated by the General Data Protection Regulation (EU) 2016/679 (“GDPR”).

Non-EU countries may be recognised by the Commission, following a specific procedure, as offering data protection equivalent to the GDPR. At this time, only a select few countries have been recognised as such.

This recognition means that the recognised country is deemed equivalent to an EU Member State in relation to personal data transferred to it; cross-border transfers of personal data to such a country are therefore no longer subject to the requirements of Articles 46 and following of the GDPR (such as the adoption of binding corporate rules or the execution of the standard contractual clauses adopted by the EU Commission) that apply to non-EU countries which have not obtained such recognition.

Recognition takes the form of an “adequacy decision” adopted by the EU Commission on the basis of Articles 45 and 93 of the GDPR. The decision acknowledges that a third country, a territory or specified sectors within a third country ensure an adequate level of data protection. (The adequacy decision is relevant for the entire EEA, which means that the three EEA Member States (Iceland, Liechtenstein and Norway) are also bound by it, on the basis of the Joint Committee Decision (JCD) adopted on 6 July 2018, which incorporates the GDPR into Annex XI of the European Economic Area Agreement.)

EU Data Centers and Cross Border Transfers of Personal Data

In the wake of no-deal Brexit headaches, a number of international groups are asking for advice on cross-border transfers and on how best to decide where to establish a data center.

If a company locates its data center in an EU country (i.e., looking forward, not in the UK), the flow of personal data from the EU to the UK (which after Brexit will be considered a “third country”) will be authorized only in the presence of an adequacy decision of the European Commission or of other safeguards.

The EU Commission has stated that, if it deems the UK’s level of personal data protection essentially equivalent to that of the EU, it will adopt an adequacy decision allowing the transfer of personal data to the UK without restrictions. However, the Commission has not yet indicated a timetable, and it has also stated that the adequacy decision cannot be taken until the UK is a third country.

If the European Commission does not adopt an adequacy decision regarding the UK before or at the moment of exit, a legal basis for transfers from the EU to the UK must be identified. In this respect, it must be noted that the European Commission has not yet released the new standard contractual clauses (the clauses released under Directive 95/46 can, however, still be used) and that binding corporate rules (“BCR”) must be approved by the competent authority, an approval which may take some time.

These two instruments (standard contractual clauses and binding corporate rules), which are the most used for cross border transfers, are different and must be used in different contexts, so the specific situation must be assessed.

Setting up a data center in an EU country rather than in the UK (e.g. in Italy) could have some advantages; the most appropriate instrument for cross-border data transfers will then have to be assessed.

Milan, 23 January 2019

This note is for information purposes only and it is not to be intended as legal advice. For any further information or to receive advice tailored to your situation, please contact us.


Stefania Lucchetti

Online Education and Application of the EU GDPR

Online Education

Nowadays, education programs, especially at university and postgraduate level, are increasingly international.

Universities, business schools and other education institutions now frequently offer masters and other study programs all over the world, without necessarily having schools and premises in every country where courses are offered.

Often, education is in fact provided partially or solely online, through distance learning programs. This is a huge opportunity for students to have access to international programs without having to relocate and for education institutions to expand their reach.

Applying for a distance learning program implies that the prospective student provides the education institution with personal information. A huge quantity of personal data is therefore processed in this context (e.g. name, address, email, phone number, academic history, etc.), which raises the question of which regulation applies to the protection of such personal data and, in particular for our purposes, in which cases European Regulation 2016/679 (the General Data Protection Regulation, “GDPR”) applies.

Territorial scope of the GDPR

The scope of territorial application of the GDPR is set out in Article 3 which provides that the regulation applies:

  1. to the processing of personal data in the context of the activities of an establishment of the controller or of the processor in the European Union, regardless of whether the processing takes place in the European Union or not; and
  2. to the processing of personal data of data subjects who are in the European Union by a controller or a processor not established in the European Union, where the processing activities are related to:
  • the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the European Union; or
  • the monitoring of their behaviour as far as their behaviour takes place within the European Union.

The key terms are clarified below:

  • “controller” is the subject determining the purposes and means of the processing of personal data, while the “processor” is the subject processing personal data on behalf of the controller;
  • “establishment” implies the effective and real exercise of activity through stable arrangements (e.g. branches or subsidiaries);
  • “who are in the EU”: it will be up to future case law to interpret the scope of this phrase, but we can reasonably foresee an interpretation based on the residency or domicile of the data subject in the EU;
  • “offering goods or services” requires more than mere access to a website or email address, and might be evidenced by: the use of a language or currency generally used in an EU Member State with the possibility of ordering goods/services there; the use of advertising targeting an audience in the EU (for instance, paying a search engine to facilitate access by those within an EU Member State); the use of a top-level domain name other than that of the state in which the company is established (e.g. xxxx.it or xxxx.eu), etc.;
  • “monitoring” specifically includes the tracking of individuals online to create profiles, including where this is used to take decisions to analyse or predict personal preferences, behaviours and attitudes, or to provide online behavioural advertising.

Examples of possible application of the GDPR

In light of the territorial scope of the GDPR, here below are a few examples of possible application or non-application of the GDPR to education institutions processing personal data, possibly also through distance learning systems.

Application of the GDPR to data processing carried out by the organization:

  • Italian university providing courses in Italy, also online, both to EU and non-EU students: Yes
  • UK university providing summer courses in the premises of a local academic institution in France, both to EU and non-EU students: Yes
  • Chinese university providing courses in its premises in China, also to EU students: No
  • Chinese university providing online courses also to students resident in the EU: Yes
  • Chinese school providing language courses in premises located in Germany to German and other EU students: Yes
  • US university providing online masters also to EU students resident in the EU: Yes
  • Australian business school providing online MBA to Chinese students: No
  • US online education platform processing data of EU students for profiling purposes: Yes
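The reasoning behind these examples can be sketched as a simple decision rule (a deliberate simplification for illustration only; real-world analysis is fact-specific and the function name and parameters are hypothetical):

```python
def gdpr_applies(establishment_in_eu: bool,
                 offers_to_subjects_in_eu: bool,
                 monitors_behaviour_in_eu: bool) -> bool:
    """Rough sketch of the GDPR's territorial scope under Article 3:
    the Regulation applies if the controller/processor has an EU
    establishment (Art. 3(1)), or, absent one, if it offers goods or
    services to data subjects in the EU or monitors their behaviour
    within the EU (Art. 3(2))."""
    return (establishment_in_eu
            or offers_to_subjects_in_eu
            or monitors_behaviour_in_eu)

# Chinese university teaching only on its own premises in China:
assert not gdpr_applies(False, False, False)
# US university offering online masters to students resident in the EU:
assert gdpr_applies(False, True, False)
# US platform profiling EU students online:
assert gdpr_applies(False, False, True)
```

The Australian business school example above maps to all three inputs being false (no EU establishment, courses offered only to Chinese students, no EU monitoring), hence the GDPR does not apply.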

 GDPR compliance program

In order to comply with the GDPR, should it be applicable, education institutions will need to take numerous steps. The aim of this short paper is not to provide an exhaustive checklist of all the controller’s GDPR compliance activities, but to raise awareness as to the activities required, which can be summarized as follows:

  • designating people in charge of addressing privacy matters within the organization;
  • designating a Data Protection Officer (DPO), where required under Article 37 of the GDPR (e.g. where the processing is carried out by a public body or the processing operations require regular and systematic monitoring of data subjects on a large scale) or where considered useful by the organization;
  • drafting an adequate set of privacy policies on the basis of the different processing activities and the different data subjects (e.g. resident students, foreign students, clients and suppliers, etc.);
  • defining data retention periods for each processing purpose;
  • collecting from data subjects consent to the processing of their personal data where there is no other possible/appropriate legal basis for processing (e.g. a contractual obligation, a legitimate interest, etc.); the consent of the student’s parents is necessary if the student is below the age of 16;
  • preparing and constantly updating a record of processing activities (necessary, under Article 30 of the GDPR, only where the education organization employs more than 250 persons);
  • implementing appropriate technical and organisational measures to ensure a level of security appropriate to the risk;
  • carrying out an assessment of the impact of the envisaged processing operations on the protection of personal data, should Article 35 of the GDPR be applicable (e.g. in the event of a systematic evaluation of personal aspects based on automated processing, including profiling);
  • preparing training programs for the organization’s employees involved in processing operations;
  • creating a procedure for data breach management;
  • drafting data controller/data processor agreements where processing is carried out on behalf of the education institution by a data processor;
  • adopting appropriate safeguards for transferring personal data outside the EU, as provided by Article 46 of the GDPR.
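To make the record-of-processing-activities step more concrete, here is a minimal, hypothetical sketch of what one Article 30 record entry might capture for an education institution (the field names are assumptions, a subset of what the Article actually requires, and the example values are invented):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingActivity:
    """Minimal, illustrative subset of an Article 30 GDPR record entry."""
    purpose: str
    categories_of_data_subjects: list
    categories_of_personal_data: list
    recipients: list = field(default_factory=list)
    retention_period: str = ""          # ties into the retention-period step
    security_measures: str = ""         # ties into the security-measures step

# One hypothetical entry in the institution's record:
record = [
    ProcessingActivity(
        purpose="Student enrolment and course administration",
        categories_of_data_subjects=["prospective students", "students"],
        categories_of_personal_data=["name", "email", "academic history"],
        retention_period="10 years after course completion",
        security_measures="encryption at rest, role-based access control",
    )
]
```

Keeping such entries in a structured, machine-readable form makes it easier to keep the record "constantly updated", as the checklist above requires.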

Additional obligations provided by national law

The GDPR is directly applicable in all EU Member States, however the national laws of each EU Member State may provide for specifications and restrictions of European rules.

For example, specifically with regard to the matter at hand, Italian law on data protection (so called “Privacy Code”, Legislative Decree No. 196/2003), as recently amended by  Legislative Decree No. 101/2018 aimed at harmonizing Italian law with the GDPR, provides the following specific rule on the processing of students’ personal data: in order to facilitate education and the access to employment, also abroad, national education institutions, including private schools and universities, may – upon students’ explicit requests – communicate to third parties, also online, students’ data relating to marks and education results and other personal data, excluding however special categories of data (e.g. data concerning health, political opinions, religious beliefs, etc.) and data relating to criminal convictions.

The above is in any event subject to: (a) the education institution having provided an adequate information notice to the student; and (b) data being processed exclusively for the purposes of facilitating education and the access to employment.

Conclusions

Personal data collected and processed by a university, a school or by any other education institution in the context of its learning programs represent valuable assets: as such, they need to be carefully protected.

A GDPR compliance program is certainly a substantial commitment for European organizations and for the foreign organizations subject to the new rules; however, these organizations need to be mindful that non-compliance with the applicable rules may lead to substantial sanctions and reputational damage.

Milan, 17 September 2018

This note is for information purposes only and it is not to be intended as legal advice. For any further information or to receive advice tailored to your situation, please contact us.

Stefania Lucchetti    Pietro Boccaccini

EU GDPR: Consent to Process Personal Data

https://www.kwm.com/en/it/knowledge/insights/eu-gdpr-consent-to-process-personal-data-20180801

EU companies, and non-EU companies offering goods or services to individuals in the EU, which process personal data need to comply with the provisions introduced by European Regulation 2016/679 (General Data Protection Regulation, “GDPR”) in this respect. Consent of the data subject is one legal basis for data processing, but not the only one, and companies will therefore need to carefully evaluate which is the most appropriate legal basis for a given processing activity.

This note focuses on consent, and in particular on the numerous consent requirements set forth by the GDPR.

A key business issue for companies whose database is a valuable business asset is whether consent to process data obtained before the GDPR became applicable is still a valid ground to process data, e.g. for marketing purposes.  This note will address this issue as well.

Consent as a legal basis for data processing

The GDPR has introduced new requirements in relation to one of the most used bases for lawfully processing personal data: the data subject’s consent.

It should be noted at the outset that, pursuant to Article 6 of the GDPR, processing of personal data is lawful not only if the data subject has given consent to the processing of his or her personal data for one or more specific purposes, but also where processing is necessary:

  • for the performance of a contract to which the data subject is party;
  • for compliance with a legal obligation to which the controller[1] is subject;
  • in order to protect the vital interests of the data subject;
  • for the performance of a task carried out in the public interest;
  • for the purposes of the legitimate interests pursued by the controller.

Before starting any activity that involves processing of personal data, a controller must consider the appropriate lawful ground for the envisaged processing. In general, consent can be an appropriate lawful basis if the data subject is offered the possibility to freely accept or refuse the terms offered.

Consent obtained before GDPR became applicable

According to Recital 171 of the GDPR “where processing is based on consent pursuant to Directive 95/46/EC, it is not necessary for the data subject to give his or her consent again if the manner in which the consent has been given is in line with the conditions of this Regulation, so as to allow the controller to continue such processing after the date of application of this Regulation”.

In light of the above, if a company obtained, prior to 25 May 2018 (the date on which the GDPR became applicable), the consent of certain data subjects in the manner requested by the GDPR, it can continue to lawfully process the personal data of those data subjects. Should that not be the case, the company will need to obtain new consent.

If not obtained in full compliance with the GDPR, consent is an invalid basis for processing, rendering the processing activity unlawful. If, for instance, a company collected only one consent for different processing operations (which is quite common, in practice), this would not be in line with the “granularity” requirement (see paragraph below on this topic).

As outlined by the Article 29 Working Party[2], consent given before the GDPR became applicable through an implied form of action is no longer valid, given that the GDPR requires that consent be given through a “statement or a clear affirmative action” by the data subject. Therefore, for example, consent obtained with a pre-ticked opt-in box would not be valid.

In order to comply with the GDPR’s standards, operations and IT systems may also need revision. For instance, mechanisms for data subjects to easily withdraw their consent must now always be available. If existing procedures for managing the obtainment and withdrawal of consent do not meet the GDPR’s standards, controllers will need to refresh them.
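As an illustrative sketch of what such a revised system might track (not a statement of any required implementation; the class and method names are hypothetical), a controller-side consent register could record one entry per data subject per purpose, satisfying granularity, demonstrability and easy withdrawal:

```python
import datetime

class ConsentRegister:
    """Hypothetical controller-side consent register: one entry per
    (data subject, purpose) pair ("granularity"), timestamped so the
    controller can demonstrate consent, with an easy withdrawal path."""

    def __init__(self):
        self._entries = {}  # (subject_id, purpose) -> consent metadata

    def give(self, subject_id: str, purpose: str) -> None:
        self._entries[(subject_id, purpose)] = {
            "given_at": datetime.datetime.now(datetime.timezone.utc),
            "withdrawn": False,
        }

    def withdraw(self, subject_id: str, purpose: str) -> None:
        # Withdrawal must be as easy as giving consent and cost-free.
        if (subject_id, purpose) in self._entries:
            self._entries[(subject_id, purpose)]["withdrawn"] = True

    def is_valid(self, subject_id: str, purpose: str) -> bool:
        entry = self._entries.get((subject_id, purpose))
        return entry is not None and not entry["withdrawn"]
```

A single blanket consent covering several purposes would collapse all purposes into one entry, which is exactly the “granularity” problem described above: each purpose needs its own affirmative act.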

In any event, obtaining consent does not diminish the controller’s obligations to observe the principles of processing enshrined in the GDPR, especially with regard to fairness, necessity and proportionality, as well as data quality.

Set out below are the main consent requirements under the GDPR that companies will need to examine carefully in order to evaluate whether existing consents (if any) need to be refreshed.

Consent requirements

Consent must be given by a clear affirmative act establishing a:

  • freely given;
  • specific;
  • informed; and
  • unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her.

Where processing is based on consent, the controller must always be able to demonstrate that the data subject has consented to data processing.
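For controllers implementing this in software, the demonstrability requirement is typically met by keeping an auditable consent record. The sketch below is purely illustrative (the schema and field names are assumptions, not prescribed by the GDPR), showing one way to record when and how each consent was given and withdrawn:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Auditable evidence that a data subject consented to one purpose."""
    subject_id: str
    purpose: str                      # e.g. "newsletter" (hypothetical label)
    given_at: datetime                # when the affirmative act occurred
    method: str                       # e.g. "web form opt-in checkbox"
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawal must be as easy as giving consent; the record is
        # retained so that past processing remains demonstrably lawful.
        self.withdrawn_at = datetime.now(timezone.utc)

record = ConsentRecord("user-42", "newsletter",
                       datetime.now(timezone.utc), "web form opt-in checkbox")
```

Keeping the withdrawn record, rather than deleting it, reflects the rule that withdrawal does not affect the lawfulness of processing carried out before it.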

Consent should not be considered freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment (withdrawing consent, for instance, must not entail any costs for the data subject). Consent would likewise not be considered freely given if, for instance, the provision of a service requested by the data subject were made conditional on his or her consent to receive direct marketing.

It is interesting to note that in relationships that cannot be considered perfectly balanced, such as that between employer and employee, it is unlikely that consent requested from the weaker party will be freely given. In this particular case it is advisable to rely on another legal basis for the processing (e.g. the performance of the employment contract and compliance with the employer’s legal and fiscal obligations).

For consent to be informed, the data subject should be aware of[3]:

  • the identity of the controller;
  • the purposes of the processing for which the personal data are intended;
  • what type of data will be collected and used;
  • the existence of the right to withdraw consent;
  • information about the use of the data for automated decision-making (if relevant);
  • the possible risks of data transfers outside the EU in the absence of an adequacy decision and of appropriate safeguards.

In addition, consent must meet a further requirement – it must be explicit – where a data controller intends:

  • to process special categories of personal data (e.g. data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, data concerning health, etc.); or
  • to process personal data for profiling purposes.

To be explicit, consent must be given in written form, including by electronic means, for instance by filling in an electronic form, by sending an email or by using an electronic signature. The use of pre-ticked opt-in boxes is invalid under the GDPR, and silence or inactivity on the part of the data subject cannot be considered an indication of choice.

Another specific consent requirement introduced by the GDPR concerns the offer of information society services to children below the age of 16: here the consent of the holder of parental responsibility over the child must be given. EU Member States may provide by law for a lower age, provided that such lower age is not below 13 years.

Where consent is given in the context of a written declaration which also concerns other matters, the request for consent must be presented in clear and plain language (meaning that it should be easily understandable by the average person, not only by lawyers) and in a manner which is:

  • clearly distinguishable from the other matters; and
  • in an intelligible and easily accessible form.

Data subjects have the right to withdraw their consent at any time, and the data controller must inform them of that right. Withdrawing consent must be as easy as giving it (e.g. clicking a box online). The withdrawal of consent, in any event, does not affect the lawfulness of processing based on consent before its withdrawal.

It should be noted that the controller cannot swap from consent to another lawful basis. For example, it is not allowed to rely retrospectively on the legitimate interest basis in order to justify processing where consent is no longer valid. A data controller must decide before starting data collection which lawful basis applies and must disclose it to the data subject at the time of collection.

Granularity of consent

Recital 43 of the GDPR states that separate consent for different processing operations will be needed wherever appropriate. Mechanisms to collect consent must be granular in order to satisfy, in particular, two requirements: that consent be “free” and “specific”. Granularity of consent means, in short, that it must be clear to data subjects what they are consenting to: they must have a choice and be in control of what they choose to receive from the data controller. Bundling up consent to various activities into one tick box is not acceptable.

Although the processing of personal data for direct marketing purposes may be regarded as carried out for a legitimate interest (Recital 47 of the GDPR) – in particular in the presence of a contractual relationship between data controller and data subject[4] – in most cases a data controller who intends to process personal data for marketing purposes will need to obtain specific consent from the data subjects.

A controller that seeks consent for various different purposes should provide a separate opt-in for each purpose, to allow users to give specific consent for specific purposes.

For instance, specific and separate consent should be requested from the data subject for:

  • the data controller processing personal data to send newsletters and commercial communications for direct marketing purposes (via email, SMS, MMS, fax, mail, phone, etc.);
  • the data controller processing personal data to profile the data subject and send personalized offers;
  • the data controller transferring the data subject’s personal data to third parties so that they may send newsletters and commercial communications for direct marketing purposes;
  • the data controller transferring the data subject’s personal data to third parties so that they may profile the data subject and send personalized offers.
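In implementation terms, granularity means one independent, un-preticked opt-in per purpose rather than a single bundled flag. A minimal sketch (the purpose labels below are hypothetical, chosen to mirror the list above):

```python
# One separate opt-in per purpose; defaults are False, mirroring the
# ban on pre-ticked boxes (no consent unless affirmatively given).
PURPOSES = (
    "direct_marketing",
    "profiling",
    "third_party_marketing",
    "third_party_profiling",
)

def collect_consents(form_data: dict) -> dict:
    """Record only the purposes the data subject affirmatively ticked."""
    return {p: bool(form_data.get(p, False)) for p in PURPOSES}

# A subject may consent to some purposes and not others:
consents = collect_consents({"direct_marketing": True})
```

Because every purpose defaults to `False`, a subject who ticks only one box ends up with consent recorded for that purpose alone, which is the behaviour the “granularity” requirement calls for.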

Pursuant to Article 21 of the GDPR, where personal data are processed for direct marketing purposes, the data subject has the right to object at any time to processing of personal data concerning him or her for such marketing, which includes profiling to the extent that it is related to such direct marketing. In the event that the data subject objects to processing for direct marketing purposes, the data controller must no longer process personal data for such purposes.

Data portability

One of the consequences of basing the processing on consent is – among others – that the data subject acquires the right to data portability set forth by Article 20 of the GDPR, that is to say the right to receive the personal data he or she has provided to the controller in a structured, commonly used and machine-readable format.

At the data subject’s discretion, and where technically feasible, the data controller that originally collected the personal data must transmit the data directly to another controller.
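In practice, a “structured, commonly used and machine-readable format” is often implemented as a JSON (or CSV/XML) export of the data the subject provided. A minimal sketch, assuming a hypothetical in-memory profile:

```python
import json

def export_portable_data(profile: dict) -> str:
    """Serialize the data the subject provided as JSON: structured,
    commonly used and machine-readable (Article 20 GDPR)."""
    return json.dumps(profile, indent=2, ensure_ascii=False)

# Hypothetical profile of subject-provided data:
profile = {"name": "Mario Rossi", "email": "mario@example.com",
           "newsletter_consent": True}
blob = export_portable_data(profile)
```

A format like this round-trips cleanly, so a receiving controller can re-import the data without manual rework, which is the point of the portability right.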

Needless to say, the exercise of this right may significantly impact the business of a company based on the commercial use of its customers’ data.

Milan, 17 July 2018


This note is for information purposes only and it is not to be intended as legal advice. For any further information or to receive advice tailored to your situation, please contact us.

[1]The “controller” is the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data.

[2]Guidelines on Consent under Regulation 2016/679, adopted on 28 November 2017, page 30. The Article 29 Working Party was the advisory body made up of a representative from the data protection authority of each EU Member State, the European Data Protection Supervisor and the European Commission. On 25 May 2018, it was replaced by the European Data Protection Board (EDPB).

[3]As clarified by the Article 29 Working Party – Guidelines on Consent.

[4]For example, a data controller sends e-mail communications to existing clients in order to promote the data controller’s own or similar products or services (see Opinion 15/2011 of Article 29 Working Party on the definition of consent).

Stefania Lucchetti and Pietro Boccaccini

Artificial Intelligence and Legal Personality

“In a scenario where an algorithm can take autonomous decisions, then who should be responsible for these decisions?” Milan-based corporate lawyer Stefania Lucchetti said. My interview in Politico’s article on the introduction of a concept of legal personality for artificial intelligence. This conversation has come of age, and while we do not yet have all the answers, it is very important to start asking the right questions.

Read the article at: https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/

Data Protection as a Corporate Governance Issue

Today we held a round table and seminar at our King & Wood Mallesons office dedicated to data protection, during which we discussed the practical implications of the GDPR from both the legal and the technical side. Aside from the obvious duty to be compliant, my view is that an appropriate data protection structure and responsibility line is not just an IT issue or a legal issue: it is a corporate governance issue, as it entails serious risk management considerations from both a financial and a reputational perspective, and each company therefore needs to deploy sufficient investment to ensure adequate compliance.

Boards need to make an essential philosophical switch in accepting that this is a key enterprise risk which needs to be addressed at board level with adequate resources.

Lack of proper action can entail heavy sanctions for the company under the GDPR, with ensuing board responsibility towards the company (for example, in Italy, under Art. 2392 of the Italian Civil Code for failure to take appropriate action to protect the company).