Do the risks of AI derive from AI being more human than we think?

On 12 June, the Magic Circle firm Slaughter and May, jointly with ASI Data Science, published a white paper (the “Paper”) titled Superhuman Resources – Responsible Deployment of AI in Business, dedicated to exploring the benefits, risks and potential vulnerabilities of AI.

The Paper is well written, well researched and very interesting, and I strongly recommend reading it.

It offers some history of AI and clear, simple definitions of much used (and abused) terms such as static v dynamic machine learning, general v narrow AI, and supervised v unsupervised machine learning.  It defines AI as a “system that can simulate human cognitive processes” and even goes as far as to define the human brain as a “computer on a biological substrate”.

The Paper then identifies the already well known potential of AI – the ability to synthesize large volumes of data, adaptability and scalability, autonomy, and consistency and reliability – as well as its less known or acknowledged potential for creativity.  Machines have in recent years shown a surprising ability to produce creative works in art, writing and design. News-writing bots have already made their appearance, such as the Washington Post’s AI journalist Heliograf, and AI-produced works of visual and expressive art have been widely published.

The key message of the Paper, however, is that while commentary on and investment in AI and machine learning have so far focused on the potential upsides, a critical point (according to the authors) is missing: AI can “only ever be useful if it can be deployed responsibly and safely”. The Paper then identifies, and analyses in some depth, six categories of risk: 1. failure to perform; 2. social disruption; 3. privacy; 4. discrimination (i.e. data bias); 5. vulnerability to misuse; and 6. malicious re-purposing.

Although the analysis is excellent, I do not agree with the Paper’s claim that the risks of AI have so far been ignored. On the contrary, the risks of AI have often been magnified, not least in innumerable well constructed science fiction movies, and regulators are not oblivious to AI. As mentioned in a recent post, the EU Parliament has already set out, albeit not with such a rigorous scientific approach and structure, a number of legal issues which AI raises and will raise, and which need to be addressed for reliability and safety (see my earlier post Why Robots Need a Legal Personality).

What struck me most about the Paper, however, was that in providing a structured and detailed list of the potential risks and pitfalls of AI, it also highlights that the key issue underlying them all derives from the fact that automated processes, as the report itself says by quoting Ian Bogost [2015, The Cathedral of Computation], “carry an aura of objectivity and infallibility” – yet AI is not infallible. Specifically, it is not infallible if the system fails (failure to perform), if it handles data in disregard of privacy laws, or if it makes decisions having been exposed to data which is limited or biased. AI can also create, like any innovation, social disruption; it could be manipulated by humans for malicious purposes, or it could be maliciously re-purposed in the wrong hands.

So we learn that AI is subject to system errors or application errors, may make wrong judgments if it is only exposed to limited data, and may be subject to manipulation.

In essence, the key risks of AI derive from the fact that AI behaves more like a human than humans would think or hope for!  But then, if AI is a “system that can simulate human cognitive processes”, wouldn’t fallibility be an intrinsic characteristic of AI, perhaps only with a lower degree of fallibility than that of humans?

The Paper recommends that businesses “be forward thinking and responsible” in designing and deploying AI that, by design, incorporates systems to mitigate or restrict potential negative effects, and in particular that they set out procedures such as risk assessments and a risk register, monitoring and alerting, audit systems to determine causal processes, and accountability frameworks for algorithms.
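To make the audit-trail recommendation slightly more concrete, the sketch below is my own illustration rather than anything taken from the Paper, and all names in it (log_decision, credit_scorer, decision_audit.jsonl) are hypothetical. It simply shows one way each automated decision could be recorded together with its inputs, output and model version, so that causal processes can later be reconstructed and responsibility assigned.

```python
# Illustrative sketch only: a minimal audit trail for algorithmic decisions,
# in the spirit of the "audit systems" the Paper recommends.
# All names here are hypothetical.
import json
import uuid
from datetime import datetime, timezone


def log_decision(model_name, model_version, inputs, output,
                 log_file="decision_audit.jsonl"):
    """Append one auditable record per automated decision."""
    record = {
        "decision_id": str(uuid.uuid4()),   # unique reference for later review
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,     # ties the decision to a specific model build
        "inputs": inputs,                   # the data the model actually saw
        "output": output,                   # the decision that was made
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]


# Example: record a hypothetical credit-scoring decision
decision_id = log_decision(
    model_name="credit_scorer",
    model_version="2017-06-12",
    inputs={"income": 42000, "existing_loans": 1},
    output={"approved": False, "score": 0.31},
)
print("Logged decision", decision_id)
```

In practice a business would pair a log of this kind with the risk register, monitoring and accountability procedures the Paper describes, rather than treat it as a solution in itself.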

This is all very valuable and very important. I still believe, however, that it misses the key point, exciting and risky at the same time: by allowing a machine to develop cognitive processes (and I deliberately use the word “develop”, as I am not sure “simulate” correctly represents machine learning or creative expression), a new intelligent entity is created, which at some point will, or might, develop to the point of no longer being limited to an instrument used at the hands of humans in accordance with human instructions, but will, or at least might, become self-directed.  As I have pointed out in my earlier post AI legal issues, one of the key issues relating to the next generation of AI is that it will have the ability not just to operate on big data using algorithms designed and built in by humans, but to create its own algorithms.

And this brings me back to my original point, which is that one of the key legal issues to be addressed is determining in which cases, and at which point of development, an AI needs to be given legal personality.

© Stefania Lucchetti 2017.  For further information Contact the Author

Articles may be shared and/or reproduced only in their entirety and with full credit/citation.