In the past few years, companies, government organizations and dedicated research institutions have attempted to create principles and guidelines for Artificial Intelligence (AI) with a view to constructing an ethical framework for its development.
At the governmental level, the European Commission established the High-Level Expert Group on AI, the OECD set up the OECD AI Policy Observatory, and individual governments have created dedicated committees and working groups.
The effort is valuable; however, it is not as obviously directional as it might seem. Even assuming there is agreement that AI should be developed and used “ethically”, individuals, companies and governments differ substantially on what “ethical” means and – even where there is agreement on the underlying principles – on how those principles should be interpreted and, above all, implemented.
This derives from the global, all-encompassing nature of AI. The discussion is, and should remain, ongoing: the debate should continue to develop around what constitutes ethical and reliable AI and, at a more practical level, around which requirements, technical standards and best practices should be implemented to realize it.
The most notable recent efforts (other than those of the EU Commission already discussed in previous posts) are those of the OECD and of the G20.
OECD Principles on AI
On 22 May 2019, the OECD member countries adopted the OECD Council Recommendation on Artificial Intelligence, setting out principles for the development of AI. In addition to the OECD member countries, six partner countries (Argentina, Brazil, Colombia, Costa Rica, Peru and Romania) have adhered to the AI Principles. OECD Recommendations are not legally binding, but they are highly influential in setting regulatory frameworks across a wide range of areas.
The aim of the OECD principles is to promote AI development that is innovative and trustworthy and that respects human rights and democratic values. The principles revolve around inclusivity, respect for human and democratic values, transparency, disclosure and security. Specifically:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency* and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
*(Transparency here means both disclosure of the existence of AI and interpretability, i.e. the ability to understand the decisions made by an AI system and the algorithms on which those decisions are based.)
The Recommendation also calls on governments to:
- Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
- Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
- Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
- Empower people with the skills for AI and support workers for a fair transition.
- Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.
In June 2019, the G20 adopted AI Principles (publicly available on the website of Japan’s Ministry of Foreign Affairs, MOFA) that draw on the OECD AI Principles.
The next step will therefore be the development of metrics and parameters.
The difficulty with regulating AI lies in setting out rules that are practical yet flexible enough to stay relevant. The rules also need to be coherent and consistent with rules in other areas, such as privacy and security.
This, however, should not impair or diminish current efforts. We believe that, while any principles and practical rules set out now will likely become outdated in a short time as the technology evolves, they are nonetheless essential, as they will form the basis for future development and refinement.
Stefania Lucchetti – Founder of Lucchetti-Law Crossborder
This note is for information only and is not to be considered legal advice. For further information, Contact Us.
Articles may be shared and/or reproduced only in their entirety and with full credit/citation.