In the past few years, companies, government organizations and dedicated research institutions have attempted to create principles and guidelines for Artificial Intelligence (AI) with a view to constructing an ethical framework for the development of AI. The most notable efforts are those of the EU Commission and, most recently, the OECD, which put forward inclusivity, transparency, disclosure, respect for human rights and democratic principles, security and accountability. The next step will be the development of metrics and parameters. The difficulty with regulating AI lies in setting out rules that are practical yet flexible enough to stay relevant. The rules also need to be coherent and consistent with related areas such as privacy and security. This, however, should not impair or diminish current efforts. We believe that while any principles and practical rules set out now will likely become outdated within a short time as the technology evolves, they are nonetheless essential, as they will form the basis for future development and refinement.
Software as a Service (SaaS) – Contractual Structure and Issues
The business model for Software as a Service (SaaS) requires careful drafting of the contracts between the software owner or provider, its end users and its distributors.