23 Nov 2018

Approaches to ethics in AI: more alike than apart?

There's been considerable interest recently in the ethical principles required to ensure that artificial intelligence (AI) is adopted sustainably and that we, as a global society, derive the greatest benefit from innovation in this area.


 

Guidance has been published internationally by public bodies, and the private sector has added its views to the debate. Among those contributing to the discussion are the European Commission's Group on Ethics in Science and New Technologies; the UK's House of Lords Select Committee on Artificial Intelligence; the Japanese Society for Artificial Intelligence; and Google. I've spent time looking at the statements of these and other organisations to gauge the level of consensus, and in this article I'll share some of my views.

 

Approaches to regulation

Although even the most advanced forms of "AI" are currently limited to solving particular discrete problems ("narrow" or "weak" AI), the perspectives in the published guidance are highly aspirational. This contrasts with the approach to regulation which, for now, is measured. There's an emerging consensus that it would be premature to regulate AI in general terms, and a sector-oriented approach to regulation may be an appropriate route to take as new use cases unfold.

 

Transparency

Most people will have preconceived ideas of what artificial intelligence is, shaped by media and popular culture. Misconceptions can be dangerous, and if AI comes to be perceived as a threat then public trust will be eroded and adoption restricted. We've seen this happen before, rightly or wrongly, with GM foods being an obvious example. 

So it's hardly surprising to find in the discussion a common emphasis on transparency. The first requirement is for the user to know that she is dealing with an AI. How and when this information is best communicated is something that providers will need to consider in light of existing regulation, and what is fair and reasonable to protect the interests of users.

Using technology in a deceptive way for commercial gain is at best morally questionable, and could lead to criminal charges. But this type of behaviour is not unheard of. In 2016, Canadian dating site Ashley Madison admitted using fake female profiles operated by "chatbots" to persuade male users to pay for the ability to respond to messages from non-existent women. It was alleged that 80% of initial purchases were made to respond to messages written by machines. A US Federal Trade Commission investigation was launched, leading to a $1.6 million settlement.

There is a consensus that providers of AI solutions need to clearly and truthfully communicate both the benefits of AI, and its shortcomings, so that society can make informed choices.

 

Highlights from the March 2018 statement of the European Commission's Group on Ethics in Science and New Technologies:

  • Human dignity is paramount—limits should be set on: (i) misleading individuals to believe they are interacting with a human; and (ii) decisions being made about them autonomously.

  • People must be free to set their own norms and standards and live in accordance with them. Humans must be able to intervene in AI operation, and to determine the extent to which decisions and actions are delegated.

  • Autonomous systems should be used to serve the global social and environmental good, as determined through democratic processes. AI must be aligned to human values.

  • Systems should be 'safe' as regards physical safety of users and their environment, the emotional and mental wellbeing of users, and the reliability and robustness of the system. 

  • The rule of law must be upheld, and liability for risks arising from the use of autonomous systems must be allocated for cases where the necessary harm mitigation strategies are not fully effective.

  • The benefits provided by AI should be fairly distributed, and discriminatory biases in data used to train systems should be prevented, or neutralised and reported, as early as possible.

  • Decisions about how autonomous systems should be regulated should result from public engagement and democratic debate. The use of AI must not subvert or jeopardise value diversity.

 

Dealing with the consequences

So far, no credible answers have been proposed to the big questions about accountability for the acts and omissions of future general AIs. In the current context of narrow AI, that restraint seems sensible. Existing mechanisms for apportioning risk and liability still work for most (if not all) current use cases and, for now, more emphasis should be placed on human control as the basis for accountability, and on strategies for mitigating risks and the potential for harm.

That said, as AI becomes more sophisticated, there may come a time when human control or intervention becomes less effective or feasible. Taking autonomous vehicles as an example, it would defeat the object (and potentially be unsafe) if the AI driver had to ask a human operator – in the vehicle or elsewhere – for input when a safety-critical decision is required. We may instead have to ensure that whatever decision-making process is followed is aligned to our ethical and legal norms, and that the result is at least no worse than if a reasonably competent and diligent human were in the driving seat. If the public is to accept the consequences of those decisions, it will become increasingly important for society to understand, and have the choice whether to accept, the risks of AI autonomy.

 

AI in the UK: ready, willing and able? Five principles from the UK's House of Lords Select Committee on Artificial Intelligence:

  • AI should be developed for the common good and benefit of humanity.
  • AI should operate on principles of intelligibility and fairness.
  • AI should not be used to diminish the data rights or privacy of individuals, families or communities.
  • All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

 

Global technology, local values

Human rights, and cultural, social, and legal norms, are at the very heart of the guidance I've reviewed. However, there is no single set of universal values, so a system designed in China may take a different approach, and produce a different outcome, from a system designed in the US, the Middle East, or Europe. What's more, moral values, whether those of an individual or commonly held by a group of people, are not static and change over time. It would be dangerous to assume that we can build a set of guiding moral principles into a system and expect it always to give results that meet with everybody's approval.

Attempting to formulate a single set of guiding moral principles for AI to apply across all communities is an attractive proposition, but it is likely to fall short of such lofty ambitions. We only have to look at the diversity of legal frameworks that have developed to address other ethical questions (such as gun control, abortion, and data privacy) to realise that this may be asking the impossible.

But, given the connected nature of the world today, it will be vital for the principles adopted around the globe to be mutually supportive and aligned. The risk of adverse impacts from the deployment of AI is highest in those grey areas where different cultural perspectives interact, and the success or failure of any ethical framework that gets established will become apparent at those interfaces.

Because behaviour that would be appropriate in one culture may be frowned upon (or even criminal) in another, developers must be prepared to account for such differences in the design phase, and avoid simply seeking to impose their own standards. Legislators can support developers by working together to develop ethical-legal frameworks which are harmonised, at least to the extent possible, and by giving clear guidance on how to deploy AI within their jurisdictions.

 

Key ethical principles for members from the Japanese Society for Artificial Intelligence (2017):

  • Members should contribute to the peace, safety, welfare, and public interest of humanity, and share the benefits of AI in a fair and equal manner.
  • Members should protect human rights, including the right to privacy.
  • Members should respect cultural diversity, and avoid inequality and discrimination.
  • Members should comply with law and legal obligations.
  • Members recognise the need for AI to be safe and under human control.
  • Members should act with integrity, deliver transparency and engender trust, and explain the technical limitations of AI systems truthfully and in a scientifically sound manner.
  • Members should take steps to prevent the malicious use of AI technologies.

 

Qualified consensus

The views expressed in the guidance seem broadly aligned, focusing on many of the same areas and proposing similar principles. What is less clear, however, is how those broad principles may be interpreted by people from different backgrounds, with different cultural and legal traditions.

Human ethics, and their application to particular scenarios, reflect the diversity of humanity itself, and we must be careful not to make assumptions based on our own points of view. What is certain, however, is that we will continue to see interesting ethical questions raised as technology progresses.

 

Highlights from Google's seven principles:

  • AI should be socially beneficial, weighing up the likely benefits against the foreseeable risks, and respecting cultural, social, and legal norms in the countries in which it operates.
  • AI should avoid creating or reinforcing unfair bias.
  • AI should be built and tested for safety, using strong safety and security practices to avoid unintended results that create risks of harm.
  • AI should be accountable to people, giving appropriate opportunities for human intervention, and be subject to human direction and control.
  • AI should incorporate privacy design principles.
  • AI should uphold high standards of scientific excellence (and Google will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications).
  • AI should be made available for uses that accord with Google's principles, limiting potentially harmful or abusive applications in light of the AI's primary purpose and use, nature and uniqueness, scale, and the nature of Google’s involvement.

 


 

By Chris Eastham, Director and specialist technology lawyer at Fieldfisher LLP.

 

 

