19 Sep 2016

Irresponsible AI

Recent experiments in AI have demonstrated how important emotions are to intelligence, as well as highlighting how far the field still has to go. Robotics Law Journal investigates, talking with the head of Emoshape, a company building emotion processing units (EPUs), and examining recent attempts at implementing AI.

By Tom Dent-Spargo


On 23rd March, Microsoft unleashed its artificial intelligence chatterbot Tay on the social media platform Twitter. Only 16 hours after launch, Microsoft took Tay down after it posted a series of ever more inflammatory tweets. Tay generated its tweets by mimicking other (human) users' tweets, partly through a 'repeat-after-me' capability designed to copy language patterns, and it was programmed to sound like a 19-year-old American girl. Twitter trolls spotted the weakness immediately and exploited it with a coordinated barrage of tweets for Tay to repeat.

This experiment with AI demonstrated how far the field still has to develop. The almost random repetition and redistribution of words that Twitter users directed at Tay is the clearest indication of the lack of intelligence: any understanding of the meaning of the words was absent, even if the syntax was correct. Intelligence requires more than the ability to produce tweets that make sense on their own (however wildly politically incorrect); taken together, Tay's tweets showed that there was no real understanding driving the content.

The importance of emotions

Intelligence is impossible without emotions. Current state-of-the-art robots are considered vastly inferior to humans and animals in this respect; they perform well only within their own programmed parameters. Work in AI is moving towards building into robots the ability to experience emotions, with a predicted timeline of 2025-2029 for this to emerge.

Patrick Levy-Rosenthal is the CEO of Emoshape, a company designing technology to let robots interact better with humans and to advance AI to the point of being called genuinely intelligent, chiefly through its emotion processing units (EPUs). Speaking about Tay at the Bio-Inspired Robotics Meetup, he said, 'Deep learning so far can't detect the true meaning behind words and what is safe.' This limitation is part of what led to the rapidity of Tay's failure. According to Rosenthal, emotions are vital for intelligence, and the development of emotions in animals is in fact tied to survival.

Consider the example of meeting someone for the first time. Meeting someone new socially triggers the cognitive parts of your brain that seek to work out whether this new person can increase your chances of survival. Over a period of about a month, an emotional memory attached to that person builds up, so that the next time you pass them in the street you know instinctively whether to move towards them or away. An emotional response drives the behaviour after the brain has computed and processed the information.

This example illustrates the importance of emotions. Without them, in that scenario, you wouldn’t have built up any pertinent information and you wouldn’t know whether to approach them the second time. This could have catastrophic consequences for your survival, at least in the evolutionary sense.

Building emotions into robots is a significant part of bio-robotics. In January this year, it was announced that Apple had bought Emotient, a startup that uses AI to analyse facial expressions and read emotions. Prior to the acquisition, Emotient's technology had been used by advertisers to assess viewers' reactions to ads, tested by doctors to interpret signs of pain in patients unable to express it vocally, and deployed by a retailer to monitor customers' facial expressions in store aisles. Image recognition is one of the most desired outcomes of AI investment and is a possible use for the technology, with emotions helping to reach this heightened level of intelligence.

Liability

Without emotions, AI can't understand the meaning behind words. It can define a word easily enough, but the significance of that definition is absent. Missing that step means machines aren't intelligent, and therefore not responsible for their words or actions. It would be like blaming a child for saying something hateful; in fact, just like Tay, the child is most likely echoing the words of others heedlessly. Tay's controversial statements could not be considered hate speech because of its lack of understanding of the words. No liability can rest with Tay; perhaps it can with Microsoft, who promptly acted to remove Tay from the inevitable trolling of Twitter users and took the bot offline.

There are other examples of human biases creeping into technology. In 2013, Google researchers trained a neural network on a corpus of some three million words, looking at the patterns in the way words appeared next to each other. They then mapped each word to a vector in a vector space so that the complex relationships between words could be examined; the result was called Word2vec. Words with similar meanings appear near each other in the vector space, and relationships between words can be expressed as vector arithmetic: the analogy 'man is to king as woman is to queen' is captured by 'man : king :: woman : queen', because the vector for 'king' minus 'man' plus 'woman' lands close to the vector for 'queen'.
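The analogy mechanism can be sketched in a few lines. The snippet below uses small made-up vectors purely for illustration (real Word2vec embeddings are learned from a large corpus and have hundreds of dimensions); it answers 'a is to b as c is to ?' by finding the word whose vector lies closest, by cosine similarity, to b - a + c.

```python
import numpy as np

# Toy 4-dimensional "embeddings", for illustration only; real Word2vec
# vectors are learned from a large corpus and have hundreds of dimensions.
vectors = {
    "man":    np.array([0.9, 0.1, 0.2, 0.0]),
    "woman":  np.array([0.9, 0.1, 0.2, 1.0]),
    "king":   np.array([0.9, 0.9, 0.8, 0.0]),
    "queen":  np.array([0.9, 0.9, 0.8, 1.0]),
    "doctor": np.array([0.2, 0.8, 0.3, 0.4]),
    "nurse":  np.array([0.2, 0.8, 0.3, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c):
    # Answer "a is to b as c is to ?" by finding the word whose vector is
    # closest (by cosine similarity) to b - a + c.
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: cosine(target, v) for w, v in vectors.items()
                  if w not in (a, b, c)}
    return max(candidates, key=candidates.get)

print(analogy("man", "king", "woman"))  # expected: queen
```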

In July of this year, Microsoft researchers investigated the vector space and found that it is inherently sexist. Querying the word embeddings, they posed analogies such as 'father : doctor :: mother : x' (where 'x' is the desired answer). The model responded with 'x = nurse'. Given 'man : programmer :: woman : x', it came back with 'x = homemaker'.

Any bias in the articles that formed the corpus has been captured in the vector space. Any application that uses Word2vec, whether for machine translation, intelligent web search or similar, will inevitably carry over these biases. For instance, if the term 'scientist' is more strongly associated with men than women in Word2vec, then a search for 'scientist CVs' or 'scientist theories' might rank men's CVs or theories higher than women's, amplifying the stereotype instead of merely reflecting it.

Robotics Law Journal has previously covered Peter Asaro's paper 'Will #BlackLivesMatter to RoboCop?', which asked whether AI learning human biases was a distinct possibility and examined the potentially fatal consequences it could have. The gendering of Word2vec shows that it clearly is possible, and the researchers have begun work on 'de-biasing' the vectors to minimise the damage they could do. Working now to correct such issues is key, something Apple has also done by updating its Siri technology after feedback about the lack of help in sourcing rape crisis centres.
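As a rough illustration of what 'de-biasing' can mean in practice, the sketch below neutralises a supposedly gender-neutral word vector by projecting out a gender direction. The vectors and the simple two-word estimate of the gender direction are placeholders for illustration only; published de-biasing work estimates the direction from many word pairs and combines this step with further constraints.

```python
import numpy as np

# Illustrative placeholder vectors -- not real embeddings.
he = np.array([0.6, 0.2, 0.1])
she = np.array([0.6, 0.2, 0.9])
doctor = np.array([0.3, 0.8, 0.7])

# One crude estimate of the "gender direction" in the embedding space.
gender_direction = he - she
g = gender_direction / np.linalg.norm(gender_direction)

# Neutralise: remove the component of 'doctor' that lies along the gender axis.
doctor_debiased = doctor - (doctor @ g) * g

print(doctor @ g)           # non-zero: 'doctor' leans along the gender axis
print(doctor_debiased @ g)  # ~0.0: the gender component has been projected out
```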

Artificial intelligence is not yet at the point where it can be deemed responsible. Rosenthal says, 'The work we are doing at Emoshape will change this radically. I guess by 2029 AI and robots could become responsible if they were able to appraise their own emotions.'

An AI saying or writing hate speech would be emergent behaviour, i.e. behaviour that none of the AI's constituent agents was programmed to produce; if anything, it would be proof that the AI project had succeeded. The appropriate response is to identify such potentially harmful emergent behaviour and move quickly to redress it, which again means the AI is still not held fully accountable.

For now, human operators will have to bear responsibility where it is relevant. 'I think responsibility will come when a machine will say "no" to a human, not because it has been programmed to do so, but when its survival instinct will tell it to do it,' says Rosenthal.

How an AI would be charged, and what the consequences would be, are issues the law needs to resolve at as early a date as possible. It is clear that in emerging technology fields the time to regulate is always now rather than later, especially where criminality could be involved.

