26 Jul 2018

Instilling Ethics into AI

Artificial Intelligence (AI) is becoming ever-present, integrating with everyday technologies as well as assisting the operations of huge global businesses. In the future it promises to revolutionise areas such as healthcare, education, and transport. In becoming so widespread, there emerges a need to ensure that AI behaves in an ethical manner to minimise harm and maximise benefits.

By Tom Dent-Spargo

Image: enzozo / Shutterstock.com


From the outset, AI almost looks like it should be automatically objective, its cold computer logic seemingly far removed from human reasoning, but there is huge potential for human biases to be transferred into its design. Every action a human takes is laden with values built up through a lifetime of experience.

When designing AI, it is paramount that the designer is aware of any bias that may affect the workings of the intelligent system. This means minimising unjustified bias and highlighting any other biases that may be present, making the system more transparent. That transparency is what makes an AI’s behaviour explainable, and explainability is necessary for dealing with future incidents in which an AI’s actions must be accounted for, when it causes harm, for example. AI can also be leveraged to protect systems and minimise harm in a very tangible way, for instance by augmenting cyber security.

Rurik Bradbury, Dr Scott Zoldi, Doug Hargove, and Ray Chohan give their insights into how to make AI ethical and what some of the challenges are.


 

Bias

Rurik Bradbury, Global Head of Conversational Strategy at LivePerson.

 

Diversity Matters

The bias against women and minorities in AI is pervasive, and we’re at a critical juncture where programmes such as Carnegie Mellon’s will make a huge difference. Today, a narrow group of programmers – typically white, middle-class men – is creating a fundamental framework for AI that will impact the whole of humanity for decades to come. If we don’t work to educate future generations and make sure diverse voices are heard in this process, AI won’t fulfil its transformative potential, and we’ll face a much-prolonged era of sexist technology.
 
According to LivePerson research:
  • 47% of UK women could name a famous man in tech, compared to only 4.8% who could name a famous woman – roughly ten times fewer, and terrible news for potential female programmers seeking role models.
  • 50.9% of UK respondents could name a famous male tech leader, but only 3.4% could name a female tech leader.
 
The technology industry is a serial offender. Of the 20 largest US technology firms by revenue, 19 have male CEOs. A reassuring element of the recent UK House of Lords report was the acknowledgement of the need to prioritise socio-economic and gender diversity in emerging technologies.
 
 

AI & unconscious bias

Sexism in tech doesn’t necessarily start in the boardroom; it is infiltrating our daily lives as AI becomes ubiquitous. Every major home assistant — including Siri, Alexa, Google Home, and Cortana — speaks by default with a female voice. These are the bots with obedient personalities; the chipper personal assistants that obey your every whim. Have you ever thought about why these main AI assistants are all female by default? 62% of the UK public haven’t, according to new LivePerson research. It underscores just how normalised society’s gender bias is, and that we are programming it into everyday tech. Courses like Carnegie Mellon’s can open the eyes of the public to this creeping bias.
 
 

How to collect data – “garbage in, garbage out”

We need to be hyper-aware of how data is collected by machines, to avoid programming prejudices into technology set to impact and influence the lives of millions. Because machines can’t point out bias like humans, they rely on us to do it for them by providing them with carefully chosen data. If we put a bot into a sexist environment to learn, we inevitably end up with a sexist bot; “garbage in, garbage out” has never been more appropriate!
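To make “garbage in, garbage out” concrete, here is a minimal, hypothetical sketch in Python (not LivePerson’s work): a toy hiring classifier trained on historically biased decisions learns to penalise a protected attribute, even though that attribute says nothing about ability. All feature names and numbers are invented for illustration.

```python
# Hypothetical illustration of "garbage in, garbage out": a toy hiring model
# trained on historically biased decisions simply reproduces that bias.
# Feature names and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two candidate features: a skill score (legitimate) and a gender flag (0/1).
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Biased historical labels: past hiring favoured gender == 0 regardless of skill.
hired = ((skill + 1.5 * (gender == 0) + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model has learned the historical prejudice, not just the skill signal.
print("weight on skill :", round(model.coef_[0][0], 2))
print("weight on gender:", round(model.coef_[0][1], 2))  # strongly negative for gender == 1
```

The point of the sketch is that nothing in the algorithm is “sexist”; the prejudice arrives entirely through the training data it was handed.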
 
 

AI will impact the employment market – and we need to be ready and make retraining accessible to all

The government needs an action plan to identify, agree upon and outline the measures it will take to support a mass-retraining initiative akin to the Marshall Plan. It needs to be big, bold and pre-emptive, partnering with major employers to advertise training and education programmes, and to make sure staff are aware of them as the number of open positions at those employers declines due to automation.
 
While initiatives like this are imperfect – they almost always over- or under-shoot their targets because so much guesswork and chance is involved – they will at least soften the pain of impact.
 
Technology companies and entrepreneurs can also plan ahead, by creating software platforms that rely upon human input and labour in areas where AI is less applicable: things like creativity, empathy and other uniquely human capabilities.


 

Explainability

Dr Scott Zoldi, Chief Analytics Officer at analytic software company FICO.

 

In a conversation I had last month with Garry Kasparov, he scoffed at the term “ethical AI”, saying that AI is neither ethical nor unethical – it’s the people using it who determine the ethics. He has a valid point, particularly in a climate where many people are trying to demonise AI. However, there are some things data scientists need to do to ensure that their work with AI doesn’t have unethical or irresponsible ramifications, particularly when AI is applied to make decisions about access to services such as credit or healthcare. Data scientists need to control for bias and ensure that the relationships driving predictions are causal.
 
Explainable AI is an important part of this: the machine learning model can expose its latent features, so that the relationships between variables can be understood, validated and probed for bias and for the relationships driving prediction. This is an area where I have placed a good deal of our AI focus at FICO, and it will become a more acute challenge because of GDPR, which states that if machine learning is going to be used to make an automated decision, its reasoning must be explained. Explainable AI is a field of science that attempts to remove the perception of AI as a “black box” technology.
 
FICO has been pioneering Explainable AI for over 25 years, and we have found various ways of introducing explainability into machine learning and AI models:
 

  • Scoring algorithms that inject noise and score additional data points around the actual data record being computed, to observe which data variables are impacting the score or decision the most (a rough sketch of this idea follows the list below).
  • Models that are built to express interpretability on top of the inputs to the AI model.
  • Models that change the entire form of the AI to make the latent features exposable.
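As a rough illustration of the first technique, the sketch below perturbs a single record with small amounts of noise, re-scores the perturbed copies and ranks variables by how much the score moves when each one is varied. This is an assumption-laden toy, not FICO’s actual algorithm; the model and data are placeholders.

```python
# Illustrative perturbation-based explanation: jitter one feature at a time
# around a single record and measure how much the model's score moves.
# The model and features here are placeholders, not FICO's system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Toy training data: 3 features, only the first two actually matter.
X = rng.normal(size=(2000, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=2000) > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

def local_sensitivity(model, record, n_samples=200, noise=0.1):
    """Score noisy copies of `record`, varying one feature at a time."""
    base = model.predict_proba(record.reshape(1, -1))[0, 1]
    impact = []
    for j in range(record.shape[0]):
        noisy = np.tile(record, (n_samples, 1))
        noisy[:, j] += rng.normal(scale=noise, size=n_samples)
        scores = model.predict_proba(noisy)[:, 1]
        impact.append(np.mean(np.abs(scores - base)))
    return impact

record = X[0]
for j, imp in enumerate(local_sensitivity(model, record)):
    print(f"feature {j}: mean score shift {imp:.4f}")
```

Features whose perturbation barely moves the score are, locally at least, not driving the decision; features with large shifts are the ones a reason code would need to cite.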

 

Empathy and Accountability

Doug Hargove, Managing Director, Legal at business software and services provider Advanced.

 

Empathy

There is an argument for and against certain job roles (like that of a judge) requiring empathy – and empathy wouldn’t be achievable through AI alone. However, AI can and does strip out the subconscious bias that individuals may apply when making decisions, and therefore removes the emotional angle.
 
What becomes very important is that AI is not the finished article – it is just the beginning. Humans need to have the absolute override and we must all keep in mind that the more tasks that AI performs, the better it will become at them.

 

Accountability

This is quite a buzz topic in the world of AI – especially following the accidents in which self-driving cars have been involved. People who program AI need to ensure it is checked and policed and, if it’s not up to scratch, changed or decommissioned. Facebook did this last year, when it shut down AI bots because they had started talking in their own language, which only they understood.
 
It’s a good example of AI getting out of control, but also of measures being in place to capture it and stop it in its tracks before it got out of hand. AI still needs to be controlled by humans, and ultimately someone has to be accountable for any mistakes and flaws.

 

Customer Experience

There is no doubt AI will aid customer experience. Whether that is processing something quicker, reaching a safe conclusion faster or rendering important information quickly and accurately, these capabilities can and will add to the experience for a customer. However, individuals still like human interaction – they still like the emotional experience of buying (be that a product for sale or a legal service). And while it is likely that more straightforward advice and services will flow to AI, we will continue to see more complex, difficult and sensitive instances handled directly, often face to face, by humans.
 
You can’t teach people to code this into AI from the start. They would need to design the process they want to automate and the data they want to capture in an automated way, as well as think about the touchpoints for the consumer. What type of experience would they want or need?
 
AI can readily fulfil the need for automated experiences, such as a confirmation or an update, but we mustn’t close off human interaction just because AI can do this. AI will learn over time; it will become smarter and more adept. The role of humans must be to guide, develop and consider the consumer touchpoints, while always having an override or failsafe if the need arises.


 

Security

Ray Chohan, SVP, Corporate Strategy at PatSnap, the world’s leading provider of R&D analytics.
 

With large-scale projects like Google’s DeepMind and IBM’s Watson, we are only just beginning to realise the applications that AI could have within almost every aspect of life and business touched by technology. In terms of its application in datacentres, AI can have, and is already having, a significant impact on how infrastructure is run and supported, everywhere from energy usage and cooling technology to robotic handling of day-to-day maintenance and security.
 
For example, a patent published by EMC in 2014 describes the use of AI to improve the cost/performance of storage and the energy savings achieved by datacentre owners. EMC gave it the name Application Aware Intelligent Storage System. Its aim is described as “to always use the minimal amount of energy to achieve provisioning application SLOs with minimal storage resources and to deliver highest price/performance storage value at any time.” This type of innovation in the datacentre could have tremendous implications for both the environment and operating cost.
 
Another interesting application of AI in the datacentre comes from Oracle, in a patent published in May 2017. The patent describes a system and method for Distributed Denial of Service (DDoS) identification and prevention. Among other things, the author sees this invention as a step up from traditional intrusion detection methods that use machine learning, as they “typically require human interaction to improve rule-based binary classifications.” As IoT botnets become more prevalent, Cisco expects DDoS attacks to reach approximately 17 million per year in 2020. With so much IT infrastructure moving toward the cloud, the ability to leverage AI to minimise the damage of these attacks will become extremely valuable.
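For a flavour of how machine learning can flag DDoS-like traffic without hand-written rules, the sketch below (unrelated to Oracle’s patented system) fits an anomaly detector on statistics of normal traffic windows and flags windows that look flood-like. The feature set and numbers are assumptions chosen purely for illustration.

```python
# Illustrative anomaly-based DDoS flagging (not Oracle's patented method):
# fit an IsolationForest on features of normal traffic windows, then flag
# windows whose request rate and source mix look anomalous.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Hypothetical per-window features: [requests_per_sec, unique_src_ips, avg_payload_bytes]
normal = np.column_stack([
    rng.normal(200, 30, size=1000),    # modest request rate
    rng.normal(150, 20, size=1000),    # stable source-IP diversity
    rng.normal(800, 100, size=1000),   # typical payload size
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two new windows: one ordinary, one flood-like (tiny payloads, massive rate).
suspect_windows = np.array([
    [220, 155, 790],
    [9000, 4000, 60],
])
labels = detector.predict(suspect_windows)   # +1 = normal, -1 = anomalous
for window, label in zip(suspect_windows, labels):
    print(window, "-> anomalous" if label == -1 else "-> normal")
```

The appeal of this style of detection is exactly the point made above: the model learns what “normal” looks like from data, rather than relying on a human to keep updating binary rules.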
 
The companies innovating most in this area include Microsoft, Numenta, Amazon, Harris Corporation and IBM, and, perhaps unsurprisingly, the United States is by far the most dominant country in this field, with 84% of the patents filed in this area.

 

 

