30 Jan 2018

Empathy Training

“I taught an AI to empathise” is not a phrase one uses every day, but that is exactly what the Deep Empathy program offers. A collaboration led by Scalable Cooperation at the MIT Media Lab, with input from UNICEF Innovation, it pursues a scalable way to increase empathy.

By Tom Dent-Spargo

Screenshot of Deep Empathy (MIT Media Lab)


The Power of Images

The website greets visitors with an image of Boston transforming into a war-torn landscape, reminiscent of the news that comes out of Syria on a regular basis – not a coincidence. The question posed by the team: “Can we use AI to increase empathy for victims of far-away disasters by making our homes appear similar to the homes of victims?” Utilising deep learning, Deep Empathy learns the visual characteristics of Syrian neighbourhoods that have been affected by conflict in order to visualise how other cities might look if they were similarly afflicted. By pressing people to see familiar sights put through the wringer, the team wants viewers to recognise elements of their own lives through the lens of people living under vastly different circumstances – to see another world.
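The article does not detail the model, but the behaviour described – learning the visual character of one set of scenes and applying it to photographs of another – matches the neural style transfer family of techniques. Below is a minimal sketch of that approach in PyTorch; the filenames, layer choices, and loss weights are illustrative assumptions, not Deep Empathy's actual pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=256):
    # ImageNet normalisation is omitted for brevity; results improve with it.
    tf = transforms.Compose([transforms.Resize((size, size)),
                             transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def gram(feat):
    # The Gram matrix summarises texture statistics: the "style" of an image.
    b, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1: style statistics
CONTENT_LAYER = 21                 # conv4_2: scene layout and content

def features(x):
    styles, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            styles.append(gram(x))
        if i == CONTENT_LAYER:
            content = x
    return styles, content

content_img = load("boston.jpg")            # hypothetical input photo
style_img = load("conflict_district.jpg")   # hypothetical style reference

with torch.no_grad():
    style_ref, _ = features(style_img)
    _, content_ref = features(content_img)

# Optimise the pixels of a copy of the content image so that its deep
# features match the content photo while its Gram matrices match the style.
target = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)

for step in range(300):
    opt.zero_grad()
    style_out, content_out = features(target)
    style_loss = sum(F.mse_loss(a, b) for a, b in zip(style_out, style_ref))
    content_loss = F.mse_loss(content_out, content_ref)
    (1e6 * style_loss + content_loss).backward()
    opt.step()
```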

Deep Empathy outsourced some of its training to the general public, showing visitors pairs of images from Syria side by side and asking them to pick the one that inspires more empathy. By opening the task to a potentially enormous pool of annotators, the algorithm can be trained on far more human judgments than any lab could gather alone. It ends up being a two-way journey: in helping this AI to empathise, can it teach humans to care more?
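Again, the project's internals are not public, but pairwise "which do you prefer?" votes are a standard training signal. A hypothetical sketch, assuming 512-dimensional image embeddings and a Bradley-Terry-style preference loss (the scorer, dimensions, and data here are invented for illustration):

```python
import torch
import torch.nn as nn

# A small network that assigns each image embedding an "empathy score".
scorer = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(feat_a, feat_b, a_chosen):
    # a_chosen: 1.0 where annotators picked image A, 0.0 where they picked B.
    # Bradley-Terry: P(A preferred) = sigmoid(score(A) - score(B)).
    logits = scorer(feat_a) - scorer(feat_b)
    loss = loss_fn(logits.squeeze(1), a_chosen)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch: 32 random "image embeddings" standing in for real features.
fa, fb = torch.randn(32, 512), torch.randn(32, 512)
votes = torch.randint(0, 2, (32,)).float()
print(train_step(fa, fb, votes))
```

A model trained this way ranks any candidate image by predicted empathy, which is how a crowd's judgments could steer which generated images the system favours.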

At this stage, this is just an (admittedly complex) algorithm producing a simple result: generated images for an artistic thought experiment. Down the line comes the question, “If an AI can feel empathy, does it earn any rights with that?” This begins to enter the territory of “Electric Persons”. If thinking machines cease to be regarded as property and are instead attributed personhood, then they can certainly gain rights, but they would also have to shoulder matching responsibilities, especially if they can generate such images by themselves with no human input. Given that these images may contain graphic and sensitive content, should an AI be able to disseminate them at will?

Of course, none of this will matter if AIs never reach the status of persons, and that is a possibility that looks unlikely to be realised. Ryan Abbott has previously spoken to the journal on this issue; he believes the solution is to create new legislation for ownership of creative machines’ output while still treating the machines themselves as legal chattel.
