You auto-complete me: romancing the bot

This is an excerpt from an essay by Maya Indira Ganesh, written for and published as part of the Bodies of Evidence collection of Deep Dives. The Bodies of Evidence collection, edited by Bishakha Datta and Richa Kaul Padte, is a collaboration between Point of View and the Centre for Internet and Society, undertaken as part of the Big Data for Development Network supported by the International Development Research Centre, Canada.


Please read the full essay on Deep Dives: You auto-complete me: romancing the bot

Maya Indira Ganesh: Website and Twitter


I feel like Kismet the Robot.

Kismet is a flappy-eared animatronic head with oversized eyeballs and bushy eyebrows. Connected to cameras and sensors, it exhibits the six primary human emotions identified by psychologist Paul Ekman: happiness, sadness, disgust, surprise, anger, and fear.

Scholar Katherine Hayles says that Kismet was built as an ‘ecological whole’ to respond to both humans and the environment. ‘The community,’ she writes, ‘understood as the robot plus its human interlocutors, is greater than the sum of its parts, because the robot’s design and programming have been created to optimise interactions with humans.’

In other words, Kismet may have ‘social intelligence’.

Kismet’s creator Cynthia Breazeal explains this through a telling example. If someone comes too close to it, Kismet retracts its head as if to suggest that its personal space is being violated, or that it is shy. In reality, it is trying to adjust its camera so that it can properly see whatever is in front of it. But it is the human interacting with Kismet who interprets this retraction as the robot backing away to claim its own space. Breazeal says, ‘Human interpretation and response make the robot’s actions more meaningful than they otherwise would be.’

In other words, humans interpret Kismet’s social intelligence as ‘emotional intelligence’...

Kismet was built at the start of a new field called affective computing, now branded as ‘emotion AI’. Affective computing analyses human facial expressions, gait, and stance, and maps them onto emotional states. Here is what Affectiva, one of the companies developing this technology, says about how it works:

‘Humans use a lot of non-verbal cues, such as facial expressions, gesture, body language and tone of voice, to communicate their emotions. Our vision is to develop Emotion AI that can detect emotion just the way humans do. Our technology first identifies a human face in real time or in an image or video. Computer vision algorithms then identify key landmarks on the face…[and] deep learning algorithms analyse pixels in those regions to classify facial expressions. Combinations of these facial expressions are then mapped to emotions.’
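To make the quoted pipeline concrete, here is a minimal illustrative sketch, and not Affectiva’s actual code or API: it uses OpenCV’s bundled Haar cascade for the face-detection step, while the landmark, expression, and emotion-mapping stages are hypothetical placeholders standing in for the proprietary deep learning models the quote describes.

```python
# Illustrative sketch of the pipeline described above: detect a face,
# locate facial landmarks, classify expressions, map them to an emotion.
# Only the face-detection step uses a real library call (OpenCV); the
# other stages are hypothetical stand-ins, not Affectiva's models.

import cv2

EMOTIONS = ["happiness", "sadness", "disgust", "surprise", "anger", "fear"]

def detect_face(frame):
    """Step 1: find a human face in the image (real OpenCV Haar cascade)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None  # (x, y, w, h) of the first face

def locate_landmarks(frame, face_box):
    """Step 2: identify key landmarks on the face (eyes, brows, mouth).
    Placeholder: a real system would run a trained landmark model here."""
    x, y, w, h = face_box
    return {"left_eye": (x + w // 3, y + h // 3),
            "right_eye": (x + 2 * w // 3, y + h // 3),
            "mouth": (x + w // 2, y + 2 * h // 3)}

def classify_expressions(frame, landmarks):
    """Step 3: analyse pixels around each landmark region.
    Placeholder scores standing in for a deep learning classifier."""
    return {"smile": 0.8, "brow_raise": 0.1, "lip_press": 0.05}

def map_to_emotion(expressions):
    """Step 4: map combinations of facial expressions to an emotion label.
    A toy rule standing in for a learned mapping."""
    if expressions["smile"] > 0.5:
        return "happiness"
    if expressions["brow_raise"] > 0.5:
        return "surprise"
    return "neutral"

def analyse(frame):
    face = detect_face(frame)
    if face is None:
        return None
    landmarks = locate_landmarks(frame, face)
    expressions = classify_expressions(frame, landmarks)
    return map_to_emotion(expressions)
```

The point of the sketch is only the shape of the pipeline: a face, then landmarks, then expressions, then an inferred emotion, with all of the interpretive weight resting on that final mapping.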

But there is also a more sinister aspect to this digitised love-fest. Our faces, voices, and selfies are being used to collect data to train future bots to be more realistic. There is an entire industry of Emotion AI that harvests human emotional data to build technologies that we are supposed to enjoy because they appear more human. But it often comes down to a question of social control, because the same emotional data is used to track, monitor and regulate our own emotions and behaviours...


Author

Sumandro Chattapadhyay

As a Director at CIS, I co-lead the researchers@work programme and engage with academic and policy research on data governance and the digital economy. I can be reached at sumandro[at]cis-india[dot]org.