AI that understands how you really feel

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

The way we interact with technology keeps changing. Of late, many more of us are using speech and gesture to give instructions to our devices, and it’s actually starting to feel natural. We tell Alexa to turn the lights off, silence our smart watches by smothering them with our palms, and unlock our phones with a look. For this to work as seamlessly as it does, our devices have to watch and listen to us attentively. Pretty soon they could begin to understand and anticipate our emotional needs, too.

The move towards what’s been called implicit understanding – in contrast with explicit interaction – will be facilitated by technologies like emotion-tracking AI: technology that uses cues from our vocal tone, facial expressions, and other micro-movements to determine our mood and, from there, our needs. According to researchers at Gartner, very soon our fridge will be able to suggest food to match our feelings, and research VP Annette Zimmermann has even claimed that, “By 2022, your personal device will know more about your emotional state than your own family.”

Perhaps the most high-profile use of emotion-tracking currently is in recruitment. Companies like HireVue use this technology to help firms – including Unilever – select the best candidates for open positions. They do this by having their assessment model process application videos submitted by prospective employees. HireVue’s Chief Technology Officer, Loren Larsen, explained how they use these systems:

“We primarily work with the underlying action units, which are essentially ground truth for what is happening in the face. The HireVue Assessments model — the algorithm — is looking for the things present in an interview that predict performance. It does not have any preconceived notions of which features matter but learns this from the data.”

In short, the AI uses data from existing employees or candidates to find correlations between things that are known (e.g. facts relating to success) and the detectable features we present during a video interview. Our frowns, our smiles, our gesticulations, our vocal pitch. It isn’t reinterpreting human behavior, but trying to understand it as it is currently understood by humans. What subtle signals do we give off when we’re confident? Or resourceful? Larsen also says that HireVue tries to ensure that the datasets used to train the model are culturally compatible with each candidate.
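To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python. It is not HireVue’s actual model: the choice of action units, their weights, and the labels are all invented, and synthetic data stands in for real interview videos. It simply shows how a classifier with no preconceived notions can learn which facial signals correlate with a known outcome:

# Hypothetical sketch: predicting a known outcome from facial action-unit
# (AU) features, in the spirit of the approach described above. The AU
# names are real FACS units, but the data and weights are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: mean intensities of a few action units measured across an
# interview video (AU06 = cheek raiser and AU12 = lip corner puller,
# i.e. components of a smile; AU04 = brow lowerer, a frown).
n_candidates = 500
X = rng.uniform(0, 5, size=(n_candidates, 3))  # columns: AU06, AU12, AU04

# Synthetic "known outcome" labels: 1 = high performer. We plant a
# correlation (smiling up, frowning down) purely so the model has
# something to learn; a real system learns whatever the data contains.
logits = 0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.9 * X[:, 2] - 1.0
y = (rng.random(n_candidates) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model starts with no preconceived notion of which AUs matter;
# the learned coefficients encode the correlations found in the data.
model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
print("Learned AU weights:", dict(zip(["AU06", "AU12", "AU04"], model.coef_[0])))

In a real pipeline the action-unit intensities would come from a facial coding system rather than a random number generator, but the learning step – fit a model, then inspect which features it weights – has the same shape.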

Although this kind of technology reports back to the system owner, it is entirely possible that the same sensitivity could be used to teach other computational systems to respond directly to us. This means that soon we might live with personal devices that calm us when they sense our nervousness, co-pilot a vehicle when they detect a fatigued driver, or challenge us when we’re filled with contempt (yes, contempt is apparently a measurable and “datafiable” emotion…). This interaction already has a term with some lineage – it’s known as “affective computing.”
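As a rough illustration of that feedback loop, the “respond right back” half can be as simple as a lookup from a detected emotional state to a device action. The Python sketch below is hypothetical from end to end – the detect_emotion stub stands in for a real emotion-tracking model, and the responses are invented:

# A toy affective-computing loop: map a detected emotional state to a
# device response. Everything here is illustrative, not a real product.
from typing import Callable

# Hypothetical state-to-action table, echoing the scenarios above.
RESPONSES = {
    "nervous": "play calming audio",
    "fatigued": "offer to take over driving",
    "contempt": "prompt the user to reconsider their reply",
}

def respond(detect_emotion: Callable[[], str]) -> str:
    """Pick a response for the currently detected emotion, if any."""
    emotion = detect_emotion()
    return RESPONSES.get(emotion, "no action")

# Example: a stubbed detector standing in for a real emotion tracker.
print(respond(lambda: "nervous"))  # -> "play calming audio"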

So what concerns should we have, if any? First, we should pay attention to just how these receptive systems calculate our moods. Even Paul Ekman, the co-developer of the Facial Action Coding System used to train these intelligent algorithms, admits that “no one has ever published research that shows automated systems are accurate.” Emotion-tracking industry leader Affectiva has likewise conceded that there are challenges with collecting data on less frequent emotions, like pride or inspiration, and Larsen agrees that “anything done with machine learning is imperfect.” What will it mean to be misinterpreted in a future world in which we are more dependent on emotionally intuitive systems?

Second, if emotion-tracking AI becomes cheap and widely available, we could see it integrated with a variety of other technologies – like the facial-recognition policing systems currently deployed by the Chinese government. Could some states end up with Minority Report-style policing, whereby political resistance could be evidenced by a facial expression or vocal inflection? What if our employers were able to read our attitudes? Or our relatives, for that matter? Relationships could shift immeasurably if we move from a world of passive noticing to one of active monitoring.

Yet, although it’s important to be vigilant about AI concerns, it would be remiss not to consider the benefits that emotion-reading, empathic devices could yield. These are also plentiful. Emotionally responsive systems could transform education for struggling students by sensing and adapting to their despondency, provide support and companionship for those suffering from depression, and open up the world of emotion and expression for those with autism. And these are just a few applications – doubtless there are many others in the works.

Emotions are not always easy to read. Even for humans. Our subtle, involuntary responses do not always land when we need them to. Sharpening their detection could be just what is needed to overcome cognitive biases and other everyday flaws of human judgment. Empathetic AI might seem cold and dispassionate to some of us, but it is coming fast. Its imminent arrival heralds a step-change in our relationship with computational devices and, ultimately, one another.

 
