The Eyes Have It: Three Reasons to be Cautious About Emotion-Tracking Recruitment AI

Predictive, data-driven software is becoming ubiquitous, and our reliance upon it is steadily intensifying. The locus of knowledge is becoming external to us again for the first time since the 18th-century Enlightenment, and we increasingly prefer the forecasts of artificially intelligent systems to our own experience or intuition.

Of all the arenas in which these predictions fascinate and compel our decision-making, perhaps the most prevalent are those in which algorithms foretell the behaviors of our fellow human beings: what they prefer, what they react to, where they go, who they’ll flirt with, whether they’re likely to pay back a loan, or even commit a crime.

Quite simply, we are coming to believe that machines know us better than we can know ourselves. 

Perhaps the most jarring example of this new reality comes in the form of emotion-tracking AI. These systems claim to be able to read our moods, emotions and personality traits by analyzing the micro-movements of our faces. According to practitioners like Human, such systems can make unbiased assessments of people in a way that bypasses the highly flawed cognitive biases of mere mortals.

Unsurprisingly, this software is gradually being adopted by recruiters keen to circumvent human prejudices regarding factors like race and gender – but is it really the case that smart AI like this can be ethically neutral? Setting aside popular concerns about data privacy, here are three reasons to be cautious:

1. Humans are complex

Humans are vastly complex, and yet the very nature of AI systems is that they seek to simplify complexity into easily digestible chunks of information. Swathes of data are often fairly crudely categorized, and these categories are then used as the basis for further extrapolation. A glance, a click, an address, a purchase – they all become proxies for something else: who we are, what we earn, how we dress, which cereal we prefer.
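To make this concrete, here is a toy sketch in Python – with entirely invented thresholds and labels, not taken from any real product – of how a rich, continuous signal gets flattened into a crude category that then stands in for something it never directly measured:

```python
# Toy illustration of crude categorization: a continuous, ambiguous
# measurement is flattened into a coarse bucket, and the bucket then
# stands in for a personality trait. All thresholds are invented.

def label_candidate(smile_intensity: float) -> str:
    """Map a raw 0-to-1 facial measurement onto a crude trait bucket."""
    if smile_intensity > 0.7:
        return "enthusiastic"   # a smile becomes a proxy for attitude
    if smile_intensity > 0.3:
        return "neutral"
    return "disengaged"         # ...and its absence, a proxy for apathy

# Two candidates, a hair's breadth of measurement apart:
print(label_candidate(0.71))  # "enthusiastic"
print(label_candidate(0.69))  # "neutral" - the nuance between them is gone
```

Everything below the threshold that distinguished the two candidates is discarded, and only the label is carried forward into later decisions.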

In the case of emotion-tracking software, small facial cues like frowns and smiles are ultimately taken to signify something more profound about the way we behave generally – e.g. whether we’re “honest” or “passionate”. And yet, however strong the correlation between – say – a frown and confusion, we must also remember the golden rule: correlation does not imply causation. Just because these two occurrences often go hand-in-hand does not make it a provable fact that all frowns are caused by confusion, or that either one automatically implies the presence of the other.
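A quick simulation makes this concrete. In the hypothetical sketch below, a hidden common cause – how hard the interview question is – independently drives both frowning and confusion, so the two correlate strongly even though neither causes the other:

```python
# Minimal sketch of confounding: a hidden common cause produces a strong
# frown/confusion correlation even though neither one causes the other.
# All probabilities are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: whether the interview question was hard.
hard_question = rng.random(n) < 0.5

# A hard question independently raises the chance of frowning AND of
# confusion; the two are never linked to each other directly.
p = np.where(hard_question, 0.9, 0.1)
frown = rng.random(n) < p
confusion = rng.random(n) < p

print(f"frown/confusion correlation: {np.corrcoef(frown, confusion)[0, 1]:.2f}")
# Prints a strong positive correlation (roughly 0.64), yet suppressing
# frowns would do nothing at all to confusion.
```

The association lives entirely in the confounder – which is precisely the kind of structure a pattern-matching system cannot see from correlations alone.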

Even Paul Ekman, the co-developer of the Facial Action Coding System (FACS) used to train these intelligent algorithms, admits that, “no one has ever published research that shows automated systems are accurate.”

These machines supply evidence – but not proof – of certain “personality traits”. And their findings come from plowing through extensive – but not exhaustive – databases of human expression. For now, AI tracking software is just one part of a hiring manager’s toolkit, but we should be wary of its robustness if its influence ever grows.

2. Emotions vs. ability

Even if we could unassailably prove the reliability of an AI emotion-tracker, other important considerations remain. People seek employment for a variety of reasons; most simply want to feed their families, pay their rent, and perhaps take a vacation once in a while. Not every role published on the job market is likely to elicit emotions like “passion”, “curiosity”, or a meaningful level of enthusiasm. And yet this alone shouldn’t exclude a decent, capable candidate from the hiring process and the chance of success.

It’s reasonable to assert that not every eager applicant is right for the role, and that not every potentially excellent employee wants to lead a cheer for the job or the company they hope to join. But Loren Larsen, the chief technology officer at HireVue – a company whose emotion-tracking AI is used by Unilever – told the FT that such systems could actively privilege the “right” (i.e. positive, enthusiastic) emotions over historical data-points like qualifications and experience:

“[Recruiters] get to spend time with the best people instead of those who got through the resume screening. You don’t knock out the right person because they went to the wrong school.”

In many ways, it’s easy to view this as a positive: damaging establishment biases and “old school tie” mentalities would be cut off at their source. But doesn’t it also increase the opportunity for candidates to be deselected on the seemingly superficial basis of not making the right face at the right time (to put it crudely)? If the unenthusiastic are doomed to remain unemployed, we are merely replacing one pernicious feedback loop with another.

Making a decision that is not a wrong decision does not always mean making the right decision. It can just mean making a different wrong decision.

3. Changing our behavior

Finally, one of the main objections to the idea that we must all have a top education or hold specific qualifications to succeed is that it forces a kind of homogenization, while giving priority to those members of society who are better able to attain certain standards (usually thanks to unearned privilege). By insisting on strict criteria for background and experience, employers obstruct many good applicants, and there is a growing consensus that they need to look beyond the cookie-cutter mold to broader signs of promise.

And yet while, as discussed, emotion-tracking merrily avoids forcing us to be Harvard-educated over-achievers in order to succeed, might it not simply encourage us to be some other way? A way that changes our behaviors and how we express ourselves as humans?

No longer would firms simply dictate our university major or how many internships we must suffer; they could also pre-determine the ways in which we are in the world. Two things could then happen: i) people who naturally exude honesty, dedication and enthusiasm via their facial movements bounce up to the top of the list, perhaps bypassing more capable candidates; and ii) those who continually fail to find a job experiment with the way they present themselves until they hit upon a facade that appeals to large corporations.

What’s wrong with that, you may ask? Well, arguably this hands large firms the power to dictate the parameters of a mass behavioral homogenization. Eccentrics, shyer folk, and those who naturally suppress emotional responses – or cannot produce them, for some physical or psychological reason – could find themselves factored out by an algorithm. If you don’t behave in a way that aligns with the new, desirable norm, then you can shape up or ship out…

What’s more, over time we might not simply see these people edited out of the hiring process, but also vanishing from society more broadly.


There are – inevitably – counterarguments to some of my objections. The vastness of the databases held by companies like Affectiva means that the range of facial movements being monitored can be pretty broad, covering a span of cultural (and other) differences. And yet, unless the system has surveyed a database comprising the entire human race, it is still dependent upon comparatively narrow sets of examples. This is a problem if you unknowingly express yourself in a way that is atypical.
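As a crude illustration of this point – a nearest-neighbour toy with invented data, standing in for far more sophisticated systems – a matcher can only hand out the labels its reference set already contains:

```python
# Minimal sketch of the "narrow reference set" problem: a matcher can only
# label what resembles its examples. All vectors and labels are invented.
import numpy as np

# Tiny "database" of expression features: (smile intensity, brow movement).
reference = {
    "enthusiastic": np.array([0.9, 0.8]),  # big smile, animated brows
    "disengaged":   np.array([0.1, 0.2]),  # flat affect
}

def classify(features: np.ndarray) -> str:
    """Return the trait whose reference vector is nearest."""
    return min(reference, key=lambda t: np.linalg.norm(features - reference[t]))

# Someone who rarely smiles but signals engagement through brow movement
# alone is forced into the nearest crude bucket:
print(classify(np.array([0.15, 0.9])))  # "disengaged" - atypical, mislabeled
```

The atypical candidate is not measured badly; they are simply unlike the examples the system was built from, so the nearest available pigeonhole wins.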

Lastly, when considering new, seemingly neutral mechanisms like this it is always important to remember that no system is totally impartial. Whether bias slips in via a human programmer or the balance of data, it can sit quietly festering in the background, damaging opportunities for a minority of people.

And, as I have tried to demonstrate, even where damaging biases are removed, there are still reasons to be cautious before abdicating our judgment and handing it to this kind of AI. We must be sure that the path we’re following leads somewhere we wish to go.

Whatever the future holds for emotion-tracking, we should not simply accept its results as fair or fact. We must question, critique and – when needed – deploy our own human judgment and semantic knowledge. Sometimes machines move quickly and “talk a good talk” but, at least at this stage, they still do not truly understand the human condition. We shouldn’t allow this speed, convenience and slick AI marketing to convince us otherwise. 
