
We’re fast becoming accustomed to clandestine observation. Quietly, in the background, algorithms have been watching our facial expressions, features, and behavioral mannerisms to try to establish a supposed “understanding” of such things as our job suitability, our sexuality, which “subcategory of person” we fit into, and even our propensity for criminality.
Of course, that’s on top of the mammoth and ongoing analysis of the vast digital footprints Big Tech companies use to fuel their hit-and-miss predictions.
But what about an AI tool that can diagnose your mental health — or more specifically, whether you’re a psychopath — just by looking at you?
Well folks, here we are.
A study recently published in the Journal of Research in Personality, “Quantifying the psychopathic stare: Automated assessment of head motion is related to antisocial traits in forensic interviews”, shows “promising” signs of just such a technology.
What is it?
The study reportedly represents an “important first step” in demonstrating the feasibility of using computer vision in conjunction with psychology. Employing machine learning and image processing, the experimenters extracted head movements from lengthy recorded interviews with 507 inmates. The algorithm then analyzed each video frame of the face individually, with the experimenters hypothesizing that “head dynamics” would be related to psychopathic traits.
The team then used the 20-item Hare Psychopathy Checklist–Revised (PCL–R) to determine levels of psychopathy, and found that “As predicted, dwell times indicate that those with higher levels of psychopathic traits are characterized by more stationary head positions, focused directly towards the camera/interviewer, than were individuals low in psychopathic traits.”
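To make the paper’s core measure a little more concrete, here is a minimal sketch of how a “dwell time” metric might be computed from per-frame head-pose estimates and related to trait scores. This is not the authors’ code: it assumes head yaw/pitch angles have already been extracted by some face-tracking step, and the 10-degree threshold, the toy data, and the function names are illustrative assumptions rather than details taken from the study.

```python
# Illustrative sketch only (not the study's implementation).
# Assumes per-frame head yaw/pitch angles (degrees, 0 = facing the camera)
# have already been extracted by a face tracker, and a PCL-R total is
# available for each subject.

import numpy as np
from scipy import stats

def dwell_time_toward_camera(yaw, pitch, threshold_deg=10.0):
    """Fraction of frames in which the head is held roughly toward the camera.

    yaw, pitch    : 1-D arrays of per-frame head angles in degrees.
    threshold_deg : how far the head may deviate and still count as
                    "stationary, facing forward" (an assumed cutoff).
    """
    yaw = np.asarray(yaw, dtype=float)
    pitch = np.asarray(pitch, dtype=float)
    facing = (np.abs(yaw) < threshold_deg) & (np.abs(pitch) < threshold_deg)
    return facing.mean()

# Toy example: three simulated "subjects" with head-angle tracks and PCL-R totals.
rng = np.random.default_rng(0)
subjects = {
    "A": {"yaw": rng.normal(0, 4, 5000),  "pitch": rng.normal(0, 4, 5000),  "pclr": 30},
    "B": {"yaw": rng.normal(0, 12, 5000), "pitch": rng.normal(0, 12, 5000), "pclr": 18},
    "C": {"yaw": rng.normal(0, 20, 5000), "pitch": rng.normal(0, 20, 5000), "pclr": 8},
}

dwell = [dwell_time_toward_camera(s["yaw"], s["pitch"]) for s in subjects.values()]
pclr = [s["pclr"] for s in subjects.values()]

# The study's headline finding amounts to a positive association between
# these two columns: more time spent facing the camera, higher PCL-R score.
r, p = stats.pearsonr(dwell, pclr)
print(f"dwell times: {np.round(dwell, 2)}, PCL-R: {pclr}, r = {r:.2f}")
```

In other words, the “psychopathic stare” claim boils down to a correlation between a single behavioral summary statistic and a checklist score, which is exactly why the caveats below matter.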
So what?
As with many of the earlier examples, technology-driven results like this are increasingly being termed “phrenology 2.0” after the original spurious 19th century pseudoscience. Critics remind us of the famous axiom “correlation is not causation”, and implore the public not to give any heed to questionable experiments tested with small, non-representative samples.
Nevertheless, in a world where for many “data is truth”, it’s likely that such systems will be taken seriously, and perhaps eventually deployed. In such instances, they have the potential to do untold harm.
Unbelievably, the researchers suggest that this AI could be used to “aid law enforcement” so that they might “understand the personality of the person being interviewed.” This should make us ask what damage a false positive would do in this context, and how a finger-in-the-air AI guesstimate about someone’s mental health might lead to unfair or biased treatment, or, at the very least, an unhelpful interaction with a vulnerable person.
Worse still, this kind of simplistic “pre-diagnosis” could cast an individual in a certain light without them even knowing about it — or being able to provide extra context or a counter.
What else?
Even if a stationary head were a genuine marker of psychopathy, it is undoubtedly not sufficient for a diagnosis. And without testing this software across cultures, generations, and subjects with different abilities, it is impossible to know who else this limited criterion might capture and inaccurately label with psychopathy, a diagnosis with significant implications.
As with every other computer vision-based technology, we must also worry about this kind of “remote diagnosis AI” being integrated into smart cameras in our doctor’s offices, supermarkets, and other public places. Or being used to analyze the video footage used to assess suitability for a job, loan application, or anything else.
If not here, then in more authoritarian jurisdictions, where intrusive observation technology is often weaponized against the citizenry.
Heightened vigilance
Responding to last year’s consultation on the European Commission’s strategy for data, digital rights campaign group Access Now pushed for a ban on:
“…the placing on the market, putting into service or use of AI systems that use physiological, behavioural or biometric data to infer attributes or characteristics of persons or groups which are not solely determined by such data or are not externally observable or whose complexity is not possible to fully capture in data, including but not limited to:
- Gender & gender identity
- Race
- Ethnic origin
- Political orientation
- Sexual orientation
- Mental health status
- Migration status
- Or other grounds on which discrimination is prohibited under Article 21 of the EU Charter of Fundamental Rights.”
These strong recommendations are increasingly being echoed by other groups and individuals. But we still need to be vigilant when it comes to any complex idea or condition that is simplified beyond recognition so it might be quantified and “understood” by AI systems. Especially when the tendency to place unquestioning trust in algorithmic output still abounds.