Here’s How AI Could Diagnose You With Psychopathy

We’re fast becoming accustomed to clandestine observation. Quietly, in the background, algorithms have been watching our facial expressions, features, and behavioral mannerisms to try to establish a supposed “understanding” of such things as our job suitability, our sexuality, what “subcategory of person” we fit, and even our propensity for criminality.

Of course, that’s on top of the mammoth and ongoing analysis of the vast digital footprints Big Tech companies use to fuel their hit-and-miss predictions. 

But what about an AI tool that can diagnose your mental health — or more specifically, whether you’re a psychopath — just by looking at you?  

Well folks, here we are. 

A study recently published in the Journal of Research in Personality, “Quantifying the psychopathic stare: Automated assessment of head motion is related to antisocial traits in forensic interviews,” shows “promising” signs of just such a technology.


Intentional Harm: Preparing for an Onslaught of AI-Enabled Crime

“The degree to which this diversity of criminal acts may be enhanced by use of AI depends significantly on how embedded they are in a computational environment: robotics is rapidly advancing, but AI is better suited to participate in a bank fraud than a pub brawl. This preference for the digital rather than the physical world is a weak defence though as contemporary society is profoundly dependent on complex computational networks.”

AI-enabled future crime report

The field of AI ethics has received much (very worthy) attention of late. Once an obscure topic relegated to the sidelines of both tech and ethics conversations, the subject is now at the heart of a lively dialogue among the media, politicians, and even the general public. Everyone now has a perspective on how new technologies can harm human lives, and this can only have a preventative effect in the long term.

But whether it’s algorithmic bias, intrusive surveillance technology, or social engineering by coercive online platforms, the current discourse tends to center on the overzealous, questionable or destructive use of new tech, rather than outright criminality. Yet it would be foolish to discount the very real prospect of AI being systematically weaponized for unequivocally criminal purposes. 

As AI technology advances and proliferates, so do the methods available to would-be attackers and fraudsters. And as our world becomes more networked, the attack surface keeps growing.

In short, it is a very exciting time to be a technically-minded crook. 
