“The degree to which this diversity of criminal acts may be enhanced by use of AI depends significantly on how embedded they are in a computational environment: robotics is rapidly advancing, but AI is better suited to participate in a bank fraud than a pub brawl. This preference for the digital rather than the physical world is a weak defence though, as contemporary society is profoundly dependent on complex computational networks.”
“AI-enabled future crime” report (UCL)

The field of AI ethics has received much (very worthy) attention of late. Once an obscure topic relegated to the sidelines of both tech and ethics conversations, the subject is now at the heart of a lively dialogue among the media, politicians, and even the general public. Everyone now has a perspective on how new technologies can harm human lives, and this can only have a preventative effect in the long term.
But whether it’s algorithmic bias, intrusive surveillance technology, or social engineering by coercive online platforms, the current discourse tends to center on the overzealous, questionable or destructive use of new tech, rather than outright criminality. Yet it would be foolish to discount the very real prospect of AI being systematically weaponized for unequivocally criminal purposes.
As AI technology is refined and proliferates, so do the methods available to would-be attackers and fraudsters. And as our world becomes more networked, the attack surface grows and grows.
In short, it is a very exciting time to be a technically-minded crook.
Last month, researchers at University College London (UCL) in the UK published a paper that grapples with the length and breadth of what they call “AI-enabled future crime.” Building on workshops that convened a diverse group of stakeholders from security, academia, public policy and the private sector, the study tries to understand and rank the various criminal threats posed by AI technologies.
In essence, the project seeks to understand where new bogeymen may lurk. Here’s a rundown of those they deem the worst.
- Audio/Visual Impersonation
This blog has tackled the issue of deepfakes extensively, so it’s no surprise to read that experts now rank the tech as a serious criminal threat. The UCL study gives some pretty evocative examples of how audio/video impersonation might be deployed maliciously, including bad actors pretending to be children to extract funds from elderly parents, impersonating individuals over the phone to request access to secure systems, and generating fake video that shows public figures speaking or acting reprehensibly in order to manipulate support. We’ve already seen examples of the latter, and though deepfakes are currently largely detectable, it’s notable that the experts consulted for the research said that these dupes would ultimately be very difficult to defeat.
Somewhat depressingly, the group determined that the only way to combat deepfakes and other audio/visual impersonation may be through changes to citizen behaviour, i.e. encouraging people to distrust visual media, which quite clearly constitutes an indirect societal harm in and of itself. Another secondary effect noted in the report is that if even a small fraction of visual evidence is shown to be convincingly faked, it becomes much easier to discredit genuine evidence, undermining criminal investigation and the credibility of political and social institutions that rely on trustworthy communications.
- The Weaponization of Driverless Cars
While much of the discussion around autonomous vehicles centers on safety and judgment, this report raises a further concern: could the introduction of this technology catalyze an explosion in “vehicular terrorism” by removing the need to recruit willing, suicidal drivers? It asks whether a hijacked autonomous system would allow a single perpetrator to perform multiple attacks at once, while acknowledging that this would probably take significant skill and organization. Even so, the experts consulted judged such scenarios to be “highly achievable and harmful.”
It is certainly sobering to consider that events like the 2016 Bastille Day attack in Nice could be both replicable and scalable.
- Tailored Phishing

Okay, so phishing probably doesn’t keep many of us up at night. We’re pretty internet savvy when it comes to spotting bogus-looking bank emails and pleas to correct our sign-in information from fraudsters posing as Apple or PayPal. However, signs suggest that phishing is about to get a whole lot more sophisticated, as criminals change tactics from the old “spray and pray” approach and begin using AI that can convincingly tailor those duplicitous emails so they look incredibly legit. By scraping information from social media, they can mimic the style of a trusted party and really home in on their target (and that target’s specific vulnerabilities).
It’s essentially highly dynamic, automated spear phishing that can learn what works and what doesn’t, and adapt in real time.
- Disrupting AI-Controlled Systems
It’s now becoming a cliché to say that AI is all around us, but it is nevertheless true. Most of us will have interacted with some kind of AI (in the shape of our smartphones) before we even get out of bed. We also use AI in various ways throughout our homes and as part of our working lives but, importantly, governments and large institutions also lean heavily on AI-enabled systems to help control and coordinate critical infrastructure. The UCL study notes that, in the case of the latter, “learning based systems are often deployed for efficiency and convenience rather than robustness.”
This all spells good news for criminals and terrorists keen to cause widespread power failures, create traffic gridlock, disrupt food supplies and wreak havoc with financial transactions. And the study confirms that the more complex a control system is, the more difficult it can be to defend completely.
The saving grace here is that critical infrastructure attacks typically require detailed knowledge of, or access to, the systems involved. That said, there are clearly worrying vulnerabilities that emerge in parallel with AI advancements.
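The report itself stays at the level of systems and incentives, but a toy sketch can make the underlying fragility concrete. The snippet below is a hypothetical illustration (it does not appear in the UCL study): it trains a simple classifier on synthetic data, then shows how a deliberately crafted nudge, far smaller than the natural variation in the data, flips the model’s decision. Real attacks on infrastructure control systems are vastly more involved, but the principle that models optimised for convenience rather than robustness can be steered is the same.

```python
# Toy illustration (not from the UCL report): a model deployed for convenience
# can have its decision flipped by a small, deliberately crafted input change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "sensor readings": two clusters standing in for normal vs. abnormal states.
X = np.vstack([rng.normal(-1.0, 0.5, size=(200, 2)),
               rng.normal(+1.0, 0.5, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# A reading the model confidently labels as class 0 ("normal").
x = np.array([[-1.0, -1.0]])
print("original prediction:", model.predict(x)[0])

# Nudge the reading along the model's weight vector just far enough to cross
# the decision boundary: a small change in input, a flipped decision in output.
w, b = model.coef_[0], model.intercept_[0]
margin = -(x @ w + b)[0]                     # logit distance to the boundary
delta = (margin + 1e-3) * w / np.dot(w, w)   # minimal push across it
x_adv = x + delta

print("size of perturbation:", float(np.linalg.norm(delta)))
print("perturbed prediction:", model.predict(x_adv)[0])
```

None of this is how an attack on, say, a real traffic-management system would look, but it captures why the experts worry that the more a control system leans on learned components, the harder it becomes to defend completely.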
- Large-Scale Blackmail

Blackmail is a strange crime in that it explicitly solicits the cooperation of its victim. Typically, would-be blackmailers are also subject to limiting factors, like the acquisition of evidence, before they can pursue their target. The UCL study predicts that AI will change all that, allowing criminals to harvest vast quantities of sensitive personal information from social media or from large personal datasets (email logs, browser history, hard drives, etc.), sift through it for vulnerabilities and then target the individuals concerned with tailored communications. Perhaps worse, AI could even be used to generate fake evidence for the purposes of blackmail.
The experts involved in the research ranked this as one of the most troubling AI crimes as it is difficult to defeat for the same reasons as regular blackmail — the reluctance of the victims to come forward due to fear of embarrassment or exposure.
- AI-Authored Fake News
Back in August there was much ado about GPT-3, OpenAI’s Frankensteinian language model that can write, create, and even code thanks to being trained on the corpus of the works of all mankind (basically). Given AI practitioners’ determined (and sometimes baffling) focus on having technology create “things that pass as human”, it’s unsurprising that false information and fake news are now near the top of the list of things we should be getting really concerned about.
Specifically, the UCL research expresses concern that fake news in sufficient quantities could distract attention away from real information, especially given that AI can generate convincing content at great speed, with surprising specificity, and in different versions that can be channeled through multiple sources to boost visibility and credibility.
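It is hard to appreciate how cheap this kind of generation has become without seeing it. As a rough, hedged illustration, the sketch below uses the open-source Hugging Face transformers library with the small GPT-2 model as a stand-in (GPT-3 itself is only available through OpenAI’s hosted API) to produce several differently worded continuations of a single prompt in one call. The output quality is far below what the post describes, but the economics are the point: many plausible-sounding variants for near-zero marginal effort, ready to be pushed through multiple channels.

```python
# Minimal sketch: machine-generated text variants with an off-the-shelf model.
# Uses the Hugging Face `transformers` library and GPT-2 as a small stand-in.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Local officials confirmed today that"

# One call, several differently worded continuations; sampling is what makes
# each returned sequence different from the others.
variants = generator(prompt, max_length=40, num_return_sequences=3, do_sample=True)

for i, variant in enumerate(variants, 1):
    print(f"--- variant {i} ---")
    print(variant["generated_text"])
```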
These six examples of AI-enabled crimes were those ranked most hazardous at this moment in time. But the “best of the rest” include military robots, snake oil (criminals using jargon to sell fake AI services to the unsuspecting), data poisoning (deliberately introducing bias into datasets), tricking facial recognition, financial “market bombs”, AI-assisted stalking, and many, many more.
The more domains AI touches, the more possibilities there are. And as with the burgeoning conversations around AI ethics, technology makers will have to get wise to how their products could be vulnerable to, or weaponized for, these types of attacks and frauds. In this strange new world, we’re learning very quickly that we have to be more thoughtful and creative than the criminal masterminds who would seek to use exciting new tools for harm and profit.