A lot has been made of surveillance technology in recent years. Our once relatively benign CCTV setup has been given an AI-driven makeover. At the same time, lower production costs have facilitated a kind of “camera creep” evident in the boom in home security cameras, police bodycams and the trend for heightened employee surveillance.
Many cameras now also have so-called facial recognition technology baked in, allowing them to identify and track rule breakers. (Though a substantial body of evidence shows that the technology is often deeply flawed and discriminatory.)
And while surveillance cameras have become more pervasive and (arguably) more sophisticated in what they can identify — like faces or employee inattention or even a distinctive gait — bigger plans are afoot, and a recent investment boost for a small UK start-up called Mindtech Global might give us a clue as to how things will unfold.
The firm, which received $3.25M in its latest funding round, has created a platform called Chameleon, which its leadership claims will help accelerate the deployment of complex “AI vision systems” that can closely monitor and even predict human behavior.
Such predictive AI vision systems are not new, but they have been stymied in their rollout. Currently, those building them need to source and anonymize actual CCTV footage to train their systems. This is costly, time-consuming and clearly presents privacy concerns. The Mindtech Global Chameleon platform, however, cleverly avoids all these impediments.
The platform works by allowing its customers to quickly “build unlimited scenes and scenarios using photo-realistic smart 3D models”. This means they can train systems on synthetic humans and places, which, according to Mindtech Global, helps computers “understand and predict human interactions” while avoiding privacy issues and the compromising problems of diversity and bias.
When it comes to how such AI vision systems might be deployed, TechCrunch elucidates:
Consider the following: A kid slips from its parent’s hand at the mall. The synthetic CCTV running inside Mindtech’s scenario is trained thousands of times over how to spot it in real-time and alert staff. Another: a delivery robot meets kids playing in a street and works out how to avoid them. Finally: a passenger on the platform is behaving erratically too close to the rails — the CCTV is trained to automatically spot them and send help.
So far, so helpful. The article continues:
There is of course potential for darker applications, such as spotting petty theft inside supermarkets, or perhaps “optimising” hard-pressed warehouse workers in some dystopian fashion.
It isn’t fair to criticize Mindtech Global for speculative malicious uses of its technology, and it’s important to acknowledge that the platform does seem to address many of the privacy and bias problems that plague AI vision. But it gives rise to a third ethical concern: increased and more intrusive surveillance.
This technology could catalyze the widespread use of cameras that not only monitor and sometimes identify us, but seek to pre-empt our very movements. Though on occasion this might mean helpfully reuniting a parent with a lost child or preventing a suicide, how often will it also cause us to unknowingly come under suspicion — or accusation — of rule breach or criminality?
For example, the Mindtech Global website asserts that “monitoring customers and staff is essential,” indicating that its hawklike AI would be on hand to intensively observe such shifty characters — i.e. hard-working staff and paying customers.
This is a future in which we dispense with trust.
If technology like Chameleon does what it says it can (hasten predictive AI vision ubiquity), then we can expect to see it embedded in cameras across the board. Which means we can prepare to be tracked, monitored and “predicted” much more extensively (and with questionable accuracy). And while we might easily understand the benefits, we might want to ask ourselves if they justify whole populations living as non-consenting lab rats in a never-ending behavioral science experiment, under the constant veil of suspicion.