It’s difficult to read, or even talk, about technology at the moment without that word “ethics” creeping in. How will AI products affect users down the line? Can algorithmic decisions factor in the good of society? How might we reduce the number of fatal road collisions? What tools can we employ to prevent or solve all crime?
Now, let’s just make it clear from the off: these are all entirely honorable motives, and their proponents should be lauded. But sometimes even the drive toward an admirable aim – the prevention of bad consequences – can ignore critical tensions that have been vexing thinkers for years.
Even if we agree that the consequences of an act are of real import, there are still other human values that can – and should – compete with them when we’re weighing up the best course of action.
Let me give an example. Although we would wish to avoid hurting a friend, most of us would set this aside if we discovered that their spouse was being unfaithful. Here, we would be valuing something like “honesty” or “dignity” above their hurt feelings.
In another (rather more harrowing) illustration, it has been observed that a great many of us would refuse to smother a baby, even if allowing it to cry out would reveal the hiding place of innocent civilians to a bloodthirsty militia. Here, we would be privileging the intrinsic value of the child by refusing to use them as a “means to an end.”
In brief: there are times when a universal principle outweighs the foreseeable consequences in our ethical deliberations.
All that said – for the most part – principles and consequentialist considerations rub along well enough. And it isn’t difficult to see how values like “honesty” and “respect” evolved from our weighing the consequences of dishonesty and disrespect. Nevertheless, there are occasions when they do clash – and we should always be alive to these.
One of those battlegrounds is surveillance.
Artificial intelligence is bringing a new intensity to surveillance as we know it. For the first time, it is turning passive security cameras into dynamic, crime-solving machines. To be clear, it is entirely possible that we’re fast approaching a future in which all public crime could be solved – if only we assent to universal surveillance.
The technology already exists.
The specific development concerns the way we search camera footage for evidence of a crime. Previously, this process was dependent upon the sensory abilities of human beings who would be tasked with surveying as many hours of film as they were able. Obviously, this method has its limitations – our time, our ability to concentrate, our powers of observation (to name but a few). But now the existing, imperfect system is being revolutionized by AI that can skim through hours of surveillance footage instantly, allowing law enforcement to swiftly establish the facts of any crime that occurs within the range of a camera.
Products like IC Realtime have been described as “Google for CCTV”, and The Verge recently described a demo in which the technology was asked to retrieve video frames of a man wearing red, a “UPS van”, and “police cars” from around 40 cameras placed around an industrial park. The results were incredible.
The company’s CEO commented further on the AI’s potential: “Let’s say there’s a robbery and you don’t really know what happened, but there was a Jeep Wrangler speeding east afterward. So we go in, we search for ‘Jeep Wrangler,’ and there it is.” As The Verge describes it: “On-screen, clips begin to populate the feed, showing different Jeep Wranglers gliding past.”
Incredibly, such systems can run on footage from pretty much any camera, and don’t require the internet to work. And, James Vincent writes, they are getting better all the time: “In the same way that machine learning has made swift gains in its ability to identify objects, the skill of analyzing scenes, activities, and movements is expected to rapidly improve.”
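To make the idea concrete, here is a minimal sketch of the general technique: run an off-the-shelf object detector over sampled frames and return the timestamps at which a queried object appears. It assumes the open-source `ultralytics` YOLO model and OpenCV, plus a hypothetical clip called parking_lot.mp4 – it is emphatically not IC Realtime’s proprietary system, just an illustration of how “Google for CCTV” search can work in principle.

```python
# Toy "search this footage for an object" sketch using an off-the-shelf
# detector. Illustrative only -- not any vendor's actual system.
import cv2                    # pip install opencv-python
from ultralytics import YOLO  # pip install ultralytics

def search_footage(video_path: str, query_label: str, sample_every_n: int = 30):
    """Return (timestamp_seconds, confidence) for frames containing the label."""
    model = YOLO("yolov8n.pt")           # small pretrained COCO detector
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    hits, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every_n == 0:   # skim: don't score every frame
            result = model(frame, verbose=False)[0]
            for box in result.boxes:
                if result.names[int(box.cls)] == query_label:
                    hits.append((frame_idx / fps, float(box.conf)))
        frame_idx += 1
    cap.release()
    return hits

# COCO has no "Jeep Wrangler" class, so a real product would need a
# finer-grained model; "truck" is the closest stand-in here.
print(search_footage("parking_lot.mp4", "truck"))
```

Even this toy version hints at why the approach scales: sampling one frame per second turns hours of footage into a stream of cheap detection queries that a machine can churn through far faster than any human reviewer.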
He also anticipates a future in which law enforcement can use mugshots and facial recognition to track down perpetrators. In addition, manufacturers are looking at training the system to recognize telltale behaviors and anticipate crime before it occurs…
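The mugshot scenario is just as easy to prototype. The sketch below uses the open-source face_recognition library to compare a known photograph against faces found in a single camera frame; the file names are hypothetical, and real forensic systems are of course far more sophisticated than this.

```python
# Toy mugshot-matching sketch with the open-source face_recognition library
# (pip install face_recognition). Purely illustrative; file names are made up.
import face_recognition

# Encode the known face from a (hypothetical) mugshot photograph.
mugshot = face_recognition.load_image_file("mugshot.jpg")
mugshot_encoding = face_recognition.face_encodings(mugshot)[0]

# Compare it against every face detected in one (hypothetical) camera frame.
frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces(
        [mugshot_encoding], encoding, tolerance=0.6
    )[0]
    if match:
        print("Possible match found in frame")
```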
————————-
Okay, so you get it. Clearing up doubts about the effectiveness of surveillance cameras could lead to their ubiquity, and the introduction of penetrating, widespread surveillance. But what’s so bad about that? After all, we’d only search the cameras post facto, and any move towards the obsolescence of public crime is surely worth wanting?
Indeed, it is. Imagine a world in which all crimes were either entirely deterred or immediately solved. A world without muggings, carjackings, rapes, riots, late-night assaults, stabbings, shootings, thefts, and burglaries. All of those vicious, physical crimes that frighten us the most. Surely there is no principle that could override the pursuit of a better society, free of these hideous things happening in public places?
Well, many would say that – actually – there is. And it is a familiar one: our right to privacy. Linked to human dignity and ideas of liberty, it is a widely held view that we should be able to go about our daily lives without being tracked and monitored by external parties. Indeed, privacy is not a niche interest, but a fundamental good enshrined in the legislation of over 150 countries. That is no accident; it reflects how deeply privacy matters to who we are as people.
And herein lies the tension. If we protect privacy, it could be at the cost of those who fall victim to crimes that would otherwise have been prevented, and of letting criminals escape punishment they would otherwise have received. If we introduce universal surveillance, then we compromise the privacy of individuals – something widely held to play an important role in our intrinsically valuable human existences.
These are two valid ethical perspectives, and glibly favoring one over the other is perilous. Examples like this go to prove that even with the very best intentions, we can do serious harm.
It would be easy at this point to bat the dilemma over to the tech industry. To implore them to find a way to “build in” features to protect our privacy. But this would be wrong, and counterproductive. A company producing state-of-the-art surveillance tech has every right to optimize the ways in which it monitors, and to improve its powers of identification. It is not up to them to protect us from its use.
As Google CEO Sundar Pichai commented at Davos in reference to hate speech, it is up to society to dictate reasonable parameters here, not tech firms. Like privacy and crime surveillance, free speech and hate-speech censorship have a complex relationship in which indulging too heavily in one can badly impede the other. The stakes are high, and that is why the balance should be determined not by them, but by us.
At the moment, we have time. Security cameras do not yet have every parking lot, school, and sidewalk within range. Nor do most cameras have adequate angles or sufficiently high-resolution footage. Current systems also struggle with crowds.
Nevertheless, it is critical that as companies develop this technology, we start conversations about what constitutes its reasonable use. Which scenarios should promote safety over privacy, and vice versa?
And, of course, the opposition of principles and good consequences is similarly relevant to swathes of other new technologies – not just to surveillance and social media. So, when we are evaluating the latest AI widget for its ethical permissibility, we must remember that it is not enough simply to reflect on the consequences of its use. A true stress test must also seek to identify the relevant values and principles (be they specific to a community or geography, or more general), and make every effort to uphold them.