A lot has been made of surveillance technology in recent years. Our once relatively benign CCTV setup has been given an AI-driven makeover. At the same time, lower production costs have facilitated a kind of “camera creep” evident in the boom in home security cameras, police bodycams and the trend for heightened employee surveillance.
And while surveillance cameras have become more pervasive and (arguably) more sophisticated in what they can identify — like faces or employee inattention or even a distinctive gait — bigger plans are afoot, and a recent investment boost for a small UK start-up called Mindtech Global might give us a clue as to how things will unfold.
In 2000, a group of researchers at Georgia Tech launched a project they called “The Aware Home.” The collective of computer scientists and engineers built a three-story experimental home with the intent of producing an environment that was “capable of knowing information about itself and the whereabouts and activities of its inhabitants.” The team installed a vast network of “context aware sensors” throughout the house and on wearable computers worn by the home’s occupants. The hope was to establish an entirely new domain of knowledge — one that would create efficiencies in home management, improve health and well-being, and provide support for groups like the elderly.
Writing for Aeon last week, Martin Parker, a professor of organization studies at the University of Bristol in the UK, relayed the origins of the word “management”, explaining:
“It is derived from the Italian mano, meaning hand, and its expansion into maneggiare, the activity of handling and training a horse carried out in a maneggio – a riding school. From this form of manual control, the word has expanded into a general activity of training and handling people. It is a word that originates with ideas of control, of a docile or wilful creature that must be subordinated to the instructions of the master.”
Though we might prefer to believe that its meaning has evolved since then to convey something more respectful and collaborative, it is still the case that workplace leaders and managers have mastery over their staff. Promotions, opportunities, hirings and firings — all life-altering events — are subject to their authority.
It is a mighty responsibility, and abuse of managerial power can have devastating consequences.
In Shoshana Zuboff’s 2019 book The Age of Surveillance Capitalism, she recalls the response to the launch of Google Glass in 2012. Zuboff describes public horror, as well as loud protestations from privacy advocates who were deeply concerned that the product’s undetectable recording of people and places threatened to eliminate “a person’s reasonable expectation of privacy and/or anonymity.”
Zuboff describes the product:
Google Glass combined computation, communication, photography, GPS tracking, data retrieval, and audio and video recording capabilities in a wearable format patterned on eyeglasses. The data it gathered — location, audio, video, photos, and other personal information — moved from the device to Google’s servers.
At the time, campaigners warned of a potential chilling effect on the population if Google Glass were to be married with new facial recognition technology, and in 2013 a congressional privacy caucus asked then Google CEO Larry Page for assurances on privacy safeguards for the product.
Eventually, after visceral public rejection, Google parked Glass in 2015 with a short blog post announcing that it would be working on future versions. And although we never saw the relaunch of a follow-up consumer Glass, the product didn’t ride off into the sunset as some had predicted. Instead, Google took the opportunity to regroup and redirect, unwilling to turn its back on the chance of harvesting valuable swathes of what Zuboff terms “behavioral surplus data”, or cede this wearables turf to a rival.
The rise and rise of tech, and the popularity of shows like Altered Carbon, are placing the idea of augmented humanity front and center. So-called “body hacking” is already popular enough to have its own annual convention, and well-respected AI pioneers like Siri co-creator Tom Gruber have been evangelizing about technology that can, and will, be used to help humans achieve superhuman levels of cognitive function — a case Gruber made in a TED Talk last year.
It’s difficult to read, or even talk, about technology at the moment without that word “ethics” creeping in. How will AI products affect users down the line? Can algorithmic decisions factor in the good of society? How might we reduce the number of fatal road collisions? What tools can we employ to prevent or solve all crime?
Now, let’s just make it clear from the off: these are all entirely honorable motives, and their proponents should be lauded. But sometimes even the drive toward an admirable aim – the prevention of bad consequences – can ignore critical tensions that have been vexing thinkers for years.
Even if we agree that the consequences of an act are of real import, there are still other human values that can – and should – compete with them when we’re weighing the best course of action.