On November 3, two oppositional forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction?
At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think.
“The degree to which this diversity of criminal acts may be enhanced by use of AI depends significantly on how embedded they are in a computational environment: robotics is rapidly advancing, but AI is better suited to participate in a bank fraud than a pub brawl. This preference for the digital rather than the physical world is a weak defence though, as contemporary society is profoundly dependent on complex computational networks.”
AI-enabled future crime report
The field of AI ethics has received much (very worthy) attention of late. Once an obscure topic relegated to the sidelines of both tech and ethics conversations, the subject is now at the heart of a lively dialogue among the media, politicians, and even the general public. Everyone now has a perspective on how new technologies can harm human lives, and this can only have a preventative effect in the long term.
But whether it’s algorithmic bias, intrusive surveillance technology, or social engineering by coercive online platforms, the current discourse tends to center on the overzealous, questionable or destructive use of new tech, rather than outright criminality. Yet it would be foolish to discount the very real prospect of AI being systematically weaponized for unequivocally criminal purposes.
As AI technology grows more refined and more widespread, so do the methods of would-be attackers and fraudsters. And as our world becomes more networked, the attack surface grows and grows.
In short, it is a very exciting time to be a technically-minded crook.
In 2000, a group of researchers at Georgia Tech launched a project they called “The Aware Home.” The collective of computer scientists and engineers built a three-story experimental home with the intent of producing an environment that was “capable of knowing information about itself and the whereabouts and activities of its inhabitants.” The team installed a vast network of “context aware sensors” throughout the house and on wearable computers carried by the home’s occupants. The hope was to establish an entirely new domain of knowledge — one that would create efficiencies in home management, improve health and well-being, and provide support for groups like the elderly.
“GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing.”
Regina Rini, Philosopher
In recent years, the AI circus really has come to town and we’ve been treated to a veritable parade of technical aberrations seeking to dazzle us with their human-like intelligence. Many of these sideshows have been “embodied” AI, where the physical form usually functions as a cunning disguise for a clunky, pre-programmed bot. Like the world’s first “AI anchor”, launched by a Chinese TV network and — how could we ever forget — Sophia, Saudi Arabia’s first robotic citizen.
But last month there was a furore around something altogether more serious. A system The Verge called “an invention that could end up defining the decade to come.” Its name is GPT-3, and it could certainly make our future a lot more complicated.
So, what is all the fuss about? And how might this supposed tectonic shift in technological development change the lives of the rest of us?
Writing for Aeon last week, Martin Parker, a professor of organization studies at the University of Bristol in the UK, relayed the origins of the word “management”, explaining:
“It is derived from the Italian mano, meaning hand, and its expansion into maneggiare, the activity of handling and training a horse carried out in a maneggio – a riding school. From this form of manual control, the word has expanded into a general activity of training and handling people. It is a word that originates with ideas of control, of a docile or wilful creature that must be subordinated to the instructions of the master.”
Though we might prefer to believe that its meaning has evolved since then to convey something more respectful and collaborative, it is still the case that workplace leaders and managers have mastery over their staff. Promotions, opportunities, hirings and firings — all life-altering events — are subject to their authority.
It is a mighty responsibility, and abuse of managerial power can have devastating consequences.
In Shoshana Zuboff’s 2019 book The Age of Surveillance Capitalism, she recalls the response to the launch of Google Glass in 2012. Zuboff describes public horror, as well as loud protestations from privacy advocates who were deeply concerned that the product’s undetectable recording of people and places threatened to eliminate “a person’s reasonable expectation of privacy and/or anonymity.”
Zuboff describes the product:
Google Glass combined computation, communication, photography, GPS tracking, data retrieval, and audio and video recording capabilities in a wearable format patterned on eyeglasses. The data it gathered — location, audio, video, photos, and other personal information — moved from the device to Google’s servers.
At the time, campaigners warned of a potential chilling effect on the population if Google Glass were to be married with new facial recognition technology, and in 2013 a congressional privacy caucus asked then Google CEO Larry Page for assurances on privacy safeguards for the product.
Eventually, after visceral public rejection, Google parked Glass in 2015 with a short blog post announcing that they would be working on future versions. And although we never saw the relaunch of a follow-up consumer Glass, the product didn’t disappear into the sunset as some had predicted. Instead, Google took the opportunity to regroup and redirect, unwilling to turn its back on the chance of harvesting valuable swathes of what Zuboff terms “behavioral surplus data”, or cede this wearables turf to a rival.
With COVID-19 lockdown restrictions issued across the globe, millions of us have been forced to hunker down “in place”, or severely limit our movements outside of the home. On learning this, most will have reached reflexively for the nearest device — if we didn’t learn it from that device to begin with. Yet mostly we are locked in a love-hate relationship with the presiding artefacts of our time, and often we resent tech’s power over us.
Nevertheless, new circumstances can breed new attitudes. We have spent the last few years debating whether or not technology will destroy us, but March 2020 could be the month that at least partially redeems our faith in it by demonstrating how fortunate we are to have some incredibly sophisticated tools in our homes.
For many, they are currently the sole portal to the outside world.
The dust has now settled after the madness of the world’s biggest annual tech fest, the Consumer Electronics Show (CES) in Las Vegas, NV. Since the show’s kick-off in early January, a parade of weird and wonderful new devices has dominated tech news and bylines: from lab-produced pork to RollBot, Charmin’s robotic savior for those “stranded on the commode without a roll.”
The event itself really isn’t for the faint-hearted. It’s easy to feel overwhelmed by the sheer volume of companies vying to embed their (often ridiculous) tech gadgetry into our lives – both at work and at play. There is, of course, lots of money to be made from finding that elusive sweet spot; the point at which problem-solving, convenience, and affordability converge.
“If you’ve got something that is independent of your mind, which has causal powers, which you can perceive in all these ways, to me you’re a long way toward being real”, the philosopher David Chalmers recently told Prashanth Ramakrishnan in an interview for the New York Times. Chalmers invoked remarks by fellow Australian philosopher Samuel Alexander, who said that “To be real is to have causal powers”, and science fiction writer Philip K. Dick, who said that “a real thing is something that doesn’t go away when you stop believing in it.”
Professor Chalmers’ comments were made in reference to the new and increasingly sophisticated world of virtual reality; something he believes has the status of a “subreality” (or similar) within our known physical reality. A place that still exists independent of our imaginations, where actions have consequences.
Chalmers draws parallels with our trusted physical reality, which is already so illusory on many levels. After all, the brain has no direct contact with the world and is reliant upon the mediation of our senses. As the mathematician-turned-philosopher points out, science tells us that vivid experiences like color are “just a bunch of wavelengths arising from the physical reflectance properties of objects that produce a certain kind of experience in us.”