Of course, you’ve heard this story many, many times before. An older woman looking for love and companionship meets a predator posing as a lonely heart, only to be duped out of thousands of dollars. These cases can be frustrating, leaving us to ask how the victim missed all of the glaring red flags.
In February last year, the world baulked as the media reported that a South Korean broadcaster had used virtual reality technology to “reunite” a grieving mother with the 7-year-old child she lost in 2016.
As part of a documentary entitled I Met You, Jang Ji-sung was confronted by an animated and lifelike vision of her daughter Na-yeon as she played in a neighborhood park in her favorite dress. It was an emotionally charged scene, with the avatar asking the tearful woman, “Mom, where have you been? Have you been thinking of me?”
“Always”, the mother replied.
Remarkably, the documentary’s makers saw this scene as “heartwarming,” but many felt that something was badly wrong. Ethicists, like Dr. Blay Whitby from the University of Sussex, cautioned the media: “We just don’t know the psychological effects of being ‘reunited’ with someone in this way.”
It is our human inclination to want to look good. Our desire to impress keeps the fashion industry alive, motivates many of us to work or study hard, and generates billions of dollars from our desperation to look visibly fit and healthy. So, it should come as no surprise that as algorithms hold more and more sway over decision-making and the conferral of status (e.g. via credit or hiring decisions), many of us are keen to put our best foot forward and play into their discernible preferences.
On November 3, two opposing forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction?
At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think.
In 2000, a group of researchers at Georgia Tech launched a project they called “The Aware Home.” The collective of computer scientists and engineers built a three-story experimental home with the intent of producing an environment that was “capable of knowing information about itself and the whereabouts and activities of its inhabitants.” The team installed a vast network of “context aware sensors” throughout the house and on wearable computers worn by the home’s occupants. The hope was to establish an entirely new domain of knowledge — one that would create efficiencies in home management, improve health and well-being, and provide support for groups like the elderly.
“GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing.”
Regina Rini, philosopher
In recent years, the AI circus really has come to town, and we’ve been treated to a veritable parade of technical aberrations seeking to dazzle us with their human-like intelligence. Many of these sideshows have been “embodied” AI, where the physical form usually functions as a cunning disguise for a clunky, pre-programmed bot — like the world’s first “AI anchor”, launched by a Chinese TV network, and (how could we ever forget) Sophia, Saudi Arabia’s first robotic citizen.
But last month there was a furore around something altogether more serious: a system The Verge called “an invention that could end up defining the decade to come.” Its name is GPT-3, and it could certainly make our future a lot more complicated.
So, what is all the fuss about? And how might this supposed tectonic shift in technological development change the lives of the rest of us?
With COVID-19 lockdown restrictions issued across the globe, millions of us have been forced to hunker down “in place”, or severely limit our movements outside of the home. On learning this, most will have reached reflexively for the nearest device — if we didn’t learn it from that device to begin with. Yet mostly we are locked in a love-hate relationship with the presiding artefacts of our time, and often we resent tech’s power over us.
Nevertheless, new circumstances can breed new attitudes. After spending the last few years debating whether or not technology will destroy us, we may find that March 2020 is the month that at least partially redeems our faith in technology, by demonstrating how fortunate we are to have some incredibly sophisticated tools in our homes.
For many, they are currently the sole portal to the outside world.
The dust has now settled after the madness of the world’s biggest annual tech fest, the Consumer Electronics Show (CES) in Las Vegas, NV. Since the show’s kick-off in early January, a parade of weird and wonderful new devices has dominated tech news headlines; from lab-produced pork to RollBot, Charmin’s robotic savior for those “stranded on the commode without a roll.”
The event itself really isn’t for the faint-hearted. It’s easy to feel overwhelmed by the sheer volume of companies vying to embed their (often ridiculous) tech gadgetry into our lives – both at work and at play. There is, of course, lots of money to be made from finding that elusive sweet spot: the point at which problem-solving, convenience, and affordability converge.
The following is a guest post by Erin Green, PhD, a Brussels-based AI ethics and public engagement specialist. For more on the European scene, check out my recent interview with Hill + Knowlton Strategies, “Creating Ethical Rules for AI.”
When it comes to the global AI stage, China and the US consistently grab headlines as their so-called arms race heats up, while countries like Japan and South Korea lead the way in innovation and social receptivity. Europe, though, is taking a slightly different approach – partly by choice, partly by design.
Somewhat independent of these interests, the EU itself is trying to carve out space in terms of regulatory prowess and in bringing coherence to a rather chaotic European AI scene. Think this is a bureaucratic exercise with not much reach or consequence beyond the Berlaymont? Just remember all those GDPR emails that clogged up your inbox sometime around May 25, 2018. The EU has real regulatory reach.