If you’re of a certain generation, you might remember the Tamagotchi: the Japanese pocket-sized “pet simulation game” that became the chief obsession of ’90s kids bored of yo-yos and other fleeting trends. The Tamagotchi lived mostly in the grubby hands or lint-filled pockets of its owners but, for social currency, could be paraded before envious or competitive enthusiasts.
Oddly, these oviparous virtual critters weren’t remotely animallike in appearance, and could be intolerably demanding at times. Neglect to feed them, clean up after them, or tend to them when sick and — as many of us found out — very soon you’d be left with nothing but a dead LCD blob. But even the best cared-for Tamagotchis had a certain obsolescence looming in their futures, once their needlessly complex lifecycle was complete: egg, baby, child, teen, adult, death.
Radiologists assessing the pain experienced by osteoarthritis patients typically use a scale called the Kellgren-Lawrence Grade (KLG). The KLG estimates pain levels based on the presence of certain radiographic features, such as missing cartilage or bone damage. But data from the National Institutes of Health revealed a disparity between the level of pain as calculated by the KLG and Black patients’ self-reported experience of pain.
The MIT Technology Review explains: “Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.”
Midway through a podcast, a high-energy commercial chirps out all the advantages of using a particular learning system for languages. They are familiar: Babbel can get you conversing in just three weeks, it teaches you phrases you’ll actually use in the real world, lessons are designed to help you remember.
In February last year, the world balked as the media reported that a South Korean broadcaster had used virtual reality technology to “reunite” a grieving mother with the seven-year-old child she lost in 2016.
As part of a documentary entitled I Met You, Jang Ji-sung was confronted by an animated, lifelike vision of her daughter Na-yeon, who appeared playing in a neighborhood park in her favorite dress. It was an emotionally charged scene, with the avatar asking the tearful woman, “Mom, where have you been? Have you been thinking of me?”
“Always,” the mother replied.
Remarkably, the documentary’s makers saw this scene as “heartwarming,” but many felt that something was badly wrong. Ethicists like Dr. Blay Whitby from the University of Sussex cautioned the media: “We just don’t know the psychological effects of being ‘reunited’ with someone in this way.”
It is our human inclination to want to look good. Our desire to impress keeps the fashion industry alive, motivates many of us to work or study hard, and generates billions of dollars from our desperation to look fit and healthy. So it should come as no surprise that, as algorithms hold more and more sway over decision-making and the conferral of status (e.g., via credit or hiring decisions), many of us are keen to put our best foot forward and play into their discernible preferences.
On November 3, two oppositional forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction?
At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think.
“The degree to which this diversity of criminal acts may be enhanced by use of AI depends significantly on how embedded they are in a computational environment: robotics is rapidly advancing, but AI is better suited to participate in a bank fraud than a pub brawl. This preference for the digital rather than the physical world is a weak defence, though, as contemporary society is profoundly dependent on complex computational networks.”
AI-enabled future crime report
The field of AI ethics has received much (very worthy) attention of late. Once an obscure topic relegated to the sidelines of both tech and ethics conversations, the subject is now at the heart of a lively dialogue among the media, politicians, and even the general public. Everyone now has a perspective on how new technologies can harm human lives, and this can only have a preventative effect in the long term.
But whether it’s algorithmic bias, intrusive surveillance technology, or social engineering by coercive online platforms, the current discourse tends to center on the overzealous, questionable or destructive use of new tech, rather than outright criminality. Yet it would be foolish to discount the very real prospect of AI being systematically weaponized for unequivocally criminal purposes.
As AI technologies are refined and multiply, so do the methods of would-be attackers and fraudsters. And as our world becomes more networked, the attack surface grows and grows.
In short, it is a very exciting time to be a technically-minded crook.
“GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing.”
Regina Rini, philosopher
In recent years, the AI circus really has come to town and we’ve been treated to a veritable parade of technical aberrations seeking to dazzle us with their human-like intelligence. Many of these sideshows have been “embodied” AI, where the physical form usually functions as a cunning disguise for a clunky, pre-programmed bot. Like the world’s first “AI anchor”, launched by a Chinese TV network and — how could we ever forget — Sophia, Saudi Arabia’s first robotic citizen.
But last month there was a furore around something altogether more serious: a system The Verge called “an invention that could end up defining the decade to come.” Its name is GPT-3, and it could certainly make our future a lot more complicated.
So, what is all the fuss about? And how might this supposed tectonic shift in technological development change the lives of the rest of us?
The following is a guest post by Erin Green, PhD, a Brussels-based AI ethics and public engagement specialist. For more on the European scene, check out my recent interview with Hill + Knowlton Strategies, “Creating Ethical Rules for AI.”
When it comes to the global AI stage, China and the US consistently grab headlines as their so-called arms race heats up, while countries like Japan and South Korea lead the way in innovation and social receptivity. Europe, though, is taking a slightly different approach – partly by choice, partly by design.
Somewhat independent of these interests, the EU itself is trying to carve out space in terms of regulatory prowess and in bringing coherence to a rather chaotic European AI scene. Think this is a bureaucratic exercise with not much reach or consequence beyond the Berlaymont? Just remember all those GDPR emails that clogged up your inbox sometime around May 25, 2018. The EU has real regulatory reach.
Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard professor of Molecular and Cellular Biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like “three-quarters of a mile,” “half a mile,” and “a quarter of a mile.”
The professor’s response? “I think about three inches.”
Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society which examines the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. The worldwide market for neurotech products – defined as “the application of electronics and engineering to the human nervous system” – is projected to reach as much as $13.3 billion by 2022.