Back in 2016, James Vlahos spent many months building the ambitious piece of software he ended up calling Dadbot. It was a heartfelt attempt to preserve the memories of his then dying father by assimilating them into a chatbot that he and his family could converse with in their grief, and in the years to come. A journalist by trade, James wrote the full story of this journey in WIRED, and if you don’t shed a tear while reading it you are *officially* dead inside.
James now helps others memorialize themselves and loved ones in a similar fashion with his business Hereafter, which is part of a wider tech-driven trend for conserving legacy (see Vita and Storyworth).
However, beyond these clear and personal reasons for recording and encoding human lives, the WSJ reported last week that there is a parallel movement to use AI and digital technologies to somehow prolong the lives of the subjects in question: “…not only as static replicas for the benefit of their loved ones but as evolving digital entities that may steer companies or influence world events.”
Open the tech news on any given day and you’re almost guaranteed to find something about conversational AI or Natural Language Processing (NLP). This is the tech that powers chatbots, virtual assistants and the like as they mimic human interaction. As this blog has noted, complex language models have come on leaps and bounds recently, and our future as users is becoming clear: we’ll be holding (reasonably) natural conversations with non-human bots on a regular basis, and for a variety of reasons.
The shadows on the cave wall — if not yet the fully realized Platonic form of conversational AI — can already be made out. Want banking tips? Ask Erica. Legal advice? There are bots like April. Want to engage your students? Juji thinks it can help.
If you’re of a certain generation, you might remember the Tamagotchi, the Japanese pocket-sized “pet simulation game” that became the chief obsession of 90s kids bored of yo-yos and other fleeting trends. The Tamagotchi lived mostly in the grubby hands or lint-filled pockets of its owners but, for social currency, could be paraded before envious or competitive enthusiasts.
Oddly, these oviparous virtual critters weren’t remotely animal-like in their appearance, and could be intolerably demanding at times. Neglect to feed them, clean up after them, or tend to them when sick and — as many of us found out — very soon you’d be left with nothing but a dead LCD blob. But even the best cared-for Tamagotchis had certain obsolescence looming in their futures, once their needlessly complex lifecycle was complete: egg, baby, child, teen, adult, death.
Cathy O’Neil’s now infamous book, Weapons of Math Destruction, talks about the pernicious feedback loop that can result from contentious “predictive policing” AI. She warns that the models at the heart of this technology can sometimes reflect damaging historical biases learned from police records that are used as training data.
For example, it is perfectly possible for a neighborhood to have a higher number of recorded arrests due to past aggressive or racist policing policies, rather than a particularly high incidence of crime. But the unthinking algorithm doesn’t recognize this untold story and will blindly forge ahead, predicting that the future will mirror the past and recommending the deployment of more police to these “hotspot” areas.
Naturally, the police then make more arrests on these sites, and the net result is that the algorithm receives data that makes its association grow even stronger.
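To make the loop concrete, here’s a minimal toy simulation of the dynamic O’Neil describes. Everything in it is an illustrative assumption, not real data or any vendor’s actual model: two neighborhoods have the *same* underlying crime rate, but one starts with more recorded arrests, and patrols are allocated in proportion to past arrests.

```python
import random

def simulate(rounds=10, seed=0):
    """Toy feedback loop: patrols follow past arrests, arrests follow patrols."""
    rng = random.Random(seed)
    # Identical true crime rates in both neighborhoods (the "untold story").
    true_crime_rate = {"A": 0.1, "B": 0.1}
    # But A starts with more recorded arrests, e.g. from historical over-policing.
    recorded_arrests = {"A": 50, "B": 10}

    for _ in range(rounds):
        total = sum(recorded_arrests.values())
        for hood in recorded_arrests:
            # The model sends patrols in proportion to past arrest counts...
            patrols = int(100 * recorded_arrests[hood] / total)
            # ...and more patrols mean more arrests observed, at the same rate.
            new_arrests = sum(
                1 for _ in range(patrols)
                if rng.random() < true_crime_rate[hood]
            )
            recorded_arrests[hood] += new_arrests
    return recorded_arrests

print(simulate())  # A's recorded arrests keep outpacing B's, despite equal crime
```

Run it and neighborhood A’s arrest count pulls ever further ahead of B’s, even though crime is identical in both: the algorithm’s prediction manufactures its own confirmation.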
We’ve all seen the stories and allegations of Russian bots manipulating the Trump-Clinton US election and, most recently, the FCC debate on net neutrality. Yet far from such high-stakes arenas, there’s good reason to believe these automated pests are also contaminating data used by firms and governments to understand who we (the humans) are, as well as what we like and need with regard to a broad range of things…