In the 15th century, Florentine statesman and all-round bigwig Lorenzo de’ Medici (also modestly known as “Lorenzo the Magnificent”) made some pretty outspoken comments on the looks and deportment of the ideal Italian Renaissance beauty. Despite himself being described as “quite strikingly ugly”, Lorenzo was rather specific about what should be considered desirable, basing his high standards on the celebrated noblewoman Simonetta Cattaneo Vespucci. He writes:
“of an attractive and ideal height; the tone of her skin, white but not pale, fresh but not glowing; her demeanor was grave but not proud, sweet and pleasing, without frivolity or fear. Her eyes were lively and her gaze restrained, without trace of pride or meanness; her body was so well proportioned, that among other women she appeared dignified…in walking and dancing…and in all her movements she was elegant and attractive; her hands were the most beautiful that Nature could create. She dressed in those fashions which suited a noble and gentle lady…” (Commento del magnifico Lorenzo de’ Medici sopra alcuni de’ suoi sonetti)
Clearly beauty standards have evolved since Lorenzo’s time — and thankfully we’re probably less concerned about the restraint of our gaze and the beauty of our hands — but this notion of one common beauty ideal for women, dictated from without, unfortunately persists. And while Renaissance women agonized over achieving Simonetta’s bodily proportions and alabaster skin, their 21st-century counterparts are turning to technological and even surgical correction to emulate the new, algorithmically dictated standards for attention-worthy good looks.
In his online Masterclass on the art of writing, renowned journalist Malcolm Gladwell explains the shortcomings of Google when it comes to research and discovery. “The very thing that makes you love Google is why Google is not that useful”, he chirps. To Gladwell, a Google search is but a dead end when a true researcher wants to be led “somewhere new and unexpected”.
In juxtaposition to Google’s search engine stands ye olde library, which Gladwell calls the “physical version of the internet” (sans some of the more sophisticated smut…). In a library — should it be required — guidance is on hand in the form of a librarian, and unlike the internet there is a delightful order to things, which the writer likens to a good conversation. Discovery can be as simple as finding what books surround the book that inspired you…and following the trail. Gladwell elucidates: “The book that’s right next to the book is the book that’s most like it, and then the book that’s right next to that one is a little bit different, and by the time you get ten books away you’re getting into a book that’s in the same general area but even more different.”
There is something altogether more natural and relational about uncovering the new — and the forgotten — in the context of a library or a conversation. Hidden gems lie undisturbed, unlike popularity-ranked internet search results that spew out the obvious and the familiar.
If you’re of a certain generation, you might remember the Tamagotchi, the pocket-sized Japanese “pet simulation game” that became the chief obsession of 90s kids bored of yo-yos and other fleeting trends. The Tamagotchi lived mostly in the grubby hands or lint-filled pockets of its owners but, for social currency, could be paraded before envious or competitive enthusiasts.
Oddly, these oviparous virtual critters weren’t remotely animal-like in their appearance, and could be intolerably demanding at times. Neglect to feed them, clean up after them, or tend to them when sick and — as many of us found out — very soon you’d be left with nothing but a dead LCD blob. But even the best cared-for Tamagotchis had certain obsolescence looming in their futures, once their needlessly complex lifecycle was complete: egg, baby, child, teen, adult, death.
Radiologists assessing the pain experienced by osteoarthritis patients typically use a scale called the Kellgren-Lawrence Grade (KLG). The KLG grades the severity of the disease — and, by extension, the pain it is assumed to cause — based on the presence of certain radiographic features, like missing cartilage or bone damage. But data from the National Institutes of Health revealed a disparity between the level of pain as predicted by the KLG and Black patients’ self-reported experience of pain.
The MIT Technology Review explains: “Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.”
Midway through a podcast, a high-energy commercial chirps out all the advantages of using a particular learning system for languages. They are familiar: Babbel can get you conversing in just three weeks, it teaches you phrases you’ll actually use in the real world, lessons are designed to help you remember.
In February last year, the world baulked as the media reported that a South Korean broadcaster had used virtual reality technology to “reunite” a grieving mother with the 7-year-old daughter she lost in 2016.
As part of a documentary entitled I Met You, Jang Ji-sung was confronted by an animated and lifelike vision of her daughter Na-yeon as she played in a neighborhood park in her favorite dress. It was an emotionally charged scene, with the avatar asking the tearful woman, “Mom, where have you been? Have you been thinking of me?”
“Always”, the mother replied.
Remarkably, the documentary makers saw this scene as “heartwarming”, but many felt that something was badly wrong. Ethicists, like Dr. Blay Whitby from the University of Sussex, cautioned the media: “We just don’t know the psychological effects of being ‘reunited’ with someone in this way.”
It is our human inclination to want to look good. Our desire to impress keeps the fashion industry alive, motivates many of us to work or study hard, and generates billions of dollars from our desperation to look fit and healthy. So, it should come as no surprise that as algorithms hold more and more sway over decision-making and the conferral of status (e.g. via credit or hiring decisions), many of us are keen to put our best foot forward and play into their discernible preferences.
On November 3, two opposing forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction?
At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think.
“The degree to which this diversity of criminal acts may be enhanced by use of AI depends significantly on how embedded they are in a computational environment: robotics is rapidly advancing, but AI is better suited to participate in a bank fraud than a pub brawl. This preference for the digital rather than the physical world is a weak defence though as contemporary society is profoundly dependent on complex computational networks.”
AI-enabled future crime report
The field of AI ethics has received much (very worthy) attention of late. Once an obscure topic relegated to the sidelines of both tech and ethics conversations, the subject is now at the heart of a lively dialogue among the media, politicians, and even the general public. Everyone now has a perspective on how new technologies can harm human lives, and this can only have a preventative effect in the long term.
But whether it’s algorithmic bias, intrusive surveillance technology, or social engineering by coercive online platforms, the current discourse tends to center on the overzealous, questionable or destructive use of new tech, rather than outright criminality. Yet it would be foolish to discount the very real prospect of AI being systematically weaponized for unequivocally criminal purposes.
As AI technology advances and multiplies, so do the methods of would-be attackers and fraudsters. And as our world becomes more networked, the attack surface grows and grows.
In short, it is a very exciting time to be a technically-minded crook.
“GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing.”
Regina Rini, philosopher
In recent years, the AI circus really has come to town and we’ve been treated to a veritable parade of technical aberrations seeking to dazzle us with their human-like intelligence. Many of these sideshows have been “embodied” AI, where the physical form usually functions as a cunning disguise for a clunky, pre-programmed bot. Like the world’s first “AI anchor”, launched by a Chinese TV network and — how could we ever forget — Sophia, Saudi Arabia’s first robotic citizen.
But last month there was a furore around something altogether more serious: a system The Verge called “an invention that could end up defining the decade to come.” Its name is GPT-3, and it could certainly make our future a lot more complicated.
So, what is all the fuss about? And how might this supposed tectonic shift in technological development change the lives of the rest of us?