ChatGPT: A Cautionary Tale (With Some Positive Takeaways)

I haven’t posted in a while. In truth, there hasn’t been a lot that’s piqued my interest, and there are now elaborate global mechanisms and a squadron of eager commentators prepped and ready to address the issues I used to point at on this humble blog. In November, I could’ve written something predictable about the impact of ChatGPT, but I felt like I’d already played that tune back in 2020 when I attempted to summarize the intelligent thoughts of some philosophers.

ChatGPT. GPT-3. Potato. Potato.

The most interesting aspects of this kind of AI are yet to come, I don’t doubt that. But I am here to share a cautionary tale that syncs nicely with my ramblings over the last 5 (5??) years. It’s a story about reliance and truth. About the quest for knowledge, and how it almost always involves some level of fumbling around in the dark, but never more so than now.

The Uncanny Valley and the Meaning of Irony

There has been a lot of discussion about how human is too human when it comes to robots, bots, and other types of disembodied AI voices. An interest in this topic prompted a frustrating Google search, which led me to…you guessed it…ChatGPT.

What did we ever do without it? I’m starting to forget.

Continue reading

Insidious “corrective” image filters allow app creators to dictate beauty standards

Portrait thought to be of Simonetta Cattaneo Vespucci by Sandro Botticelli, c. 1480–1485.

In the 15th century, Florentine statesman and all-round bigwig Lorenzo de’ Medici (also modestly known as “Lorenzo the Magnificent”) made some pretty outspoken comments on the looks and deportment of the ideal Italian Renaissance beauty. Despite himself being described as “quite strikingly ugly”, Lorenzo was rather specific about what should be considered desirable, basing his high standards on the celebrated noblewoman Simonetta Cattaneo Vespucci. He writes:

“…of an attractive and ideal height; the tone of her skin, white but not pale, fresh but not glowing; her demeanor was grave but not proud, sweet and pleasing, without frivolity or fear. Her eyes were lively and her gaze restrained, without trace of pride or meanness; her body was so well proportioned, that among other women she appeared dignified…in walking and dancing…and in all her movements she was elegant and attractive; her hands were the most beautiful that Nature could create. She dressed in those fashions which suited a noble and gentle lady…” (Commento del magnifico Lorenzo de’ Medici sopra alcuni de’ suoi sonetti)

Clearly beauty standards have evolved since Lorenzo’s time — and thankfully we’re probably less concerned about the restraint of our gaze and the beauty of our hands — but this notion of one common beauty ideal for women, dictated from without, unfortunately persists. And while Renaissance women agonized over achieving Simonetta’s bodily proportions and alabaster skin, their 21st-century counterparts are turning to technological, and even surgical, correction to emulate the new, algorithmically dictated standards for attention-worthy good looks.

Continue reading

Will Google’s Controversial LaMDA Help or Hinder Internet Discovery?

In his online Masterclass on the art of writing, renowned journalist Malcolm Gladwell explains the shortcomings of Google when it comes to research and discovery. “The very thing that makes you love Google is why Google is not that useful”, he chirps. To Gladwell, a Google search is but a dead end when a true researcher wants to be led “somewhere new and unexpected”.

In juxtaposition to Google’s search engine stands ye olde library, which Gladwell calls the “physical version of the internet” (sans some of the more sophisticated smut…). In a library — should it be required — guidance is on hand in the form of a librarian, and unlike the internet, there is a delightful order to things that the writer likens to a good conversation. Discovery can be as simple as finding what books surround the book that inspired you…and following the trail. Gladwell elucidates: “The book that’s right next to the book is the book that’s most like it, and then the book that’s right next to that one is a little bit different, and by the time you get ten books away you’re getting into a book that’s in the same general area but even more different.”

There is something altogether more natural and relational about uncovering the new — and the forgotten — in the context of a library or a conversation. Hidden gems lie undisturbed, unlike popularity-ranked internet search results that spew out the obvious and the familiar.

Enter LaMDA AI.

Continue reading

Klara and the Sun: Love, Loyalty & Obsolescence

If you’re of a certain generation, you might remember the Tamagotchi: the Japanese pocket-sized “pet simulation game” that became the chief obsession of ’90s kids bored of yo-yos and other fleeting trends. The Tamagotchi lived mostly in the grubby hands or lint-filled pockets of its owners but, for social currency, could be paraded before envious or competitive enthusiasts.

Oddly, these oviparous virtual critters weren’t remotely animal-like in their appearance and could be intolerably demanding at times. Neglect to feed them, clean up after them, or tend to them when sick and — as many of us found out — very soon you’d be left with nothing but a dead LCD blob. But even the best cared-for Tamagotchi(s?) had certain obsolescence looming in their futures, once their needlessly complex lifecycle was complete: egg, baby, child, teen, adult, death.

Continue reading

AI Ethics for Startups – 7 Practical Steps

Radiologists assessing the pain experienced by osteoarthritis patients typically use a scale called the Kellgren-Lawrence Grade (KLG). The KLG assigns a severity grade based on the presence of certain radiographic features, like missing cartilage or bone damage, and that grade is used to estimate how much pain a patient is likely to feel. But data from the National Institutes of Health revealed a disparity between the level of pain predicted by the KLG and Black patients’ self-reported experience of pain.

The MIT Technology Review explains: “Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.”

But why?

Continue reading

Deepfaking the Deceased: Is it Ever Okay?

In February last year, the world baulked as the media reported that a South Korean broadcaster had used virtual reality technology to “reunite” a grieving mother with the 7-year-old child she lost in 2016.

As part of a documentary entitled I Met You, Jang Ji-sung was confronted by an animated and lifelike vision of her daughter Na-yeon playing in a neighborhood park in her favorite dress. It was an emotionally charged scene, with the avatar asking the tearful woman, “Mom, where have you been? Have you been thinking of me?”

“Always”, the mother replied. 

Remarkably, the documentary makers saw this scene as “heartwarming”, but many felt that something was badly wrong. Ethicists like Dr. Blay Whitby from the University of Sussex cautioned the media: “We just don’t know the psychological effects of being ‘reunited’ with someone in this way.”

Indeed, this was uncharted territory.

Continue reading

Playing to the Algorithm: Are We Training the Machines or…?

It is our human inclination to want to look good. Our desire to impress keeps the fashion industry alive; it also motivates many of us to work or study hard, and there are billions of dollars to be made from our desperation to look visibly fit and healthy. So it should come as no surprise that, as algorithms hold more and more sway over decision-making and the conferral of status (e.g. via credit or hiring decisions), many of us are keen to put our best foot forward and play into their discernible preferences.

This is certainly true of those in business, as discovered by the authors of the working paper How to Talk When A Machine is Listening: Corporate Disclosure in the Age of AI. An article posted by the National Bureau of Economic Research describes the study’s findings:

Continue reading

How Do We Solve A Problem Like Election Prediction?

On November 3, two oppositional forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction? 

At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think. 

Continue reading

Intentional Harm: Preparing for an Onslaught of AI-Enabled Crime

“The degree to which this diversity of criminal acts may be enhanced by use of AI depends significantly on how embedded they are in a computational environment: robotics is rapidly advancing, but AI is better suited to participate in a bank fraud than a pub brawl. This preference for the digital rather than the physical world is a weak defence though as contemporary society is profoundly dependent on complex computational networks.”

AI-enabled future crime report

The field of AI ethics has received much (very worthy) attention of late. Once an obscure topic relegated to the sidelines of both tech and ethics conversations, the subject is now at the heart of a lively dialogue among the media, politicians, and even the general public. Everyone has a perspective on how new technologies can harm human lives, and this can only have a preventative effect in the long term.

But whether it’s algorithmic bias, intrusive surveillance technology, or social engineering by coercive online platforms, the current discourse tends to center on the overzealous, questionable or destructive use of new tech, rather than outright criminality. Yet it would be foolish to discount the very real prospect of AI being systematically weaponized for unequivocally criminal purposes. 

As AI technology advances and proliferates, so do the methods of would-be attackers and fraudsters. And as our world becomes more networked, the attack surface grows and grows.

In short, it is a very exciting time to be a technically minded crook.

Continue reading