Making AI in our own image is a mistake

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


When the Chinese news agency Xinhua demonstrated an AI anchorperson, the reaction of the internet was predictably voluble. Was this a gimmick or a sign of things to come? Could the Chinese government literally be turning to artificial puppets to control the editorial content of the country’s news channels? Are we careening towards a future where humans and humanoid bots are indistinguishable?

Released in early November, the footage (above) shows a rather dour—but fairly convincing—young male newsreader staring straight into the camera, his lips not quite syncing with the robotic-sounding audio that acts as his voice.

It’s good, but it’s not that good.

An article in The Verge explains the basics of the technology, unpacking how Chinese engineers have produced this synthetic character:

“It seems that Xinhua has used footage of human anchors as a base layer, and then animated parts of the mouth and face to turn the speaker into a virtual puppet. By combining this with a synthesized voice, Xinhua can program the digital anchors to read the news, far quicker than using traditional CGI.”

It’s the sort of AI alchemy we’ve become accustomed to with the proliferation of so-called “deepfakes,” and yet the motivation behind it is not obvious. As reporter James Vincent notes, “If Xinhua wants someone to read the news without questioning it they don’t need AI to make that happen.”

The AI news anchor is not an isolated incident of technology being used to replicate humanity. These days we see a similar phenomenon almost everywhere we look: from customer service bots like Erica, to the Google Duplex debacle, to robotic citizens like Sophia of Arabia. We are perpetually looking to craft AI in our own image—but why? The one thing we have in spades is humans. Do we really need to create phony ones too?

Anyone would think we were missing the point.

Artificial intelligence is great. It can complete some tasks at a speed and level of detail no human could ever hope to achieve. It can sweep through enormous data banks—more vast than our brains can even comprehend—in minutes or even seconds, to identify patterns or trends that have been concealed until now. When these patterns pertain to human health, behavior, or other relevant predictions, the findings can be game-changing.

Moreover, technology can come to know us. To offer suggestions based on our likes, dislikes, and needs. It can learn and it can evolve. It can automate this behavior, too, taking arduous tasks off our hands. It can do all this, and so many, many other things: things requiring a deeper understanding of images, spaces, distances, temperature, climate, cells, and more. There is not enough room to list them here.

In many cases, AI is superhuman in its very special knowledge of a very specific task. Or perhaps superhuman is the wrong word. It is simply not human. Its skills are not like ours, and that’s fine because we similarly have skills that artificially intelligent systems cannot hope to replicate.

We have true semantic understanding, soft skills, intuition, real empathy. In short, we have humanity. So why is it we refuse to let our technology complement us? Why do we continue to insist that it reproduce us? Why do we need Sophias, and news anchors, and Magic Leap’s Mica?

A photo of Magic Leap's "Mica," a digital avatar that copies human behavior. (Image credit: Magic Leap)

As authenticity becomes more important in a world populated with the fake and the deceptive, shouldn’t our product designers think seriously about emphasizing the differences between the real and the artificial? Particularly when so many of these humanoids seem unnecessary, lacking the basic components—the essence—of humanity.

If we turn our focus away from the narcissistic work of recreating ourselves, who knows what we might achieve. It is time to let AI be distinctively AI in the way that we are distinctively human.
