The Problem with Next-Generation Virtual Assistants


It may not seem like it, but there is quite an arms race going on when it comes to interactive AI and virtual assistants. Every tech company wants its offering to be more intuitive…more human. Yet although they’re improving, voice-activated assistants like Alexa and Siri are still pretty clunky, and often underwhelming in their interactions.

This obviously isn’t great for developers who want to see these assistants enter the workplace and supercharge sales.

Many would say that we should just accept this interactive inadequacy. After all, humans have yet to develop a computational interface capable of passing the Turing Test, first proposed by Alan Turing back in 1950. The fact is, in spite of our rising expectations, computers still aren’t very good at pretending to be human. And that’s because they simply do not have the intellectual complexity to mimic conversation and all of its nuances.

Still, we forge ahead and – sending chills down competitors’ spines – last week Chinese tech giant Huawei announced plans to develop an “emotionally interactive” virtual pal. According to reports, its proposed AI would detect a user’s emotions and moods, then use those cues to tailor a more personalized service.

Data points for our innermost feelings will apparently come from facial expressions, vocal tone and our general behavior. In fact, this kind of emotional inference works in much the same way as the controversial emotion-tracking HR systems currently being introduced into the recruitment process. The difference is that those driving this idea are keen for these personalized AIs to become our companion-bots too…
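
To make this concrete, here is a minimal, purely illustrative sketch of how such a pipeline might fuse per-channel emotion scores into a single mood estimate that then shapes the assistant’s tone. The channel names, weights and labels are assumptions for the sake of example, not anything Huawei has actually described.

# Illustrative sketch only: fuse hypothetical per-channel emotion scores
# (face, voice, behavior) into one mood estimate and pick a response style.
from typing import Dict

# Assumed channel weights; a real system would learn these from data.
CHANNEL_WEIGHTS = {"face": 0.5, "voice": 0.3, "behavior": 0.2}

def fuse_emotion_scores(scores: Dict[str, Dict[str, float]]) -> Dict[str, float]:
    """Combine per-channel scores (label -> probability) into one weighted estimate."""
    fused: Dict[str, float] = {}
    for channel, weight in CHANNEL_WEIGHTS.items():
        for label, prob in scores.get(channel, {}).items():
            fused[label] = fused.get(label, 0.0) + weight * prob
    return fused

def choose_response_style(fused: Dict[str, float]) -> str:
    """Map the dominant mood to a made-up response style."""
    mood = max(fused, key=fused.get)
    return {"frustrated": "apologetic", "happy": "upbeat", "neutral": "plain"}.get(mood, "plain")

# Example: the assistant sees a frown and hears an irritated tone.
observation = {
    "face": {"frustrated": 0.7, "neutral": 0.3},
    "voice": {"frustrated": 0.6, "happy": 0.1, "neutral": 0.3},
    "behavior": {"neutral": 1.0},
}
print(choose_response_style(fuse_emotion_scores(observation)))  # -> "apologetic"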

So, with virtual assistants multiplying in our homes, and soon our offices, emotion-tracking like this could well become a constant presence in the very near future. But is that okay?

It might be fine

As with all new and emerging tech, there might be no hideous fallout awaiting us. The AI might track our emotions accurately, cater to them, and improve our lives. That’s the dream.

But is it?

How healthy is it to have an emotion-fueled relationship with a piece of software? To interact with it as though it were human, and to treat it as though it doesn’t simply know, but actually understands. If, as Huawei hopes, we turn these bots into confidants, then whoever trains and maintains their programs will carry an extended responsibility to customers, one that bears directly on their emotional wellbeing.

As the line between technological artifact and interactive companion blurs, so does the relationship between the manufacturer and customer. Faulty systems or poorly trained algorithms could result in real harms that go well beyond broken plastic or poor battery life.

Data, data, data

If a system is tracking and adapting to my emotional response data, then it is also harvesting and manipulating it. Presumably, there is incredible value attached to understanding the full range of human emotion as and when it occurs in reaction to stimuli? Short of reading our brainwaves, this sort of information would undoubtedly be manna from heaven for advertisers, governments, campaign groups and others.

And data like this is not only useful with regard to my individual preferences and whimsies. Like all data, it can – and would – be aggregated and used to infer things about the emotions and moods of whole demographic sets. How do middle-class women react to X? African American males tend to react with skepticism to Y. Small children (who will unavoidably be captured via interactions in the home) cry when exposed to Z.
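
To see how easily that aggregation could happen, here is a small, entirely hypothetical sketch: a handful of logged emotion readings grouped by demographic segment to produce exactly the kind of generalization sketched above. Every field and value here is invented for illustration.

# Hypothetical illustration: aggregating logged emotion readings by demographic segment.
from collections import defaultdict

# Invented sample of per-interaction logs (segment, stimulus, detected emotion).
logs = [
    {"segment": "adults_30_45", "stimulus": "ad_X", "emotion": "interest"},
    {"segment": "adults_30_45", "stimulus": "ad_X", "emotion": "skepticism"},
    {"segment": "adults_30_45", "stimulus": "ad_X", "emotion": "skepticism"},
    {"segment": "children_under_10", "stimulus": "ad_X", "emotion": "distress"},
    {"segment": "children_under_10", "stimulus": "ad_X", "emotion": "distress"},
]

# Count reactions per (segment, stimulus) pair.
counts = defaultdict(lambda: defaultdict(int))
for entry in logs:
    counts[(entry["segment"], entry["stimulus"])][entry["emotion"]] += 1

# Report the dominant reaction for each group: the "demographic inference".
for (segment, stimulus), reactions in counts.items():
    dominant = max(reactions, key=reactions.get)
    share = reactions[dominant] / sum(reactions.values())
    print(f"{segment} reacted to {stimulus} mostly with {dominant} ({share:.0%})")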

Inferences like these could be useful and positive. They may also be harmful generalizations. Again, we will be handing over a lot of power.

The digital divide

If systems are going to learn to “read” emotions, then that ability will ultimately be trained by their many users. This means the framework of emotional expression the computer can understand will inevitably be dictated by those who have access to that artificial intelligence to begin with.

Such people tend to be in the top global percentiles when it comes to health, wealth and quality of life. That’s not to say there’s anything wrong with these people teaching the machines they use, but simply to warn that as the technology trickles out to other nations, cultures and places, there is a chance it could end up transposing the morals and values of its early adopters onto those societies.

This problem is not unique to emotion-driven virtual assistants, but the introduction of emotion – as opposed to mere instruction – does raise the stakes in terms of exactly what roles these technologies will take on down the line. How can developers even account for smaller minorities without a critical mass of examples on which to train their models? What types of behavior will be lost or simply misread in the haze?
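
One way to see the problem is with a toy example of skewed training data: a model that simply learns the majority group’s pattern looks accurate overall while misreading the underrepresented group entirely. Everything below is invented for illustration.

# Toy illustration of skewed training data: a "model" that learns only the
# majority group's pattern looks fine on average but fails the minority group.

# Invented dataset of (group, what a smile actually signals), skewed towards group A.
data = [("group_A", "smile_means_happy")] * 95 + [("group_B", "smile_means_polite")] * 5

# A naive model trained on this data just predicts the most common mapping.
prediction = "smile_means_happy"

overall_accuracy = sum(true == prediction for _, true in data) / len(data)
group_b = [true for group, true in data if group == "group_B"]
group_b_accuracy = sum(true == prediction for true in group_b) / len(group_b)

print(f"Overall accuracy: {overall_accuracy:.0%}")      # 95%: looks acceptable
print(f"Accuracy for group_B: {group_b_accuracy:.0%}")  # 0%: entirely misread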

We could be walking towards a kind of homogeneity that suits tech companies, but ultimately compromises the richness of our humanity. Let’s not wait for the horses to bolt before we attempt to close any important stable doors.
