Do Our AI Assistants Need To Be Warm And Fuzzy?

Open the tech news on any given day and you’re almost guaranteed to find something about conversational AI or Natural Language Processing (NLP). This is the tech that powers chatbots, virtual assistants and the like as they mimic human interaction. As this blog has noted, complex language models have come on in leaps and bounds recently, and our future as users is becoming clear: we’ll be holding (reasonably) natural conversations with non-human bots on a regular basis, and for a variety of reasons.

The shadows on the cave wall — if not yet the fully realized Platonic form of conversational AI — can already be made out. Want banking tips? Ask Erica. Legal advice? There are bots like April. Want to engage your students? Juji thinks it can help.

You get the picture. It’s going to be everywhere (yes, even more than now) and we have good reason to believe it will be an awful lot better than what we’ve become accustomed to…

An agonizingly unprofitable exchange with the Wish customer service bot

So, assuming that conversational AI will soon be much improved, and far more widespread, how realistically humanlike do we wish these chatty bots to be? Must this software be warm, humorous or kind for us to really embrace it? Over the years, developers have shown themselves to be obsessed with making technology more human, so it’s interesting to learn that various researchers have been trying to get to the bottom of this question.

One notable effort came recently from a group of experimenters out of Huazhong University of Science and Technology in China, City University of Hong Kong, and the EMLYON Business School in France. They ran a study to examine how perceived artificial autonomy influences our use of what they call “IPAs” — intelligent personal assistants. The authors assert that, “it is essential to understand further how the artificial autonomy of IPAs fosters the perception of them as humanlike and influences users’ continuance usage intention toward IPAs.”

Drawing on something called mind perception theory, which considers what causes people to perceive human minds in non-human entities, the researchers propose that the artificial autonomy we experience in our intelligent assistants causes us to infer their ability to do (i.e. have agency) and to feel (i.e. have experience). These in turn correspond to two fundamental social judgments — competence and warmth.

When both competence and warmth are detected in non-human entities, we think of them as more positively humanlike.

As users, we typically judge an AI to be competent when it can complete tasks independently, solve problems for us and help us achieve our goals. We think of it as warm when we observe it to be kind, caring, and friendly — a perception often fostered by anthropomorphism. More specifically, the authors say that we think of assistants as kind and friendly when they appear to show concern by being “constantly ready for users’ commands, actively monitoring users’ needs whenever desired and detecting abnormal and sudden alterations in the conditions in their surroundings.”

The researchers then set out to better understand which feature — warmth or competence — matters more in terms of its influence on us as keen and continued users of smart assistants. Do we value performance more? Or do we really want our AIs to give us a virtual hug (à la Klara and the Sun)?

To do this, the scholars evaluated descriptors used in comments about Xiaomi Classmate, a popular Chinese IPA, to see whether its value was largely described with competence-related terms (e.g. “accuracy”, “multi-functionality”) or warmth-related terms (e.g. “playfulness”, “friendliness”). They backed this up with a thorough questionnaire for IPA users on a popular crowdsourcing platform and some modeling. [Please see the paper for the full methodology.]

In the end, the study found that artificial autonomy itself conveys both competence and warmth, but the former is more significant to users. In other words, we care whether our conversational AI assistants work well much more than we care whether they come across as kind or friendly or thoughtful. If you think this is unsurprising, you should know that in human service arenas — like restaurants — there has been much study and debate over which of these qualities has the bigger impact on our experience. With conversational AI, developers have been at pains to cultivate both.

The research concludes that “IPAs are considered professional personal assistants rather than intimate friends”, emphasizing: “The critical point we found is that competence perception facilitates stronger usage continuance than warmth perception does in IPAs. Thus, service providers who want to promote the continuing usage of IPAs should pay more attention to the features that enhance their competence perception of IPAs than those that enhance their warmth perception.”

Clippy was ostensibly lovely… but also notoriously irritating

So, there you have it. Though we don’t want our assistants to be brutal or offhand, it seems we also don’t need them to be especially cute and cuddly. We learned from our experiences with Clippy and the embodied bot Jibo that sweet is no substitute for useful. As the world comes down with conversational AI-itis, it will be interesting to see how seriously warmth is taken versus competence and — frankly — just getting the job done.
