Peer pressure: An unintended consequence of AI

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

Peer pressure

Last winter, Kylie Jenner tweeted that she had stopped using Snapchat, and almost immediately the company’s shares dropped six percent, losing $1.3 billion in value. Her seemingly innocent comments led investors to believe that the 20-year-old’s 25 million followers would do the same, and that the knock-on effect would seal the social media app’s fate as a “has been” among its key demographic of younger women.

This astonishing event demonstrates in technicolor how the notion of influence is evolving, latterly taking on a new significance. In the age of technology, though influence is still associated with power, it is no longer the limited reserve of “the Powerful”—i.e. those in recognized positions of authority, like bankers, lawyers, or politicians.

Kylie Jenner does not hold sway in the corridors of power in Washington, D.C. Nor can she walk into the c-suite of Walmart and demand a rebrand. But through the power of technology and its media platforms, she is able to influence the actions and behaviors of millions of young people.

In different but comparable ways technology enables this kind of behavioral influence all around us. It makes us get up and go for a run. It convinces us to buy things. It sits on our desk as an object of constant temptation, fluttering its eyelids and inviting us to check our messages, emails, notifications, and feeds as many as 2,617 times each day.

Though there is little doubt that technology is now the greatest conduit for influence—and has been for some time—we are only just beginning to understand its implications.

Peer pressure from AI

Now, if we—the grown-ups, many of whom lived for years without these distractions—cannot defy the influence of technology, what hope is there for our techno-children? Those who have been raised amongst iGadgets of all types? For whom, to reference this special little video, a magazine is but a broken tablet device?

The answer is probably “very little,” and this is keenly illustrated by an experiment showing how robots and AI can exert persuasive social influence on kids. Researchers found that children will succumb to peer pressure even if their instruction comes from an artificial (read: robot) peer. Small robots were able to convince groups of children to give incorrect responses in a reinterpretation of psychologist Solomon Asch’s famous conformity experiment. In the original, Asch told actors playing participants to deliberately answer falsely on a vision test, thus influencing the real experimental subject, who then did the same despite the correct response being obvious. It remains one of many methods psychologists use to test for conformity.

The implications of this are important. If robots can give children instruction, then we need to examine in much more detail what instructions all technologies are giving, as well as anything else—visual cues, sounds—that could be perceived as direction. We also need to establish the extent to which this influence applies to adults, given how impressionable we clearly are. Though in this particular study adults resisted robot peer pressure, other research has demonstrated that they can be manipulated under other conditions.

I’m not implying, of course, that adults with all their faculties will be inclined to jump off cliffs if instructed to do so by an AI. I am saying that technology is capable of suggestive nuances that often fly under the radar, and that we need a mapping exercise to identify where these influences lie and how strong their effects are.

Of course, that children are susceptible to technological influence is not “new news.” Fascinatingly, a 2009 study by Stanford psychologists found that school children who saw their virtual doppelganger swimming with orca whales recounted it as having happened in real life. Such is the power of the technologies that already exist.

Your child’s new friends: Siri, Alexa, and Jibo

What does feel new is that we’re now talking about studies like this in the context of a much wider debate about ethics and technology. This is happening at a time when we’re building Alexas and Jibos into our lives and holding full conversations with chatbots. As we rapidly install convenient and consumer-friendly tech, we may also be laboring for the architects of a new wave of human behavior change—not robots as such, but their programmers and designers.

That’s right. There’s a lot of power at play here, and those who wield it are not politicians or statespeople. They are not democratically elected or purposefully anointed. They may even be oblivious to their power, but in fact, they could be considered atypically powerful. They are the creators and decision-makers behind influencing machines, and they need to be conscientious guardians of the power they wield. Ignorance is not a defense now that we can track a direct route from suggestion to action.

In light of experiments like these (and you can bet that there will be hundreds more), developers and designers need to be aware of anything that could present as a normative direction to children (and adults, too). They must stress-test their intelligent objects to ensure that there are no subtle unforeseen consequences that might deny a user full autonomy.

If they don’t, then those techno-children could have their decisions (and characters) shaped in unpredictable and undesirable ways—and those of us who believe that technology moves us forward with positivity will be cruelly contradicted.
