Are we being made into 21st century “puppets” by our online masters?


In a recent Guardian article, ex-Google strategist James Williams describes the persuasive, algorithmic tools of the internet giants – like Facebook’s newsfeed, Google’s search results, etc. – as the “largest, most standardized and most centralized form of attentional control in human history”. He is not alone in his concern. Interest is growing in the subtle tactics that social media and other platforms use to attract and keep our attention, guide our purchasing decisions, control what we read (and when we read it), and generally manipulate our attitudes and behaviors.

The success of platforms like Facebook and Twitter has really been down to their ability to keep us coming back for more. To that end, they have turned habit formation into a technological industry. Notifications, “likes”, autoplay videos, messengers, Snapstreaks – these are but a few of the ways in which they lure us in and, critically, keep us there for hours at a time. According to research, on average we touch or swipe our phones 2,617 times per day. In short, most of us are compulsive smartphone addicts. So much so, that whole new trends are being built around shunning phones and tablets in the hope of improving our focus on other, arguably more important, things like physical interactions with our friends and family.

Nevertheless, such movements are unlikely to inspire an overnight U-turn when it comes to our online habits. There are whole new generations of people who have been born into this world and do not know anything other than smartphone/tablet compulsion. This point is made beautifully by Jean-Louis Constanza, a top telecoms executive who uploaded a YouTube video of his baby daughter prodding at images in a magazine. He comments: “In the eyes of my one-year-old daughter, a magazine is a broken iPad. That will remain the case throughout her entire life. Steve Jobs programmed part of her operating system.”

Consequently, the internet giants (by which I mean Facebook, Google, Twitter, Apple, Snapchat, etc.) have an enormous amount of power over what we see and read, and consequently what we buy, how we vote, and our general attitudes to people, places, and things. Concerned parties argue that these companies’ current methods of subtly manipulating what they push out to us, and what they conceal from us, could equate to an abuse of their ethical responsibility. There is a power asymmetry which perhaps leads to Joe Public becoming dehumanized, treated as a sort of “techno-subject” for the experimental methods of big tech.

Most of what allows these firms to know so much about us, and then to capitalize on this granular knowledge, is the constant feedback loop which supplies the metrics, which in turn enable the algorithms to change and adapt what we are served on the internet. This is something we willingly participate in. The feedback comprises data about what we’ve clicked, shared, browsed, liked, favorited, or commented on in the past. This same loop can also be used to anticipate what we might like, and to nudge us into new decisions or reactions to different stimuli which – you guessed it – supply them with even more information about “people like us”. The constant modification and refinement of our preferences, it is argued, not only creates a sort of filter bubble around us, but also stifles our autonomy by limiting the options being made available to us. Our view is personalized for us based on secret assumptions that have been made about us…and, of course, commercial objectives.
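To make the loop concrete, here is a deliberately toy sketch in Python – the item names, scores, and update rules are all invented for illustration, not taken from any real platform. Engagement boosts an item’s chance of being served again; indifference buries it; and after enough rounds the “filter bubble” the paragraph describes emerges on its own.

```python
import random

random.seed(42)  # fixed seed so the toy simulation is repeatable

# Hypothetical content items with equal starting scores.
items = {"news": 1.0, "cat_video": 1.0, "ad": 1.0}

def serve(items):
    """Pick what to show, biased toward items we've engaged with before."""
    total = sum(items.values())
    r = random.uniform(0, total)
    for name, score in items.items():
        r -= score
        if r <= 0:
            return name
    return name  # floating-point fallback

def record_feedback(items, shown, clicked):
    """The feedback half of the loop: clicks reinforce, ignoring decays."""
    if clicked:
        items[shown] *= 1.5   # engagement boosts future exposure
    else:
        items[shown] *= 0.9   # indifference slowly buries the item

# Simulate a user who only ever clicks cat videos.
for _ in range(200):
    shown = serve(items)
    record_feedback(items, shown, clicked=(shown == "cat_video"))

# The highest-scoring item now dominates what the user is served.
print(max(items, key=items.get))  # → cat_video
```

The point of the sketch is the asymmetry: the clicked item’s score can only grow, the ignored items’ scores can only shrink, so personalization narrows by construction – no editorial decision required.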

Karen Yeung, of the Dickson Poon School of Law at King’s College London, calls such methods of controlling what we’re exposed to “digital decision guidance processes” – also known by the rather jazzier title, “algorithmic hypernudge”. The latter pays homage to the bestselling book “Nudge” by Cass Sunstein and Richard Thaler, which talks about the ways in which subtle changes to an individual’s “choice architecture” could cause desirable behavior changes without the need for regulation. For example, putting salads at eye level in a store apparently increases the likelihood we will choose salad, but doesn’t forbid us from opting for a burger. It is a non-rational type of influence. What makes the online version of nudge more pernicious, according to Yeung, is that a) the algorithms behind a nudge on Google or Facebook are not working towards some admirable societal goal, but rather are programmed to optimize profits, and b) the constant feedback and refinement allows for a particularly penetrating and inescapable personalization of the behavior change mechanisms. In short, it is almost like a kind of subliminal effect, leading to deception and non-rational decision-making which, in Yeung’s words: “express contempt and disrespect for individuals as autonomous.”

So, given that our ability to walk away is getting weaker, are we still in control? Or are we being manipulated by other forces sitting far away from most of us in California offices? Silicon Valley “conscience” Tristan Harris is adamant about the power imbalance here: “A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today. I don’t know a more urgent problem than this.” Harris says there “is no ethics”, and that the vast reams of information these giants are privy to could also allow them to exploit the vulnerable.

This is a big topic with lots of work to be done, but perhaps the key to understanding whether or not we are truly being manipulated is to understand in what way methods like algorithmic hypernudge undermine our reason (Williams says that they cause us to privilege impulse over reason). If we are being coerced into behaving in ways that fall short of our expectations or standards of human rationality, then it seems obvious there are follow-on ethical implications. If I do things against my will and my own better judgment – or my process of judgment is in some way compromised – it seems fair to say I am being controlled by external forces.

But perhaps that is not enough; after all, external influences have always played into our decision-making – from overt advertising, to good-smelling food, to the way something (or someone!) looks. We are already accustomed to making perfectly rational decisions on the basis of non-rational influences. Just because we behave in a way that we didn’t originally plan doesn’t mean the action is itself irrational. That isn’t to say that there isn’t something going on – apparently 87% of people go to sleep and wake up with their smartphones – it is just to point out that if we’re going to make claims of psychological manipulation, we also need to be clear about where this happens and how it manifests itself. Perhaps most importantly, we need to properly identify how the consequences differ significantly from other types of unconscious persuasion. When and how are these online influences harming us…? That’s the question.
