Misinformation About Misinformation?: Report Advises “Don’t Hit Delete”

It’s hard to remember a time before we spoke animatedly about fake news and misinformation. For years now there has been primetime public discussion about the divisiveness of online content, and the way social media platforms can effortlessly propagate harmful conspiracy theories, as well as other baseless assertions masquerading as facts.

In 2018, Dictionary.com announced that misinformation was its “word of the year,” and before that scholars like Caroline Jack made valiant efforts to define the many types of online deceit, as in her 2017 study, Lexicon of Lies.

With a certain amount of discomfort, we have come to accept the downstream effects of users being trapped in “echo chambers” and the “filter bubbles” that reinforce and amplify false and harmful dialogue (with potentially devastating real-world consequences).

Many organizations — from NGOs to Big Tech — have pledged to fight misinformation and the circumstances that catalyze its spread, and there have been loud calls to identify and remove misleading content. When COVID-19 came along, ensuring scientific information wasn’t drowned out by falsehoods became a matter of life and death, and many platforms did axe posts to protect users (see YouTube and Facebook).

It is curious, then, that a new report by The Royal Society, The Online Information Environment, calls into question some popular assumptions about misinformation.

Continue reading

Insidious “corrective” image filters allow app creators to dictate beauty standards

Portrait thought to be of Simonetta Cattaneo Vespucci by Sandro Botticelli, c.1480-1485.

In the 15th century, Florentine statesman and all-round bigwig Lorenzo de’ Medici (also modestly known as “Lorenzo The Magnificent”) made some pretty outspoken comments on the looks and deportment of the ideal Italian Renaissance beauty. Despite himself being described as “quite strikingly ugly”, Lorenzo was rather specific about what should be considered desirable, basing his high standards on the celebrated noblewoman Simonetta Cattaneo Vespucci. He writes:

“…of an attractive and ideal height; the tone of her skin, white but not pale, fresh but not glowing; her demeanor was grave but not proud, sweet and pleasing, without frivolity or fear. Her eyes were lively and her gaze restrained, without trace of pride or meanness; her body was so well proportioned, that among other women she appeared dignified…in walking and dancing…and in all her movements she was elegant and attractive; her hands were the most beautiful that Nature could create. She dressed in those fashions which suited a noble and gentle lady…” (Commento del magnifico Lorenzo De’ Medici sopra alcuni de’ suoi sonetti)

Clearly beauty standards have evolved since Lorenzo’s time — and thankfully we’re probably less concerned about the restraint of our gaze and the beauty of our hands — but this notion of one common beauty ideal for women, dictated from without, unfortunately persists. And while Renaissance women agonized over achieving Simonetta’s bodily proportions and alabaster skin, their 21st-century counterparts are turning to technological, and even surgical, correction to emulate the new, algorithmically dictated standards for attention-worthy good looks.

Continue reading

How Do We Solve A Problem Like Election Prediction?

On November 3, two oppositional forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction? 

At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think. 

Continue reading

Why Can’t We #DeleteFacebook?: 4 Reasons We’re Reluctant


The Cambridge Analytica scandal is still reverberating in the media, garnering almost as much daily coverage as when the story broke in The New York Times on March 17. Facebook’s mishandling of user data has catalyzed a collective public reaction of disgust and indignation, and perhaps its most prominent manifestation is the #DeleteFacebook movement. This vocal campaign is urging us to do exactly what it says: to vote with our feet. To boycott. To not just deactivate our Facebook accounts, but to eliminate them entirely.

Continue reading

If you aren’t paying, are your kids the product?

There’s a phrase – from where I don’t know – which says: “If you aren’t paying, you’re the product.”  Never has this felt truer than in the context of social media. Particularly Facebook, with its fan-pages and features, games and gizmos, plus never-ending updates and improvements. Who is paying for this, if not you…and what are they getting in return? The answer is actually quite straightforward.


Continue reading

Facebook accused of limiting, not championing, human interaction


Facebook have been in the press a lot this week, and there has been a flurry of articles asking how they might be brought back from the brink. The New York Times asked a panel of experts “How to Fix Facebook?”. Some of the responses around the nature of – and limitations to – our interactions on the social network struck me as very interesting.

Jonathan Albright, Research Director at Columbia University’s Tow Center for Digital Journalism, writes:

“The single most important step Facebook — and its subsidiary Instagram, which I view as equally important in terms of countering misinformation, hate speech and propaganda — can take is to abandon the focus on emotional signaling-as-engagement.

This is a tough proposition, of course, as billions of users have been trained to do exactly this: “react.”

What if there were a “trust emoji”? Or respect-based emojis? If a palette of six emoji-faced angry-love-sad-haha emotional buttons continues to be the way we engage with one another — and how we respond to the news — then it’s going to be an uphill battle.

Negative emotion, click bait and viral outrage are how the platform is “being used to divide.” Given this problem, Facebook needs to help us unite by building new sharing tools based on trust and respect.”

Kate Losse, an early Facebook employee and author of “The Boy Kings: A Journey into the Heart of the Social Network”, suggested:

“It would be interesting if Facebook offered a “vintage Facebook” setting that users could toggle to, without News Feed ads and “like” buttons. (Before “likes,” users wrote comments, which made interactions more unique and memorable.)

A “vintage Facebook” setting not only would be less cluttered, it would refocus the experience of using Facebook on the people using it, and their intentions for communication and interaction.”

According to recent reports, “reactions” are being algorithmically prioritized over “likes”. Why? Well, we might suppose, for the same reason most new features are developed: greater insight. Specifically, insight into our emotional responses to the items in our newsfeed.

Understanding the complexity of something we type in words is difficult: systems have to grapple with tone, sarcasm, slang, and other nuances. Discrete reactions, by contrast – “angry”, “sad”, “wow”, “haha”, and “love” – make us much easier to interpret. Our true responses are distilled into proxy emojis.
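To see why this matters for data collection, here is a minimal, purely illustrative Python sketch. The classify_sentiment placeholder is hypothetical and only marks where a complex, error-prone NLP pipeline would have to sit; the reaction route, by contrast, reduces to counting pre-labelled categories.

```python
from collections import Counter

# Hypothetical free-text route: a real system would need a full NLP model
# to infer tone, sarcasm and slang from a comment (placeholder only).
def classify_sentiment(comment: str) -> str:
    raise NotImplementedError("stands in for a complex, error-prone NLP pipeline")

# Reaction route: the signal arrives pre-labelled, so "analysis" is just counting.
def summarise_reactions(reactions: list[str]) -> Counter:
    return Counter(reactions)

print(summarise_reactions(["angry", "haha", "love", "angry", "sad"]))
# Counter({'angry': 2, 'haha': 1, 'love': 1, 'sad': 1})
```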

I see two problems with this:

  • The first is that we are misunderstood as users. Distilling all human emotions and reactions into five big, nebulous ones is unhelpful. Like many of the old (and largely discredited) psychometric test questions, these reactions allow us to cut complexity out of our own self-portrayal. This means that, down the line, the data analytics will purport to show more than they actually do. They’ll have a strange and skewed shadow of our feelings about the world. We’ll then, consequently, be fed things that “half match” our preferences and – potentially – change and adapt our preferences to match those offerings. In other words, if we’re already half-misinformed, politically naïve, prejudiced, etc., we can go whole hog…
  • The second problem is that discouraging us from communicating our feelings using language is likely to affect our ability to express ourselves using language. This is more of a worry for those growing up on the social network. If I’m not forced to articulate when I think something is wonderful, or patronizing, or cruel, and instead resort to emojis (“love” or “angry”), then the danger is that I begin to think in terms of mono-emotions. With so many young people spending hours each day on social media, this might not be as far-fetched as it sounds.

If there’s a question mark over whether social networks cause behavior change, then it’s fine to be unbothered by these prospects. But given that Silicon Valley insiders have recently claimed the stats show our minds “have been hijacked”, perhaps it’s time to pay some heed to these mechanisms of manipulation.

Will Facebook push non-sponsored content to the margins?


Facebook are currently running trials which demote non-promoted content to a secondary feed, according to the Guardian. The experiment is being run in six countries – including Slovakia, Serbia, and Sri Lanka – and apparently follows calls from users who want to be able to see their friends’ posts more easily.  The test involves two feeds, with the primary feed exclusively featuring posts by friends alongside paid-for content.

Already, smaller publishers, Facebook pages, and Buzzfeed-like sites that rely on organic social traffic are reporting a drop in engagement of 60-80%.

The article says:

“Notably, the change does not seem to affect paid promotions: those still appear on the news feed as normal, as do posts from people who have been followed or friended on the site. But the change does affect so called “native” content, such as Facebook videos, if those are posted by a page and not shared through paid promotion.”

Experts predict that the move will hit much of the current video content which makes it into our feeds, plus the likes of the Huffington Post and Business Insider. Quite simply, Facebook seems to want to cleanse our feeds of low value content, and encourage media outlets to pay up…

Though the social media platform states it has no plans to roll this out globally, we might reasonably assume that this trial serves some purpose. And who can blame Facebook for experimenting, given the backlash they’ve had recently over so-called “fake news”? The trouble is, here we have another example of an internet giant acting to narrow our online field of vision: if we are only served promoted content, then we are served a skewed and unrepresentative view of the world. The dollar dictates, rather than organic enthusiasm…

Additionally, though our feeds are often cluttered with fake news, mindless cat videos and other questionable content, amongst non-promoted material we also find important movements. These range from social campaigns and awareness drives, to challenging and diverse voices that diverge from mainstream opinion. Some are pernicious, but many are precious, and Facebook ought to be careful they don’t throw the baby out with the bath water.

It’s an admirable thing to respond to the wants and needs of users, and we shouldn’t be too quick to criticize Facebook here. We just need to be sure that giving clarity doesn’t mean imposing homogeneity.

Are we being made into 21st century “puppets” by our online masters?


In a recent Guardian article, ex-Google strategist James Williams describes the persuasive, algorithmic tools of the internet giants – like Facebook’s newsfeed, Google’s search results, etc. – as the “largest, most standardized and most centralized form of attentional control in human history”. He is not alone in his concern. Increasing interest is being taken in the subtle tactics that social media and other platforms use to attract and keep our attention, guide our purchasing decisions, control what we read (and when we read it), and generally manipulate our attitudes and behaviors.

The success of platforms like Facebook and Twitter has really been down to their ability to keep us coming back for more. To this end, they have turned habit formation into a technological industry. Notifications, “likes”, instant-play videos, messengers, Snapstreaks – these are but a few of the ways in which they lure us in and, critically, keep us there for hours at a time. According to research, on average we touch or swipe our phones 2,617 times per day. In short, most of us are compulsive smartphone addicts. So much so that whole new trends are being built around shunning phones and tablets in the hope of improving our focus on other, arguably more important, things like physical interactions with our friends and family.

Nevertheless, such movements are unlikely to inspire an overnight U-turn in our online habits. There are whole new generations of people who have been born into this world and know nothing other than smartphone/tablet compulsion. This point is made beautifully by Jean-Louis Constanza, a top telecoms executive who uploaded a YouTube video of his baby daughter prodding at images in a magazine. He comments: “In the eyes of my one-year-old daughter, a magazine is a broken iPad. That will remain the case throughout her entire life. Steve Jobs programmed part of her operating system.”

Consequently, the internet giants (by which I mean Facebook, Google, Twitter, Apple, Snapchat, etc.) have an enormous amount of power over what we see and read, and therefore over what we buy, how we vote, and our general attitudes to people, places, and things. Concerned parties argue that these companies’ current methods of subtly manipulating what they push out to us, and what they conceal from us, could amount to an abuse of their ethical responsibility. There is a power asymmetry which perhaps leads to Joe Public becoming dehumanized, treated as a sort of “techno-subject” for the experimental methods of big tech.

Most of what allows these firms to know so much about us, and then to capitalize on this granular knowledge, is the constant feedback loop which supplies the metrics, which in turn enable the algorithms to change and adapt what we are served on the internet. This is something we willingly participate in. The feedback comprises data about what we’ve clicked, shared, browsed, liked, favorited, or commented on in the past. This same loop can also be used to anticipate what we might like, and to coerce us into new decisions or into reacting to different stimuli which – you guessed it – supplies them with even more information about “people like us”. The constant modification and refinement of our preferences, it is argued, not only creates a sort of filter bubble around us, but also stifles our autonomy by limiting the options made available to us. Our view is personalized for us based on secret assumptions that have been made about us…and, of course, commercial objectives.
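As a purely illustrative sketch of that loop (every name and number below is invented, not drawn from any platform’s actual code), here is how a handful of recorded clicks can reshape what gets ranked first the next time a feed is built:

```python
from collections import defaultdict

# A deliberately naive model of the feedback loop: every interaction nudges
# a per-topic weight, and the next feed is ranked by those weights, so past
# clicks quietly narrow what is shown next. Purely illustrative.

weights = defaultdict(float)          # inferred preference per topic

def record_interaction(topic: str, strength: float = 1.0) -> None:
    """A click, like or share feeds back into the user's profile."""
    weights[topic] += strength

def rank_feed(candidate_posts: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order candidate (topic, headline) pairs by inferred preference."""
    return sorted(candidate_posts, key=lambda post: weights[post[0]], reverse=True)

record_interaction("politics")        # the user clicks a couple of political stories...
record_interaction("politics")
record_interaction("cats", 0.5)

print(rank_feed([("cats", "Kitten does a thing"),
                 ("politics", "Outrage of the day"),
                 ("gardening", "Grow better tomatoes")]))
# ...and political content now floats to the top of the next feed.
```

Run over and over, a loop of this shape is exactly what narrows the options presented to us: what we clicked yesterday quietly decides what we are offered tomorrow.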

Karen Yeung, of the Dickson Poon School of Law at King’s College London, calls such methods of controlling what we’re exposed to “digital decision guidance processes” – also known by the rather jazzier title of “algorithmic hypernudge”. The latter pays homage to the bestselling book “Nudge” by Cass Sunstein and Richard Thaler, which describes the ways in which subtle changes to an individual’s “choice architecture” could cause desirable behavior changes without the need for regulation. For example, putting salads at eye level in a store apparently increases the likelihood that we will choose salad, but doesn’t forbid us from opting for a burger. It is a non-rational type of influence. What makes the online version of the nudge more pernicious, according to Yeung, is that a) the algorithms behind a nudge on Google or Facebook are not working towards some admirable societal goal, but rather are programmed to optimize profits, and b) the constant feedback and refinement allows for a particularly penetrating and inescapable personalization of the behavior-change mechanisms. In short, it is almost a kind of subliminal effect, leading to deception and non-rational decision-making which, in Yeung’s words, “express contempt and disrespect for individuals as autonomous.”

So, given that our ability to walk away is getting weaker, are we still in control? Or are we being manipulated by other forces sitting far away from most of us in Californian offices? Silicon Valley “conscience” Tristan Harris is adamant about the power imbalance here: “A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today. I don’t know a more urgent problem than this.” Harris says there “is no ethics”, and the vast reams of information these giants are privy to could also allow them to exploit the vulnerable.

This is a big topic with lots of work to be done, but perhaps the key to understanding whether or not we are truly being manipulated is to understand in what way methods like algorithmic hypernudge undermine our reason (Williams says that they cause us to privilege impulse over reason). If we are being coerced into behaving in ways that fall short of our expectations or standards of human rationality, then it seems obvious there are follow-on ethical implications. If I do things against my will and my own better judgment – or my process of judgment is in some way compromised – it seems fair to say I am being controlled by external forces.

But perhaps that is not enough; after all, external influences have always played into our decision-making, from overt advertising, to good-smelling food, to the way something (or someone!) looks. We are already accustomed to making perfectly rational decisions on the basis of non-rational influences, and just because we behave in a way we didn’t originally plan doesn’t mean the action is itself irrational. That isn’t to say that nothing is going on – apparently 87% of people go to sleep and wake up with their smartphones – it is just to point out that if we’re going to make claims of psychological manipulation, we also need to be clear about where it happens and how it manifests itself. Perhaps most importantly, we need to properly identify how the consequences differ significantly from other types of unconscious persuasion. When and how are these online influences harming us…? That’s the question.

What if Twitter could help predict a death?

I want to use this blog to look at how data and emerging technologies affect us – or, more precisely, YOU. As a tech ethics researcher, I’m perpetually reading articles and reports that detail the multitude of ways in which data can be used to anticipate bad societal outcomes: criminality, abuse, corruption, disease, mental illness, and so on. Some of these get oxygen, some of them don’t. Some of them have integrity, some don’t. Often these tests, analyses, and studies identify problems that gesture toward ethically “interesting” solutions.

Just today, this article caught my attention. It details a Canadian study that tries to get to grips with an endemic problem: suicide in young people. Just north of the border, suicide accounts for no less than 24% of deaths among those aged 15 to 24 (Canadian Mental Health Association). Clearly, this is not a trivial issue.

In response, a group of researchers has tried to identify the signs of self-harm and suicide by studying the social media posts of those in the most vulnerable age bracket. The team – from SAS Canada – have even speculated that “these new sources could provide early indication of possible trends to guide more formal surveillance activities.” So, with the prospect of officialdom being dangled before us, it’s important to ask how this social media analysis works. In short, might any one of us end up being surveilled as a suicide risk if we happen to make a trigger comment or two on Twitter?

Well, the answer seems to be “possibly”. The work harvested 2.3 million tweets, of which 1.1 million were identified as “likely to have been authored by 13 to 17-year-olds in Canada”. This determination was made by a machine learning model trained to predict age from the way young people use language. So, if the algorithm thinks you tweet like a teenager, you’re potentially on the hook. From there, the team looked at which of these tweets related to depression and suicide, and “picked some specific buzzwords and created topics around them, and our software mined those tweets to collect the people.”
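To make the mechanics concrete, here is a rough, hypothetical sketch of that two-stage filter in Python. The age heuristic and the buzzword list are invented stand-ins for the study’s trained model and hand-picked topics; they are meant only to show the shape of the pipeline, and how crude the proxies can be.

```python
# Hypothetical sketch of the pipeline described above: an age model flags
# accounts that "tweet like a teenager", then simple buzzword matching pulls
# out tweets for the suicide/depression topics. Nothing here comes from the
# actual SAS Canada study.

BUZZWORDS = {"hopeless", "self-harm", "can't go on"}   # illustrative only

def predict_age_bracket(tweet_text: str) -> str:
    # Crude stand-in: the real study trained a model on how teens use
    # language; here a couple of slang tokens fake that judgement.
    teen_markers = {"lol", "omg", "literally"}
    return "13-17" if teen_markers & set(tweet_text.lower().split()) else "18+"

def flag_tweet(tweet_text: str) -> bool:
    """True if the tweet looks teen-authored AND mentions a buzzword."""
    is_teen = predict_age_bracket(tweet_text) == "13-17"
    mentions_topic = any(word in tweet_text.lower() for word in BUZZWORDS)
    return is_teen and mentions_topic

print(flag_tweet("omg i feel so hopeless"))                  # True: flagged
print(flag_tweet("Feeling hopeless about my tomato crop"))   # False: age proxy says adult
```

Even in this toy version, the weaknesses discussed below are visible: both the age call and the topic call rest entirely on surface features of the text.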

Putting aside the undoubtedly harrowing idea of people collection, it’s important to highlight the usefulness of this work. The data scientists involved insist that the data they’ve collected can help them narrow down the Canadian regions which have a problem (although one might counter that the suicide statistics themselves should reveal this), and/or identify a particular school or a time of year in which the tell-tale signs are more widespread or stronger. This in turn can help better target campaigns and resources, which – of course – is laudable, particularly if it is an improvement on existing suicide statistics. It only starts to get ethically icky once we consider what further steps might be taken.

The technicians on the project speculate as to how this data might be used in the future. Remember, we are not dealing with anonymized surveys here, but real teen voices “out in the wild”: “He (data expert Jos Polfliet) envisions the solution being used to find not only at-risk teens, but others too, like first responders and veterans who may be considering suicide.”

Eh? Find them? Does that mean it might be used to actually locate real people based on what they’ve tweeted on their personal time? As with many well-meaning data projects, everything suddenly begins to feel a little Minority Report at this point. Although this study is quite obviously well-intentioned, we are fooling ourselves if we don’t acknowledge the levels of imprecision we’re dealing with here.

Firstly, without revealing the actual identities of every account holder picked out by the machine learning, we have no way of knowing how accurate these researchers have been in identifying 13 to 17-year-olds. Although the use of certain language and terminology might be a good proxy for the age of the user, it certainly isn’t an infallible one in the wacky world of the internet.

Secondly, the same is true of suicide and depression-related buzzwords. Using a word or phrase typically associated with teen suicide is not a sufficient condition for a propensity towards suicide (indeed, it is unlikely even to be a necessary one). As Seth Stephens-Davidowitz discusses in his new book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, in 2014 research found that there were 6,000 Google searches for the exact phrase “how to kill your girlfriend”, and yet there were “only” 400 murders of girlfriends. In other words, not everyone who vents on the internet is in earnest, and many who are earnest in their intentions may not surface on the internet at all. So, in short, we don’t know exactly what we’ve got when we look at these tweets.
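The base-rate problem lurking here is easy to put rough numbers on. The figures below are invented for illustration, not taken from the SAS study: the point is simply that when the thing being screened for is rare, even a fairly accurate classifier produces far more false alarms than genuine hits.

```python
# Illustrative base-rate arithmetic with made-up numbers, not study data.
population  = 1_000_000   # teens whose tweets are screened
base_rate   = 0.002       # assumed share genuinely at risk (0.2%)
sensitivity = 0.95        # chance of flagging someone truly at risk
specificity = 0.95        # chance of NOT flagging someone not at risk

at_risk     = population * base_rate
not_at_risk = population - at_risk

true_positives  = at_risk * sensitivity              # 1,900
false_positives = not_at_risk * (1 - specificity)    # 49,900

precision = true_positives / (true_positives + false_positives)
print(f"Flagged accounts genuinely at risk: {precision:.1%}")   # roughly 3.7%
```

On those assumptions, the overwhelming majority of flagged accounts would be false alarms, which is precisely why “finding” individuals on this basis is so fraught.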

Lastly, though I have not read the full methodology, it appears that these suicide buzzwords were hand-picked by the team. In other words, they were selected by human beings, presumably based on what sorts of things they deemed suicidal teens might tweet. Fair enough, but not particularly scientific. In fact, this sort of process can be riddled with guesswork and human bias. How could you possibly know with any certainty, even if instructed by a physician or psychiatrist, exactly which kinds of words or phrases denote true intention and which denote teenage angst?

Hang on a second – you might protest – couldn’t these buzzwords have been chosen by a very clever, objective algorithm? Yet even if a clever algorithm could somehow tell the difference between an “I hate my life” tweeted by a genuinely suicidal teen and an “I hate my life” tweeted by a tired and hormonal teenager (perhaps based on the language surrounding it), to make this call it would have to have been trained on data containing the tweets of teens who had either a) committed suicide or b) been diagnosed with or treated for depression. To harvest such tweets, the data would have to rely upon more than Twitter alone… the information would have to be cross-referenced with other databases (like medical records) in ways that would undoubtedly de-anonymize users.

So, with no guarantees of accuracy, the prospect of physical intervention by social services or similar feels like a scary one – as is the idea of ending up on a watchlist because of a bad day at school. Particularly when we don’t know how this data would be propagated forward…

Critically, I am not trying to say that the project isn’t useful, and SAS Canada are forthcoming in their acknowledgment that ethical conversations need to take place. Nevertheless, this feels like the usual ethical caveat which acts as a disclaimer on work that has already taken place and – one might reasonably assume – is already informing actions, policies, and future projects.

Some of the correlations this work has unveiled clearly have value; for example, there is a 39% overlap between conversations about suicide and conversations about bullying. This is a broad trend and a helpful addition to an important narrative. Where it becomes unhelpful, however, is when it enables and/or legitimizes the external surveillance of all bullying-related conversations on social media and – to carry that thought forward – some kind of ominous, state-sanctioned “follow-up” for selected individuals…