Facebook accused of limiting, not championing, human interaction

facebook reactions

Facebook have been in the press a lot this week, and there has been a flurry of articles asking how the company might be brought back from the brink. The New York Times asked a panel of experts “How to Fix Facebook?”. Some of the responses around the nature of – and limitations to – our user interactions on the social network struck me as very interesting.

Jonathan Albright, Research Director at Columbia University’s Tow Center for Digital Journalism, writes:

“The single most important step Facebook — and its subsidiary Instagram, which I view as equally important in terms of countering misinformation, hate speech and propaganda — can take is to abandon the focus on emotional signaling-as-engagement.

This is a tough proposition, of course, as billions of users have been trained to do exactly this: “react.”

What if there were a “trust emoji”? Or respect-based emojis? If a palette of six emoji-faced angry-love-sad-haha emotional buttons continues to be the way we engage with one another — and how we respond to the news — then it’s going to be an uphill battle.

Negative emotion, click bait and viral outrage are how the platform is “being used to divide.” Given this problem, Facebook needs to help us unite by building new sharing tools based on trust and respect.”

Kate Losse, an early Facebook employee and author of “The Boy Kings: A Journey into the Heart of the Social Network”, suggested:

“It would be interesting if Facebook offered a “vintage Facebook” setting that users could toggle to, without News Feed ads and “like” buttons. (Before “likes,” users wrote comments, which made interactions more unique and memorable.)

A “vintage Facebook” setting not only would be less cluttered, it would refocus the experience of using Facebook on the people using it, and their intentions for communication and interaction.”

According to recent reports, “reactions” are being algorithmically prioritized over “likes”. Why? Well, we might suppose, for the same reason most new features are developed: more, and more granular, insight. Specifically, insight into our emotions pertaining to items in our newsfeed.

Understanding the complexity of something we type in words is difficult: systems have to grapple with tone, sarcasm, slang, and other nuances. A choice between “angry”, “sad”, “wow”, “haha”, and “love”, by contrast, makes us much easier to interpret. Our truthful reactions are distilled into proxy emojis.
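To make the contrast concrete, here is a toy sketch (entirely my own, and nothing to do with Facebook’s actual systems) of why a reaction click is so much easier for a machine to consume than a sentence:

```python
# Illustrative only: a reaction arrives as a clean categorical label,
# while a comment needs error-prone language processing to interpret.
REACTION_SENTIMENT = {"love": 1.0, "haha": 0.5, "wow": 0.2,
                      "sad": -0.5, "angry": -1.0}

def score_reaction(reaction: str) -> float:
    """A reaction click maps straight onto a number - no ambiguity."""
    return REACTION_SENTIMENT[reaction]

def score_comment(text: str) -> float:
    """Crude stand-in for a sentiment model, which must wrestle with
    tone, sarcasm and slang, and will still frequently get it wrong."""
    positive = {"wonderful", "brilliant", "love"}
    negative = {"cruel", "patronizing", "awful"}
    words = text.lower().split()
    return float(sum((w in positive) - (w in negative) for w in words))
```

The first function never errs; the second is doomed to misread irony, slang and context, which is exactly why the buttons are so much more attractive to the analytics.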

I see two problems with this:

  • The first is that we are misunderstood as users. Distilling all human emotions/reactions into five big nebulous ones is unhelpful. Like many of the old (and largely discredited) psychometric test questions, these reactions allow us to cut complexity out of our own self-portrayal. This means that, down the line, the data analytics will purport to show more than they actually do. They’ll have a strange and skewed shadow of our feelings about the world. We’ll then, consequently, be fed things that “half match” our preferences and – potentially – change and adapt our preferences to match those offerings. In other words, if we’re already half-misinformed, politically naïve, prejudiced etc., we can go whole hog…
  • The second problem is that discouraging us from communicating our feelings using language is likely to affect our ability to express ourselves using language. This is more of a worry for those growing up on the social network. If I’m not forced to articulate when I think something is wonderful, or patronizing, or cruel, and instead resort to emojis (“love” or “angry”), then the danger is that I begin to think in terms of mono-emotions. With so many young people spending hours each day on social media, this might not be as far-fetched as it sounds.

If there’s a question mark over whether social networks cause behavior change, then it’s fine to be unbothered by these prospects. But given that Silicon Valley insiders have recently claimed the stats are showing our minds “have been hijacked”, perhaps it’s time to pay some heed to these mechanisms of manipulation.

Will robots make us more robotic?

robot dalek

Anyone who has taken public transport in San Francisco will tell you: it is not strange and unusual to encounter the strange and unusual. Every day is one of eyebrow-raising discovery. That said, I surprised myself recently when I became slightly transfixed – and perhaps a little perplexed – listening to someone narrate a text message into their smartphone.

The expressionless and toneless girl carefully articulated each word: “I can’t believe she told you”, she said aloud like a Dalek, “LOL”. How odd it seemed to see someone sat, stony-faced, proclaiming that they were “laughing out loud” when nothing could be further from the truth.

Now, I have a limited background in acting, and I worked in-and-around politics for several years, so believe me when I say I’ve heard people speak robotically without any conviction or inflection before. But those people were reading scripts, or trying to remember their lines-to-take, or trotting out meaningless straplines. They weren’t expressing their own thoughts and feelings to a friend.

Then yesterday, I stumbled across this blog about the evolution of interactions in the age of AI. In a rather sweet anecdote, the author talks about ordering his Alexa to “turn off the lights” and his young son questioning his manners and the absence of “please”. He goes on to ponder the future and how we might incorporate manners and niceties when instructing our digital assistants, lest we inhibit their range by limiting the vocabulary we use with them.

My thoughts went elsewhere. Though AI is developing to understand our expressions and feelings, it feels like we also have some evolving to do before we become used to addressing artificial systems as we would other humans. Moreover, with voice instructions and narrated text, there seems little need for sincerity or emotion. The text itself is either directly indicative – or free – of sentiment.

Where I’m getting to is this: might we humans begin to develop a specific type of robotic tone exclusively for non-social, instructive language? For Alexa and co.? We already have a tone and style we reserve for babies, pets and (sometimes) older relatives. Will a whole new style of monosyllabic speech emerge for the purposes of closing garage doors, sending text messages, ordering plane tickets and browsing TV channels? A sort of anti-baby talk?

It’s fun to speculate about these things, and I’m certainly no linguist, but it’s difficult to believe that the voice-activated future we’re promised won’t have some implications for modes of speech. Will we flatten our language or, on the contrary, become hyper-expressive? We’re yet to find out, but we can only hope that the beautiful languages of the world aren’t somehow roboticized as we adapt to hands-free technologies and AI assistants.

Will Facebook push non-sponsored content to the margins?

facebook

Facebook are currently running trials which demote non-promoted content to a secondary feed, according to the Guardian. The experiment is being run in six countries – including Slovakia, Serbia, and Sri Lanka – and apparently follows calls from users who want to be able to see their friends’ posts more easily. The test involves two feeds, with the primary feed exclusively featuring posts by friends alongside paid-for content.

Already, smaller publishers, Facebook pages, and Buzzfeed-like sites that rely upon organic social traffic are reporting drops in engagement of 60-80%.

The article says:

“Notably, the change does not seem to affect paid promotions: those still appear on the news feed as normal, as do posts from people who have been followed or friended on the site. But the change does affect so called “native” content, such as Facebook videos, if those are posted by a page and not shared through paid promotion.”

Experts predict that the move will hit much of the current video content which makes it into our feeds, plus the likes of the Huffington Post and Business Insider. Quite simply, Facebook seems to want to cleanse our feeds of low value content, and encourage media outlets to pay up…

Though the social media platform states it has no plans to roll this out globally, we might reasonably assume that this trial serves some purpose. And who can blame Facebook for experimenting, given the backlash they’ve had recently over so-called “fake news”? The trouble is, here we have another example of an internet giant acting to narrow our online field of vision: if we are only served promoted content, then we are served a skewed and unrepresentative view of the world. The dollar dictates, rather than organic enthusiasm…

Additionally, though our feeds are often cluttered with fake news, mindless cat videos and other questionable content, amongst non-promoted material we also find important movements. These range from social campaigns and awareness drives, to challenging and diverse voices that diverge from mainstream opinion. Some are pernicious, but many are precious, and Facebook ought to be careful they don’t throw the baby out with the bath water.

It’s an admirable thing to respond to the wants and needs of users, and we shouldn’t be too quick to criticize Facebook here. We just need to be sure that giving clarity doesn’t mean imposing homogeneity.

Life imitating art: China’s “Black Mirror” plans for Social Credit System

social credit system

Yesterday, both Wired and the Washington Post wrote extensively about plans the Chinese government have to use big data to track and rank their citizens. The proposed Social Credit System (SCS) is currently being piloted with a view to a full rollout in 2020. Like a real-life episode of Charlie Brooker’s dystopian Black Mirror series, the new system incentivizes social obedience whilst punishing behaviors which are not deemed becoming of a “good citizen”. Here’s the (terrifying) rundown:

  • Each citizen will have a “citizen score” which will indicate their trustworthiness. This score will also be publicly ranked against the entire population, influencing prospects for jobs, loan applications, and even love.
  • Eight commercial partners are involved in the pilot, two of which are data giants with interests in social media and messaging, loans, insurance, payments, transport, and online dating.
  • Though the “complex algorithm” used by partner Sesame Credit to generate a score has not been revealed, we do know there are five factors being taken into account (a toy sketch of how they might combine follows this list):
    1. Credit history
    2. Ability to fulfil contract obligations
    3. The verification of “personal characteristics” (e.g. phone number, address etc.)
    4. Behavior and preference
    5. Interpersonal relationships
  • “Behavior and preferences” considers patterns of behavior and how they reflect upon the individual. For example, someone who plays ten hours of video games each day would be considered idle, whereas someone who buys lots of diapers would be considered a responsible parent.
  • “Interpersonal relationships” allows assessors to rate interactions between friends and family. Nice messages about the government are likely to help your score, but it can also be negatively affected by things your friends post online.
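To illustrate how opaque weightings could boil these five categories down to a single number, here is a purely hypothetical sketch. Sesame Credit’s real algorithm and weights are unpublished; the factor categories above and the reported 350-950 score range are the only anchors, so every number below is invented:

```python
# Purely hypothetical weights - Sesame Credit's real ones are unpublished.
WEIGHTS = {
    "credit_history": 0.35,
    "contract_fulfilment": 0.25,
    "personal_characteristics": 0.15,
    "behavior_and_preference": 0.15,
    "interpersonal_relationships": 0.10,
}

def citizen_score(factors):
    """factors: each of the five categories rated 0.0-1.0 by the assessor."""
    raw = sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)
    return round(350 + raw * 600)  # map onto the reported 350-950 range

print(citizen_score({
    "credit_history": 0.9,
    "contract_fulfilment": 0.9,
    "personal_characteristics": 1.0,
    "behavior_and_preference": 0.3,      # "ten hours of video games a day"
    "interpersonal_relationships": 0.6,  # a friend grumbled about the state
}))  # -> 827: one idle hobby and one outspoken friend cost real points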

Black mirror

How do incentives work?

Well, just like the “Nosedive” episode of Black Mirror, there are big benefits for model citizens:

  • 600 points: Congrats! Take out a Just Spend loan of up to 5,000 yuan (for use on the scheme’s partner sites).
  • 650 points: Hurrah! You can rent a car without placing a deposit, enjoy faster check-ins at hotels and even use the VIP check-in at Beijing Airport.
  • 666+ points: There’s nothing sinister about this threshold! Enjoy! You can take out a loan of up to 50,000 yuan (from a partner organization).
  • 700 points: Yowzers! You can go to Singapore without armfuls of supporting documentation.
  • 750 points: Big dog! You can be fast-tracked in applying for a pan-European Schengen visa.

What about bad citizens?

If you fall short of government expectations, you can expect to know about it. Here’s how they plan to lower your quality of life:

  • Difficulty renting cars
  • Poor employment opportunities (including being forbidden from some jobs)
  • Issues borrowing money from legitimate lenders
  • Slower internet speeds
  • Restricted access to restaurants, nightclubs and golf clubs
  • Less likely to get a date (high-scoring profiles are more prominent on dating websites)
  • Removal of the right to travel freely abroad
  • Problems with securing rental accommodation
  • Restrictions enrolling children in certain schools

You can read more detail and commentary here, but I’ve tried to present the basics.

This system takes no excuses and makes no effort to collect feedback. If your score suffers a knock through no fault of your own, then it is simply “tough luck”. It’s not difficult to see how it will entrench disadvantage and, in all likelihood, create a delineated two-tier society.

If someone you’re connected to (perhaps a relative) reduces your score by behaving “inappropriately” online or over a messenger, this could lead to your being denied a job, which in turn will reduce your chances of gaining credit, getting a rental apartment, finding a partner, and so on. It’s difficult to escape the domino effect, or to imagine how an individual might recover enough to live a decent life in a system where each misdemeanor seems to lead to another compounding circumstance.

We can legitimately speculate that Chinese society, from 2020, will be one in which citizens heavily police each other, disconnect themselves (in every way) from the poor/low-scoring, report indiscretions at the drop of a hat for fear of association and reprisals, and adopt phoney behaviors in order to “game” their way to full state approval. Some have described it as a form of “nudging”, but nudge techniques still leave room for choice. This seems much more coercive.

Finally, some have argued that, although the Chinese SCS system seems extreme, it actually employs techniques that are already being used by internet giants to map our behaviors as we speak. The Chinese system simply adds a positive or negative valence to these actions and distills them into a single score. Therefore, it is worth considering which elements of SCS we find unpalatable – if any at all – and reflecting upon whether we already assent to, or participate in, similar evaluations…

Google search figures reveal interest in tech surged by 78% in the last 12 months

tech interest

I conducted some desk research today which I hoped would either reinforce or eliminate my hunch that general interest in all-things-tech is growing. Anyone who has read Daniel Kahneman’s fantastic book on the role of intuition in such judgments will know that the only way I can (possibly!) get away with making bold claims like “the general population are becoming more curious about the mechanisms of tech” is by somehow providing a statistical proof.

Enter Google Trends, and some rough – yet hopefully revealing – investigating on my part. Here’s where I got to:

Google trends

  • The popularity of the search term “what is Big Data” has increased by 54%
  • The popularity of the search term “what is an algorithm” has increased by 56%
  • The popularity of the search term “what is artificial intelligence” has increased by 83%
  • The popularity of the search term “what is machine learning” has increased by 132% (!)
  • The overall popularity of these tech-related search terms increased by 78%.

All the figures are over a 12-month period (Oct 2016 – Oct 2017), and my increases are based on Google Trends’ “interest over time” measure, which assigns each data point a value relative to peak popularity. It is also interesting to look at these same search terms over a longer period (see 3-year and 5-year charts), where the same trajectory can be seen quite neatly.
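For anyone wanting to poke at the same numbers, here is a rough sketch of the exercise using the unofficial pytrends client (an assumption on my part – Google Trends has no official API – and the exact date windows are mine):

```python
from pytrends.request import TrendReq

TERMS = ["what is big data", "what is an algorithm",
         "what is artificial intelligence", "what is machine learning"]

pytrends = TrendReq(hl="en-US")
for term in TERMS:
    # One term per payload, so each series is scaled against its own peak.
    pytrends.build_payload([term], timeframe="2016-10-01 2017-10-31")
    series = pytrends.interest_over_time()[term]
    # Crude change measure: compare the first and last ~month's averages.
    change = (series.tail(4).mean() / series.head(4).mean() - 1) * 100
    print(f"{term}: {change:+.0f}%")
```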

My methodology, of course, is unashamedly unsophisticated. The list of terms I have used is certainly not exhaustive, and I’m aware that words like “algorithm” are not exclusive to the tech lexicon. I figured that the “what is” prefix would generally denote a novice search, and I would probably defend this as an as-good-as-anything-else, finger-in-the-air way to gauge if searches are from new, enquiring minds. Nevertheless, as discussed, my objective was to find some indication that my original intuition was correct. This is not intended to reflect a rigorous and conclusive study…

So, what does it mean? Well, it seems to confirm something that we all think we already know. Namely, that tech is migrating from the nerdy peripheries to center stage. And if we can reasonably assume these searches imply a quest for knowledge, then we might use this to speculate about a future where tech knowledge is decentralized, and better diffused throughout broader society.

Instinctively, this feels like a good thing. So many ethical discussions about tech focus on worries about privacy, manipulation, and the imbalance of power. When we talk about tech in society, the conversation can often turn to doomsday scenarios. But the upward lines on these charts might tell us something different. An interested and informed general population might help mitigate ill effects in the next few years.

Furthermore, it’s easy to forget the many, many good things that are happening in tech which – with an increasingly engaged population – could truly benefit the whole of society. Just casually browsing the news this week, two very different stories caught my eye. The first was about 28-year-old James Green, who believes his life was saved by his Apple Watch, which alerted him to a sudden and extreme increase in his heart rate and prompted him to seek urgent help for what turned out to be a deadly pulmonary embolism. The second was about Pinchas Gutter, a Holocaust survivor and participant in the New Dimensions in Testimony project, which helps keep history alive by allowing (in this case) visitors to the Museum of Jewish Heritage in Toronto to interact with an image of Mr. Gutter, and ask questions about his experiences during the Nazi occupation of Poland.

In both cases – but in very different ways – technology, algorithms, data, and machine learning are being employed to save us, and to increase our awareness in ways that (I believe) can only be described as positive. The more we all understand about how these technologies work, the more likely it is that they will survive and thrive, and that new, similar ideas will evolve to the advantage of us all.

This is, obviously, an optimistic view. But I think when we’re talking about ethics it can be important not to artificially suppress a well-founded glimmer of hope where it occurs. Only time will tell how it all plays out.

Why would I want a VR headset?

vr

Earlier in the week I tweeted this:

“Genuine question: for what reason might an ordinary/modern household want a #VR headset? Assuming they don’t play video games or similar…?”

Now, I don’t have a very large band of followers (I’ve only recently started using Twitter with any purpose), but nevertheless I was shocked that the tweet was met with a stony silence (bar one “like”!). Even when I retweeted with a spectrum of related hashtags, I got nothing…

Perhaps it’s just that the limited number of bots that constitute my Twitter following don’t have much to say on the matter, but I can’t help thinking that this isn’t the easiest question to answer. Virtual Reality still hasn’t taken off, and I’m reluctant to agree with “experts” and Zuckerberg on the sticking points: price and portability.

Though I agree that the price can be extortionate when the headset is coupled with appropriate hardware, it is still the case that lots of things are expensive. If the rise of technology – and indeed, consumerism – has shown us anything, it’s that people will pay top dollar for desirable items, no matter how frivolous.

It’s a similar story when it comes to portability. VR sets certainly look cumbersome, and somewhat 80s, but without being a VR expert I feel sure that this becomes trivial if the experience is suitably immersive and fascinating. I must admit, I’m also slightly confused as to why I might need to transport it with enough regularity that portability becomes an issue. The idea of taking something with me which, by its very purpose, is designed to carry me off elsewhere is quite a strange one.

A Gartner expert has thrown in the suggestion that wiring might be putting potential buyers off, saying: “I can only imagine what that would be like for my retired parents. Someone is going to break a hip for sure.”

What a strange image. Retired parents? What would they be doing on their VR machine? This is the question to which I cannot find the answer…

These thoughts had been percolating for a little while when, on a recent flight from Boston to SF, I came across this video from LinkedIn Learning named “Virtual Reality Foundations”. It was the single most boring video I have ever watched, narrated by a man who can neither move his facial muscles nor shift his vocal tone, but I persevered (through most of it…) in order to see what wondrous potential uses I had overlooked.

I was largely underwhelmed. Though the technology is undoubtedly impressive, the arguments for its incorporation into various business practices felt weak. It’s as though the focus has been squarely on its creation, rather than its use. The most convincing application I’ve heard about is its use in medical care, which is excellent but isn’t likely to spike mass market sales.

We’re told the future will be full of VR, but it has already been noted that AR is snapping at its heels. It’s not difficult to see why. AR games like Pokémon GO and applications like Blippar can be social, interactive…collaborative even. However, when I stick a VR kit over most of my critical senses, I am cut off from anything other than virtual reality. VR is socially isolating.

Therefore, I continue to push the question: “why would I want a VR headset?”, and I’m actually asking for more than the selling points of a product. Even if VR was somewhat helpful with my work as a journalist, or a marked improvement on my experiences as a gamer, is this worth the solitude it imposes? And for that matter, the vulnerability that comes hand-in-hand with being cut-off from the real world?

We are – perhaps laudably, perhaps accidentally – resisting totally immersive experiences at the moment. We are refusing to walk into entire worlds constructed by engineers working for Google or Facebook. This must be causing much frustration in their respective camps…

Though the internet is – as we can all vouch – an extremely beguiling arena, its 2-dimensionality at least allows us to keep our toes anchored in the sobering waters of real reality. This means that companies whose models are built on advertising (like Facebook) have to compete with our surroundings and their distractions. If they can coax us into a world which cuts away other sensory disturbances – as with virtual reality – then presumably their adverts and product sales will have to compete less, and will win our wallets over more often.

It will be interesting to see if we, as a society, relent in the way tech companies hope. It is difficult to envisage it happening in the near future. Why do I need a VR headset? What for? Until the answer to that question is tantalizing enough to eclipse the isolation and vulnerability aspects of VR, we will (hopefully) remain in the real world, with all its helpful perspective.

Five concerns about government biometric databases and facial recognition

face recognition

Last Thursday, the Australian government announced its existing “Face Verification Service” would be expanded to include personal images from every Australian driver’s license and photo ID, as well as from every passport and visa. This database will then be used to train facial recognition technology so that law enforcers can identify people within seconds, wherever they may be – on the street, in shopping malls, car parks, train stations, airports, schools, and just about anywhere that surveillance cameras pop up…

Deep learning techniques will allow the algorithm to adapt to new information, meaning that it will have the ability to identify a face obscured by bad lighting or bad angles…and even one that has aged over several years.

This level of pervasive surveillance is obviously unprecedented, and is being heavily criticized by the country’s civil rights activists and law professors, who say that Australia’s “patchwork” privacy laws have allowed successive governments to erode citizens’ rights. Nevertheless, politicians argue that personal information abounds on the internet regardless, and that it is more important that measures are taken to deter and ensnare potential terrorists.

However worthy the objective, it is obviously important to challenge such measures by trying to understand their immediate and long-term implications. Here are five glaring concerns that governments mounting similar initiatives should undoubtedly address:

  1. Hacking and security breaches

The more comprehensive a database of information is, the more attractive it becomes to hackers. No doubt the Australian government will hire top security experts as part of this project, but the methods of those intent on breaching security parameters are forever evolving, and it is no joke trying to mount a defense. Back in 2014, the US Office of Personnel Management (OPM) had the personal information of 22 million current and former employees compromised in a Chinese hack, one of the biggest in history. Then-FBI Director James Comey said that the information included “every place I’ve ever lived since I was 18, every foreign travel I’ve ever taken, all of my family, their addresses.”

  2. Ineffective unless coverage is total

Using surveillance, citizen data and/or national ID cards to track and monitor people in the hopes of preventing terrorist attacks (the stated intention of the Aussie government) really requires total coverage, i.e. monitoring everyone all of the time. We know this because many states with mass (but not total) surveillance programs – like the US – have still suffered attacks, like the Boston Marathon bombing. Security experts are clear that targeted, rather than broad, surveillance is generally the best way to find those planning an attack, as most subjects are already on the radar of intelligence services. Perhaps Australia’s new approach aspires to some ideal notion of total coverage, but if it isn’t successful at achieving this, there’s a chance that malicious parties could evade detection by a scheme that focuses its attentions on registered citizens.

  3. Chilling effect

Following that last thought through, in the eyes of some, there is a substantial harm inflicted by this biometrically-based surveillance project: it treats all citizens and visitors as potential suspects. This may seem like a rather intangible consequence, but that isn’t necessarily the case. Implementing a facial recognition scheme could, in fact, have a substantial chilling effect. This means that law-abiding citizens may be discouraged from participating in legitimate public acts – for example, protesting the current government administration – for fear of legal repercussions down the line. Indeed, there are countless things we may hesitate to do if we have new concerns about instant identifiability…

  4. Mission creep

Though current governments may give their reassurances about the respectful and considered use of this data, who is to say what future administrations may wish to use it for? Might their mission creep beyond national security, and deteriorate to the point at which law enforcement use facial recognition at will to detain and prosecute individuals for very minor offenses? Might our “personal file” be updated with our known movements so that intelligence services have a comprehensive history of where we’ve been and when? Additionally, might the images used to train and update algorithms start to come from non-official sources like personal social media accounts and other platforms? Undoubtedly, it is already easy to build up a comprehensive file on an individual using publicly available data, but many would argue that governments should require a rationale – or even permission – for doing so.

  5. False positives

As all data scientists know, algorithms working with massive datasets are likely to produce false positives, i.e. such a system as proposed may implicate perfectly innocent people for crimes they didn’t commit. This has also been identified as a problem with DNA databases. The sheer number of comparisons that have to be run when, for instance, a new threat is identified, dramatically raises the possibility that some of the identifications will be in error. These odds increase if, in the cases of both DNA and facial recognition, two individuals are related. As rights campaigners point out, not only is this potentially harrowing for the individuals concerned, it also presents a harmful distraction for law enforcement and security services who might prioritize seemingly “infallible” technological insight over other useful, but contradictory leads.
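Some back-of-envelope arithmetic shows why. Every number below is invented for illustration, but even a seemingly excellent false-match rate swamps investigators with innocent people once the database is large enough:

```python
# Base-rate arithmetic with invented numbers (for illustration only).
population = 16_000_000   # faces enrolled in the database (assumed)
real_targets = 100        # genuine suspects among them (assumed)
tpr = 0.99                # chance a real target is correctly flagged
fpr = 0.001               # a seemingly excellent 0.1% false-match rate

false_alarms = (population - real_targets) * fpr  # ~16,000 innocent people
true_hits = real_targets * tpr                    # ~99 real suspects

# Probability that any one flagged person is actually a target:
print(true_hits / (true_hits + false_alarms))     # ~0.006, well under 1%
```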

Though apparently most Australians “don’t care” about the launch of this new scheme, it is morally dangerous for governments to take general apathy as a green light for action. Not caring can be a “stand-in” for all sorts of things, and of course most people are busy leading their lives. Where individual citizens may not be concerned to thrash out the real implications of an initiative, politicians and their advisors have an absolute responsibility to do so – even where the reasoning they offer is of little-to-no interest to the general population.

Online dating’s hints of Stoicism

couple

Yesterday, I examined why some believe that data and the internet are conspiring to limit both our attention and the fields of our knowledge/interest. Today I’m presenting something entirely different, namely the results of a forthcoming report which demonstrate how the phenomenon of online dating is actively altering the fabric of society by expanding our worlds.

An overview of the paper is available here, but in a nutshell, researchers from the University of Essex and the University of Vienna have been studying the social connections between us all, and have revealed how so many of us meeting (and mating with!) complete strangers through online dating is having the effect of broadening out our whole society.

Economists Josue Ortega and Philipp Hergovich argue that, whereas just a couple of decades ago most new people arriving into our social circle (e.g. a new partner) were just a couple of connections away from us to begin with (i.e. someone you meet through existing friends, or that lives in your local community), now our digital “matchings” with random folk from the internet mean that for many of us, our social reach extends much further than it ever would have done – i.e. into completely separate communities.

Looking at the bigger picture, this means that our little clusters of friends/family/neighbors no longer exist in relative isolation because: “as far as networks go, this [dating strangers] is like building new highways between towns…just a few random new paths between different node villages can completely change how the network functions.” This bridging between communities is perhaps most vivid when considering the growing numbers of interracial couples. Indeed, the report’s authors claim that their model predicts almost complete racial integration following the emergence of online dating.
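The “new highways” intuition is easy to demonstrate with a toy network model (mine, not the economists’ – their paper uses a formal matching model). A few random long-range edges between tight-knit clusters slash the average distance between everyone:

```python
import random
import networkx as nx

# Toy model: 20 tight-knit "villages" of 10 people each, joined in a
# ring (networkx's connected caveman graph) - the pre-internet world.
G = nx.connected_caveman_graph(20, 10)
print(nx.average_shortest_path_length(G))  # long: many hops between villages

# Now add a handful of random long-range ties: strangers matched online.
random.seed(1)
nodes = list(G.nodes)
for _ in range(10):
    G.add_edge(*random.sample(nodes, 2))

print(nx.average_shortest_path_length(G))  # sharply shorter paths
```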

This put me in mind of the concentric circles of Stoic philosophy (further popularized by the modern philosopher Professor Martha Nussbaum). This simple image has existed for centuries and has been described by Nussbaum as a “reminder of the interdependence of all human beings and communities.” It is supposed to encapsulate some of the ancient ideas of belonging and cosmopolitanism, and is similar to the expanding circles of moral concern explained by the philosopher Peter Singer:

hierocles-concentric-circles

As its inventor, Hierocles, imagined it, the outermost circles should be pulled in as strangers are treated as friends, and friends as relatives. This happens as we increase our own efforts to recognize the habits, cultures, aims and aspirations of others and consider them akin to – and even constitutive of – our own.

In many respects, the evolution of the internet (as well as other media) has built upon the foundations of global travel to help us realize Hierocles’ rudimentary diagram. Though we still have strong ideas about personal, familial and community identity, the broadening out of our non-virtual social network – as exemplified by this work on online dating – means that our connections and concerns are not limited to the smaller, inner circles any longer. We increasingly draw those from the furthermost circles inward. As Singer argues, this must also mean that our ethical/moral concern emanates outward beyond our immediate vicinities.

Yet, not only can the internet (and in this case, data matching) bring those outer circles in, but in some ways it also seems to enable the distribution of “the self” and – more pertinently – a community…

I remember back in 2012, when I was working in PR and public affairs, there was a lot of talk about current “trends”. One of the ones that has stuck with me was nicknamed something like “patchwork people”. It referred, I think rather observantly, to the notion that so many of us feel better defined by the virtual/global communities we inhabit (perhaps communities based around hobbies or research or careers or fandom) than we do our immediate physical communities, within which we might rarely interact.

Whether the internet is allowing us to draw others into our understanding of the world, or whether we feel that our understanding of the world is mainly constituted by connections to others outside of the “natural” inner circles, there seems to be no doubt that the natural order of priority is evolving, and it will be fascinating to see how and if it continues to progress.

Are we being made into 21st century “puppets” by our online masters?

smartphone

In a recent Guardian article, ex-Google strategist James Williams describes the persuasive, algorithmic tools of the internet giants – like Facebook’s newsfeed, Google’s search results, etc. – as the “largest, most standardized and most centralized form of attentional control in human history”. He is not alone in his concern. Increasingly, more interest is being taken in the subtle tactics that social media and other platforms use to attract and keep our attention, guide our purchasing decisions, control what we read (and when we read it), and generally manipulate our attitudes and behaviors.

The success of platforms like Facebook and Twitter has really been down to their ability to keep us coming back for more. For this, they have turned habit formation into a technological industry. Notifications, “likes”, instant play videos, messengers, Snapstreaks – these are but a few of the ways in which they lure us in and, critically, keep us there for hours at a time. According to research, on average we touch or swipe our phones 2,617 times per day. In short, most of us are compulsive smartphone addicts. So much so, that whole new trends are being built around shunning phones and tablets with the hopes of improving our focus on other, arguably more important, things like physical interactions with our friends and family.

Nevertheless, such movements are unlikely to inspire an overnight U-turn when it comes to our online habits. There are whole new generations of people who have been born into this world and do not know anything other than smartphone/tablet compulsion. This point is made beautifully by Jean-Louis Constanza, a top telecoms executive who uploaded a YouTube video of his baby daughter prodding at images in a magazine. He comments: “In the eyes of my one-year old daughter, a magazine is a broken iPad. That will remain the case throughout her entire life. Steve Jobs programmed part of her operating system.”

Consequently, the internet giants (by which I mean Facebook, Google, Twitter, Apple, Snapchat, etc.) have an enormous amount of power over what we see and read, and consequently what we buy, how we vote, and our general attitudes to people, places, and things. Concerned parties argue that these companies’ current methods of subtly manipulating what they push out to us, and what they conceal from us, could equate to an abuse of their ethical responsibility. There is a power asymmetry which perhaps leads to Joe Public becoming de-humanized, as well as treated as a sort of “techno-subject” for the experimental methods of big tech.

Most of what allows these firms to know so much about us, and then to capitalize on this granular knowledge, is the constant feedback loop which supplies the metrics, which in turn enable the algorithms to change and adapt what we are served on the internet. This is something we willingly participate in. The feedback comprises data about what we’ve clicked, shared, browsed, liked, favorited, or commented on in the past. This same loop can also be used to anticipate what we might like, and to coerce us into new decisions or to react to different stimuli which – you guessed it – supplies them with even more information about “people like us”. The constant modification and refinement of our preferences, it is argued, not only creates a sort of filter bubble around us, but also stifles our autonomy by limiting the options being made available to us. Our view is personalized for us based on secret assumptions that have been made about us…and, of course, commercial objectives.

Karen Yeung, of the Dickson Poon School of Law at King’s College London, calls such methods of controlling what we’re exposed to “digital decision guidance processes” – also known by the rather jazzier title “algorithmic hypernudge”. The latter pays homage to the bestselling book “Nudge” by Cass Sunstein and Richard Thaler, which talks about the ways in which subtle changes to an individual’s “choice architecture” can cause desirable behavior changes without the need for regulation. For example, putting salads at eye level in a store apparently increases the likelihood we will choose salad, but doesn’t forbid us from opting for a burger. It is a non-rational type of influence. What makes the online version of nudge more pernicious, according to Yeung, is that a) the algorithms behind a nudge on Google or Facebook are not working towards some admirable societal goal, but rather are programmed to optimize profits, and b) the constant feedback and refinement allows for a particularly penetrating and inescapable personalization of the behavior change mechanisms. In short, it is almost like a kind of subliminal effect, leading to deception and non-rational decision-making which, in Yeung’s words: “express contempt and disrespect for individuals as autonomous.”
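To see how the feedback-and-refinement loop tilts a feed all by itself, here is a toy simulation (every number invented): a naive engagement-maximizing loop that mostly re-serves whatever earned clicks before ends up flooding the feed with the most impulsively clickable item:

```python
import random

random.seed(0)
# Invented "true" click probabilities: outrage is the most clickable.
TRUE_CLICK_RATE = {"outrage_post": 0.30, "friend_photo": 0.20,
                   "news_article": 0.10}
clicks = {item: 1 for item in TRUE_CLICK_RATE}  # small priors to
shown = {item: 2 for item in TRUE_CLICK_RATE}   # avoid division by zero

for _ in range(1000):
    if random.random() < 0.9:
        # Exploit: serve the item with the best observed click rate.
        item = max(shown, key=lambda k: clicks[k] / shown[k])
    else:
        # Occasionally explore something else.
        item = random.choice(list(shown))
    shown[item] += 1
    clicks[item] += random.random() < TRUE_CLICK_RATE[item]

print(shown)  # the feed has tilted overwhelmingly toward the outrage post
```

No one programmed “promote outrage”; the loop merely optimized engagement, and the tilt followed on its own.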

So, given that our ability to walk away is getting weaker, are we still in control? Or are we being manipulated by other forces sat far away from most of us in California offices? Silicon Valley “conscience” Tristan Harris is adamant about the power imbalance here: “A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today. I don’t know a more urgent problem than this.” Harris says there “is no ethics”, and the vast reams of information these giants are privy to could also allow them to exploit the vulnerable.

This is a big topic with lots of work to be done, but perhaps the key to understanding whether or not we are truly being manipulated is to understand in what way methods like algorithmic hypernudge undermine our reason (Williams says that they cause us to privilege impulse over reason). If we are being coerced into behaving in ways that fall short of our expectations or standards of human rationality, then it seems obvious there are follow-on ethical implications. If I do things against my will and my own better judgment – or my process of judgment is in some way compromised – it seems fair to say I am being controlled by external forces.

But perhaps that is not enough; after all, external influences have always played into our decision-making. From overt advertising, to good-smelling food, to the way something (or someone!) looks. We are already accustomed to making perfectly rational decisions on the basis of non-rational influences. Just because we behave in a way that we didn’t originally plan doesn’t mean the action is itself irrational. That isn’t to say that there isn’t something going on – apparently 87% of people go to sleep and wake up with their smartphones – it is just to point out that if we’re going to make claims of psychological manipulation, we also need to be clear about where this happens and how it manifests itself. Perhaps most importantly, we need to properly identify how the consequences differ significantly from other types of unconscious persuasion. When and how are these online influences harming us…? That’s the question.

The pros and cons of “big data” lending decisions

lending

Just as borrowing options are no longer limited to the traditional bank, increasingly new types of lenders are diverging from the trusted credit score system in order to flesh out their customer profiles and assess risk in new ways. This means going beyond credit/payment relevant data and looking at additional factors that could include educational merits and certifications, employment history, which websites you visit, your location, messaging habits, and even when you go to sleep.

Undoubtedly, this is the sort of thing that strikes panic into the hearts of many of us. How much is a creditworthy amount of sleep? Which websites should I avoid? Will they hold the fact I flunked a math class against me? Nevertheless, proponents of “big data” (it’s really just data…) risk assessment claim that this approach works in favor of those who might be suffering from the effects of a low credit score.

Let’s take a look…

Pros

The fact is, credit scores don’t work for everyone and they can be difficult to improve depending upon your position. Some folks, through no fault of their own, end up getting the raw end of the deal (perhaps they’re young, a migrant, or they’ve just had a few knockbacks in life). Now, given these newer models can take extra factors into account – including how long you spend reading contracts, considering application questions, and looking at pricing options – this additional information can add a further dimension to an application, which in turn may prompt a positive lending decision.

A recent article looked at the approach of Avant, a Chicago-based start-up lender, which uses data analytics and machine learning to “streamline borrowing for applicants whose credit scores fall below the acceptable threshold of traditional banks”. They do this by crunching an enormous 10,000+ data points to evaluate applicants. There isn’t much detail in terms of what these data points are, but doubtless they will draw upon the reams of publicly available information generated by our online and offline “emissions” – webpages browsed, where we shop, our various providers, social media profiles, friend groups, the cars we drive, our zip codes, etc etc etc. This allows the lender to spot patterns not “visible” to older systems – for example, where a potential customer has similar habits to those with high credit scores, but has a FICO score of 650 or below.
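As a purely illustrative sketch of the general technique – Avant’s actual model, features, and data points are not public – a lender might fit a simple classifier over behavioral proxies alongside the traditional score:

```python
# Illustrative only: Avant's real model and features are not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per past applicant:
# [fico_score, minutes_spent_reading_contract, pricing_pages_viewed]
X = np.array([
    [720, 12, 4],
    [640, 15, 6],   # low FICO, but careful, deliberate browsing behavior
    [610,  1, 1],
    [680,  9, 3],
])
y = np.array([1, 1, 0, 1])  # 1 = repaid their loan (invented history)

model = LogisticRegression().fit(X, y)

# A new applicant below a traditional FICO cut-off may still be approved
# if their behavioral proxies resemble those of past repayers.
print(model.predict_proba([[650, 14, 5]])[0, 1])
```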

The outcome – if all goes well – is that people are judged on factors beyond their credit habits, and for some individuals this will open up lending opportunities where they had previously received flat “nos”. Great news!

This technology is being made available to banks, or anyone who wants to lend. They may even eventually outmode credit scores, which were an attempt to model credit worthiness in a way that avoided discrimination and the unreliability of a bank manager’s intuition…

So, what are the downsides?

Cons

There are a number of valid concerns about this approach. The first regards what data they are taking, and what they take it to mean. No algorithm, however fancy, can use data points to understand all the complexities of the world. Nor can it know exactly who each applicant is as an individual. Where I went to school, where I worked, whether I’ve done time, how many children I have, what zip code I live in – these are all being used as mere proxies for certain behaviors I may or may not exhibit. In this case they are being used as proxies for whether or not I am a credit risk.

Why is this an issue? Well, critics of this kind of e-scoring, like Cathy O’Neil, author of Weapons of Math Destruction, argue that this marks a regression back to the days of the high street bank manager. In other words, instead of being evaluated as an individual (as with a FICO score, which predominantly looks at your personal debt and bill-paying records), you are being lumped in a bucket with “people like you”, before it is decided whether such people can be trusted to pay money back.

As O’Neil eloquently points out, the question becomes less about how you have behaved in the past, and more about how people like you have behaved in the past. Though proxies can be very reliable (after all, those who live in rich areas are likely to be less of a credit risk than those who live in poor neighborhoods), the trouble with this system is that when someone is unfairly rejected based on a series of extraneous factors, there is no feedback loop to help the model self-correct. Unlike FICO, you can’t redeem yourself and improve your score. So long as the model performs to its specification and helps the lender turn a profit, it doesn’t come to know or care about the individuals who are mistakenly rejected along the way.

There is an important secondary problem with leveraging various data sources to make predictions about the future. There is no way of knowing in every case how this data was collected. By this I mean to say, there is no way of knowing whether the data itself is already infused with bias, which consequently biases the predictions of the model. Much has been made of this issue within the domain of predictive policing, whereby a neighborhood which has been overzealously policed in the past is likely to have a high number of arrest records, which tells an unthinking algorithm to over-police it in the future, and so the cycle repeats… If poor data is being used to make lending decisions, this could have the effect of entrenching poverty, propagating discrimination, and actively working against certain populations.
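The policing cycle is easy to reproduce in a toy simulation (all numbers invented): two neighborhoods with identical true rates of wrongdoing, where patrols are naively allocated according to last year’s arrest counts, drift apart and stay apart:

```python
import random

random.seed(0)
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}  # identical by construction
patrols = {"A": 80, "B": 20}              # historical over-policing of A

for year in range(5):
    # Each patrol independently has a small chance of making an arrest.
    arrests = {n: sum(random.random() < TRUE_CRIME_RATE[n]
                      for _ in range(patrols[n])) for n in patrols}
    total = sum(arrests.values()) or 1
    # Naive model: allocate next year's 100 patrols by arrest share,
    # so past policing intensity feeds straight back into future policing.
    patrols = {n: round(100 * arrests[n] / total) for n in arrests}
    print(year, arrests, patrols)
```

Neighborhood A keeps generating more arrests simply because it receives more patrols, and the model reads that as confirmation.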

Lastly (and I’m not pretending these lists of pros and cons are exhaustive), there is a problem when it comes to the so-called “chilling effect”. If I do not know how I am being surveilled and graded, this might lead me to behave in unusual and overcautious ways. You can interrogate your FICO report if you want to, but these newer scoring systems use a multitude of other unknown sources to understand you. If you continue to get rejected, this might result in you changing certain aspects of your lifestyle to win favor. Might this culminate in people moving to different zip codes? Avoiding certain – perfectly benign – websites? Etcetera, etcetera. This could lead to the unhealthy manipulation of people desperate for funds…

So, is this new way of calculating lending risk a step forward or a relapse into the bad practices of the past? Well, having worked for the banking sector in years gone by, one thing still sticks in my mind when discussions turn to lending obstructions: lenders want to lend. It’s a fairly important part of their business model when it comes to making a profit (!). At face value, these newer disrupters are trying to use big data analytics to do exactly that. In a market dominated by the banks, they’re using new and dynamic ways to seek out fresh prospects who have been overlooked by the traditional model. It makes sense for everyone.

However, there is clearly the need for a cautionary note. Although this method is undoubtedly praiseworthy (and canny!), we should also remember that such tactics can breed discrimination regardless of intentions. This means that there needs to be some kind of built-in corrective feedback loop which detects mistakes and poorly reasoned rejections. Otherwise, we still have a system that continually lends to the “same type of people”, even if it broadens out who those people might be. The bank manager returns.

Having a fair and corrigible process also means that lenders need to be more open about the data metrics they are using. The world – and particularly this sector – has been on a steady track towards more transparency, not less. This is difficult for multiple reasons (which warrant another discussion entirely!) but, as important as it is to protect commercial sensitivity and prevent tactics like system gaming, it is also critical that applicants have some idea of what reasonable steps they can take to improve their creditworthiness when factors beyond their credit activity are at play.