Life imitating art: China’s “Black Mirror” plans for Social Credit System


Yesterday, both Wired and the Washington Post wrote extensively about the Chinese government’s plans to use big data to track and rank its citizens. The proposed Social Credit System (SCS) is currently being piloted, with a view to a full rollout in 2020. Like a real-life episode of Charlie Brooker’s dystopian Black Mirror series, the new system incentivizes social obedience whilst punishing behaviors deemed unbecoming of a “good citizen”. Here’s the (terrifying) rundown:

  • Each citizen will have a “citizen score” which will indicate their trustworthiness. This score will also be publicly ranked against the entire population, influencing prospects for jobs, loan applications, and even love.
  • Eight commercial partners are involved in the pilot, two of which are data giants with interests in social media and messaging, loans, insurance, payments, transport, and online dating.
  • Though the “complex algorithm” used by pilot partner Sesame Credit to generate a score has not been revealed, we do know there are five factors being taken into account (a sketch of how they might combine follows this list):
    1. Credit history
    2. Ability to fulfill contractual obligations
    3. The verification of “personal characteristics” (e.g. phone number, address, etc.)
    4. Behavior and preference
    5. Interpersonal relationships
  • “Behavior and preferences” considers patterns of behavior and how they reflect upon the individual. For example, someone who plays ten hours of video games each day would be considered idle, whereas someone who buys lots of diapers would be considered a responsible parent.
  • “Interpersonal relationships” allows assessors to rate interactions between friends and family. Nice messages about the government are likely to help your score, but it can also be negatively affected by things your friends post online.
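
To see how such a score might hang together mechanically, here is a minimal sketch in Python. The five factor names come from the list above, but the weights, the 0–1 ratings, and the combination rule are entirely my own assumptions – Sesame Credit’s actual algorithm is unpublished.

```python
# Hypothetical illustration only: Sesame Credit's real algorithm is unpublished.
# The weights and the 350-950 scale below are assumptions for this sketch.
FACTOR_WEIGHTS = {
    "credit_history": 0.35,
    "contract_fulfillment": 0.25,
    "personal_characteristics": 0.15,
    "behavior_and_preference": 0.15,
    "interpersonal_relationships": 0.10,
}

def citizen_score(factors: dict) -> float:
    """Combine per-factor ratings (each 0.0-1.0) into one score on a
    350-950 band, the range reported in coverage of the pilot."""
    weighted = sum(FACTOR_WEIGHTS[name] * factors[name] for name in FACTOR_WEIGHTS)
    return 350 + weighted * (950 - 350)

print(citizen_score({
    "credit_history": 0.8,
    "contract_fulfillment": 0.9,
    "personal_characteristics": 1.0,
    "behavior_and_preference": 0.4,   # ten hours of gaming a day, perhaps
    "interpersonal_relationships": 0.6,
}))  # -> 815.0
```

The point of the sketch is the last two factors: under a scheme like this, a low “interpersonal relationships” rating – driven by what your friends post – drags down the whole score just as surely as a missed payment.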


How do incentives work?

Well, just like the “Nosedive” episode of Black Mirror, there are big benefits for model citizens (a code sketch of the tiering follows this list):

  • 600 points: Congrats! Take out a Just Spend loan of up to 5,000 yuan (for use on the scheme’s partner sites).
  • 650 points: Hurrah! You can rent a car without leaving a deposit, enjoy faster check-ins at hotels, and even use the VIP check-in at Beijing Airport.
  • 666+ points: There’s nothing sinister about this threshold! Enjoy! You can take out a loan of up to 50,000 yuan (from a partner organization).
  • 700 points: Yowzers! You can go to Singapore without armfuls of supporting documentation.
  • 750 points: Big dog! You can be fast-tracked in applying for a pan-European Schengen visa.
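
Mechanically, the rewards read like a simple cumulative threshold lookup. Purely as an illustration – the perks are the reported ones above, but the code and its structure are my own invention – it might look like this:

```python
# Thresholds and perks as reported in coverage of the pilot; the lookup
# itself is just an illustration of cumulative, tiered rewards.
REWARD_TIERS = [
    (750, "fast-tracked pan-European Schengen visa application"),
    (700, "travel to Singapore without armfuls of documentation"),
    (666, "loan of up to 50,000 yuan from a partner organization"),
    (650, "deposit-free car rental, faster hotel and VIP airport check-in"),
    (600, "Just Spend loan of up to 5,000 yuan on partner sites"),
]

def perks_for(score: int) -> list:
    """Return every perk whose threshold the score meets or exceeds."""
    return [perk for threshold, perk in REWARD_TIERS if score >= threshold]

print(perks_for(680))  # unlocks the 666, 650, and 600 tiers, but not 700+
```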

What about bad citizens?

If you fall short of government expectations, you can expect to know about it. Here’s how they plan to lower your quality of life:

  • Difficulty renting cars
  • Poor employment opportunities (including being forbidden from some jobs)
  • Issues borrowing money from legitimate lenders
  • Slower internet speeds
  • Restricted access to restaurants, nightclubs and golf clubs
  • Less likely to get a date (high-scoring profiles are more prominent on dating websites)
  • Removal of the right to travel freely abroad
  • Problems with securing rental accommodation
  • Restrictions enrolling children in certain schools

You can read more detail and commentary here, but I’ve tried to present the basics.

This system takes no excuses and makes no effort to collect feedback. If your score suffers a knock, however unavoidable, it is simply “tough luck”. It’s not difficult to see how it will entrench disadvantage and, in all likelihood, create a delineated two-tier society.

If someone you’re connected to (perhaps a relative) reduces your score by behaving “inappropriately” online or over a messenger, this could lead to your being denied a job, which in turn will reduce your chances of gaining credit, a rental apartment, a partner, and so on. It’s difficult to escape the domino effect, or to imagine how an individual might recover enough to live a decent life in a system where each misdemeanor seems to lead to another compounding circumstance.

We can legitimately speculate that Chinese society, from 2020, will be one in which citizens heavily police each other, disconnect themselves (in every way) from the poor/low-scoring, report indiscretions at the drop of a hat for fear of association and reprisals, and adopt phony behaviors in order to “game” their way to full state approval. Some have described it as a form of “nudging”, but nudge techniques still leave room for choice. This seems much more coercive.

Finally, some have argued that, although the Chinese SCS seems extreme, it employs techniques that internet giants are already using to map our behaviors as we speak. The Chinese system simply adds a positive or negative valence to these actions and distills them into a single score. It is therefore worth considering which elements of SCS we find unpalatable – if any at all – and reflecting upon whether we already assent to, or participate in, similar evaluations.

The pros and cons of “big data” lending decisions


Just as borrowing options are no longer limited to the traditional bank, new types of lenders are increasingly diverging from the trusted credit-score system in order to flesh out their customer profiles and assess risk in new ways. This means going beyond credit and payment data and looking at additional factors that could include educational merits and certifications, employment history, which websites you visit, your location, messaging habits, and even when you go to sleep.

Undoubtedly, this is the sort of thing that strikes panic into the hearts of many of us. How much is a creditworthy amount of sleep? Which websites should I avoid? Will they hold the fact I flunked a math class against me? Nevertheless, proponents of “big data” (it’s really just data…) risk assessment claim that this approach works in favor of those who might be suffering from the effects of a low credit score.

Let’s take a look…

Pros

The fact is, credit scores don’t work for everyone, and they can be difficult to improve depending upon your position. Some folks, through no fault of their own, end up getting the raw end of the deal (perhaps they’re young, a migrant, or they’ve just had a few knockbacks in life). Given that these newer models can take extra factors into account – including how long you spend reading contracts, considering application questions, and looking at pricing options – this additional information can add a further dimension to an application, which in turn may prompt a positive lending decision.

A recent article looked at the approach of Avant, a Chicago-based start-up lender, which uses data analytics and machine learning to “streamline borrowing for applicants whose credit scores fall below the acceptable threshold of traditional banks”. They do this by crunching an enormous 10,000+ data points to evaluate applicants. There isn’t much detail on what these data points are, but doubtless they draw upon the reams of publicly available information generated by our online and offline “emissions” – webpages browsed, where we shop, our various providers, social media profiles, friend groups, the cars we drive, our zip codes, and so on. This allows the lender to spot patterns not “visible” to older systems – for example, where a potential customer has similar habits to those with high credit scores, but has a FICO score of 650 or below.
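
The article gives no detail on what Avant’s model actually looks like, so the following is only a sketch of how “alternative data” underwriting works in general. The features, training data, library choice (scikit-learn), and thresholds are all my assumptions, not a description of Avant’s pipeline.

```python
# A rough sketch of "alternative data" credit scoring - NOT Avant's actual
# system. Features, data, and thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for historical applicants: each row is one person described by
# many behavioral features (real systems reportedly use 10,000+ data points),
# labeled with whether they repaid.
X_train = rng.normal(size=(2000, 1000))
y_train = rng.integers(0, 2, size=2000)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

def decide(applicant_features, fico):
    """Approve if either the traditional score or the model clears its bar."""
    p_repay = model.predict_proba(applicant_features.reshape(1, -1))[0, 1]
    return "approve" if fico >= 650 or p_repay >= 0.8 else "decline"

# An applicant with a sub-threshold FICO score can still be approved if their
# behavioral profile resembles that of reliable past borrowers.
print(decide(rng.normal(size=1000), fico=610))
```

The important move is the `or`: the behavioral model gives sub-650 applicants a second route to approval, rather than replacing the traditional check outright.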

The outcome – if all goes well – is that people are judged on factors beyond their credit habits, and for some individuals this will open up lending opportunities where they had previously received flat “nos”. Great news!

This technology is being made available to banks, or anyone who wants to lend. It may even eventually outmode credit scores, which were themselves an attempt to model creditworthiness in a way that avoided discrimination and the unreliability of a bank manager’s intuition…

So, what are the downsides?

Cons

There are a number of valid concerns about this approach, the first of which regards what data is being taken, and what it is taken to mean. No algorithm, however fancy, can use data points to understand all the complexities of the world, nor can it know exactly who each applicant is as an individual. Where I went to school, where I worked, whether I’ve done time, how many children I have, what zip code I live in – these are all being used as mere proxies for behaviors I may or may not exhibit. In this case, they are proxies for whether or not I am a credit risk.

Why is this an issue? Well, critics of this kind of e-scoring, like Cathy O’Neil, author of Weapons of Math Destruction, argue that it marks a regression to the days of the high-street bank manager. In other words, instead of being evaluated as an individual (as with a FICO score, which predominantly looks at your personal debt and bill-paying records), you are lumped into a bucket with “people like you” before it is decided whether such people can be trusted to pay money back.

As O’Neil eloquently points out, the question becomes less about how you have behaved in the past and more about how people like you have behaved in the past. Though proxies can be reliable (after all, those who live in rich areas are likely to be less of a credit risk than those who live in poor neighborhoods), the trouble with this system is that when someone is unfairly rejected on the basis of extraneous factors, there is no feedback loop to help the model self-correct. Unlike with FICO, you can’t redeem yourself and improve your score. So long as the model performs to its specification and helps the lender turn a profit, it never comes to know or care about the individuals who are mistakenly rejected along the way.
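
One way to make the missing feedback loop concrete: a lender only ever observes repayment for the applicants it approves, so a rejected applicant generates no outcome data at all, and a mistaken rejection can never teach the model anything. A toy sketch (every name and number below is invented):

```python
# Illustration of the missing feedback loop: repayment is only observed for
# approved applicants, so a wrong rejection never corrects the model.
def lending_round(applicants, model_predicts_repay, truly_repays):
    observed_outcomes = []  # the only data a retrained model would ever see
    silent_mistakes = 0     # creditworthy people rejected, invisibly
    for person in applicants:
        if model_predicts_repay(person):
            observed_outcomes.append((person, truly_repays(person)))
        elif truly_repays(person):
            # No loan, no repayment record, no correction. With FICO you can
            # rebuild a score through your own behavior; here you simply
            # vanish from the data.
            silent_mistakes += 1
    return observed_outcomes, silent_mistakes

# Toy world: the model rejects everyone from zip code "B" - a mere proxy -
# even though half of them would have repaid.
applicants = [("A", i) for i in range(10)] + [("B", i) for i in range(10)]
outcomes, missed = lending_round(
    applicants,
    model_predicts_repay=lambda person: person[0] == "A",
    truly_repays=lambda person: person[1] % 2 == 0,
)
print(len(outcomes), "labeled examples;", missed, "creditworthy rejections unseen")
# -> 10 labeled examples; 5 creditworthy rejections unseen
```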

There is an important secondary problem with leveraging various data sources to make predictions about the future: there is no way of knowing, in every case, how the data was collected. By this I mean there is no way of knowing whether the data itself is already infused with bias, which consequently biases the predictions of the model. Much has been made of this issue within the domain of predictive policing, whereby a neighborhood that has been overzealously policed in the past is likely to have a high number of arrest records, which tells an unthinking algorithm to over-police it in the future, and so the cycle repeats… If poor data is used to make lending decisions, it could have the knock-on effect of entrenching poverty, propagating discrimination, and actively working against certain populations.
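
That policing cycle is simple enough to simulate. In the toy model below – all numbers invented – two neighborhoods offend at exactly the same underlying rate, yet because recorded arrests scale with patrol presence, the “data-driven” allocator keeps confirming the original skew:

```python
# Toy simulation of the bias feedback cycle: identical true offense rates,
# unequal historical policing, and an allocator that follows arrest counts.
true_offense_rate = {"north": 0.05, "south": 0.05}  # identical by construction
patrols = {"north": 8, "south": 2}                   # historical over-policing

for year in range(5):
    # Recorded arrests track patrol presence, not underlying crime.
    arrests = {hood: patrols[hood] * true_offense_rate[hood] * 100
               for hood in patrols}
    # "Data-driven" reallocation: send next year's patrols where arrests were.
    total = sum(arrests.values())
    patrols = {hood: round(10 * arrests[hood] / total) for hood in arrests}
    print(f"year {year}: arrests={arrests} -> patrols={patrols}")

# The 8/2 split reproduces itself every year, because the allocator is
# consuming data generated by its own previous allocation.
```

Swap biased lending records in for arrest records and the dynamic is the same: yesterday’s skewed decisions become tomorrow’s “objective” evidence.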

Lastly (and I’m not pretending these lists of pros and cons are exhaustive), there is the problem of the so-called “chilling effect”. If I do not know how I am being surveilled and graded, this might lead me to behave in unusual and overcautious ways. You can interrogate your FICO report if you want to, but these newer scoring systems use a multitude of other, unknown sources to understand you. If you keep getting rejected, you might change aspects of your lifestyle to win favor. Might this culminate in people moving to different zip codes? Avoiding certain – perfectly benign – websites? This could lead to the unhealthy manipulation of people desperate for funds…

So, is this new way of calculating lending risk a step forward, or a relapse into the bad practices of the past? Having worked in the banking sector in years gone by, one thing still sticks in my mind whenever discussion turns to obstacles to lending: lenders want to lend. It’s a fairly important part of their business model when it comes to making a profit (!). At face value, these newer disrupters are trying to use big data analytics to do exactly that. In a market dominated by the banks, they’re using new and dynamic ways to seek out fresh prospects who have been overlooked by the traditional model. It makes sense for everyone.

However, a cautionary note is clearly needed. Although this method is undoubtedly praiseworthy (and canny!), we should also remember that such tactics can breed discrimination regardless of intentions. This means there needs to be some kind of built-in corrective feedback loop that detects mistakes and poorly reasoned rejections. Otherwise, we still have a system that continually lends to the “same type of people”, even if it broadens out who those people might be. The bank manager returns.

Having a fair and corrigible process also means that lenders need to be more open about the data and metrics they are using. The world – and particularly this sector – has been on a steady track towards more transparency, not less. This is difficult for multiple reasons (which warrant another discussion entirely!), but as important as it is to protect commercial sensitivity and prevent tactics like system gaming, it is also critical that applicants have some idea of what reasonable steps they can take to improve their creditworthiness when factors beyond their credit activity are at play.