AI, Showbiz, and Cause for Concern (x2)


A “Virtual” or “Digital” Human. Credit: Digital Domain

The #AIShowBiz Summit 3.0 – which took place last month – sits apart from the often dizzying array of conferences vying for the attention of Bay Area tech natives. Omnipresent AI themes like “applications for deep learning”, “algorithmic fairness”, and “the future of work” are set aside in favor of rather more dazzling conversations on topics like “digital humans”, “AI and creativity”, and “our augmented intelligence digital future.”

It’s not that there’s anything wrong with the big recurring AI themes. On the contrary, they are front-and-center for very good reason. It’s that there’s something just a little beguiling about this raft of rather more spacey, futuristic conversations delivered by presenters who are unflinchingly “big picture”, while still preserving necessary practical and technical detail.


Peer pressure: An unintended consequence of AI

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


Last winter, Kylie Jenner tweeted that she had stopped using Snapchat, and almost immediately the company’s shares dropped six percent, losing $1.3 billion in value. Her seemingly innocent comment led investors to believe that the 20-year-old’s 25 million followers would do the same, and that the knock-on effect would seal the social media app’s fate as a “has been” among its key demographic of younger women.

This astonishing event demonstrates in technicolor how the notion of influence is evolving, latterly taking on a new significance. In the age of technology, though influence is still associated with power, it is no longer the exclusive preserve of “the Powerful”—i.e. those in recognized positions of authority, like bankers, lawyers, or politicians.


Parents beware: subtle dangers on the internet


These days we are often told what we shouldn’t be afraid of when it comes to technological innovation. Efforts to calm us come from all corners, and we can find a healthy dose of reassurance about any given advancement with a quick Google search.

Just this week, an excellent article by Oxford professor Luciano Floridi was recirculated on the internet. In it, Floridi argues vociferously against those “Singularitarians” who worry about robot rule, and humans being overthrown by AI in the near-ish future. At the same time, the Independent wrote in some detail to comfort us with regard to the non-threatening nature of the iPhone X’s new facial recognition feature. This came after a wave of speculation about Apple’s plans to build a mass database of facial information. What’s more, we have also seen an increase in cuddly or moving descriptions of new tech, like this inspiring article which examines how VR can be used to help both deaf and hearing individuals understand the other’s experience of music.

I am not suggesting for a second that we shouldn’t be reassured. I think these articles do very important things, whether it be shooting down unsettling, dystopian predictions, or championing some of the wonderful work that is being done to improve our lived experiences with technology. But it is perhaps because of this recent proliferation of positive tech news (if we set aside the politics of tech firms and focus on the tech itself…) that I was shocked to read this article late yesterday. Not least because it reveals a bogeyman that I wasn’t even vaguely aware of…

10 real-world ethical concerns for virtual reality


There are lots of emerging ideas about how virtual reality (VR) can be used for the betterment of society – whether it be inspiring social change, or training surgeons for delicate medical procedures.

Nevertheless, as with all new technologies, we should also be alive to any potential ethical concerns that could emerge as social problems further down the line. Here I list just a few issues that should undoubtedly be considered before we brazenly forge ahead in optimism.

1.   Vulnerability

When we think of virtual reality, we automatically conjure images of clunky headsets covering the eyes – and often the ears – of users in order to create a fully immersive experience. There are also VR gloves, and a growing range of other accessories and attachments. Though the resultant feel might be hyper-realistic, we should also be concerned for people using these in the home – especially alone. Having limited access to sense data leaves users vulnerable to accidents, home invasions, and any other misfortunes that can come of being totally distracted.

2.   Social isolation

There’s a lot of debate around whether VR is socially isolating. On the one hand, the whole experience takes place within a single user’s field-of-vision, which obviously excludes others from physically participating alongside them. On the other hand, developers like Facebook have been busy inventing communal meeting places like Spaces, which help VR users meet and interact in a virtual social environment. Though – as argued – the latter could be helpfully utilized by the introverted and lonely (e.g. seniors), there’s also a danger that it could become the lazy and dismissive way of dealing with these issues. At the other end of the spectrum, forums like Spaces may also end up “detaching” users by leading them to neglect their real-world social connections. Whatever the case, studies show that real face-to-face interactions are a very important factor in maintaining good mental health. Substituting them with VR would be ill-advised.

3.   Desensitization

It is a well-acknowledged danger that being thoroughly and regularly immersed in a virtual reality environment may lead some users to become desensitized in the real world – particularly if the VR is one in which the user experiences or perpetrates extreme levels of violence. Desensitization means that the user may be unaffected (or less affected) by acts of violence, and could fail to show empathy as a result. Some say that this symptom is already reported amongst gamers who choose to play first-person shooters or role-play games with a high degree of immersion.

4.   Overestimation of abilities

Akin to desensitization is the problem of users overestimating their ability to perform virtual feats just as well in the real world. This is especially applicable to children and young people who could take it that their expertise in tightrope walking, parkour, or car driving will transfer seamlessly over to non-virtual environments…

5.   Psychiatric effects

There could also be more profound and dangerous psychological effects on some users (although clearly there are currently a lot of unknowns). Experts in neuroscience and the human mind have spoken of “depersonalization”, which can result in a user believing their own body is an avatar. There is also a pertinent worry that VR might be swift to expose psychiatric vulnerabilities in some users, and spark psychotic episodes. Needless to say, we must identify the psychological risks and symptoms ahead of market saturation, if that is an inevitability.

6.   Unpalatable fantasies

If there’s any industry getting excited about virtual reality, it’s the porn industry (predicted to be the third largest VR sector by 2025, after gaming and NFL-related content). The website Pornhub is already reporting that views of VR content are up 225% since it debuted in 2016. This obviously isn’t an ethical problem in and of itself, but it does become problematic if/when “unpalatable” fantasies become immersive. We have to ask: should there be limitations on uber-realistic representations of aggressive, borderline-pedophilic, or other more perverse types of VR erotica? Or, outside of the realm of porn, to what extent is it okay to make a game out of the events of 9/11, as is the case with the 08.46 simulator?

7.   Torture/virtual criminality

There’s been some suggestion that VR headsets could be employed by the military as a kind of “ethical” alternative to regular interrogatory torture. Whether this is truth or rumor, it nevertheless establishes a critical need to understand the status of pain, damage, violence, and trauma inflicted by other users in a virtual environment – be it physical or psychological. At what point does virtual behavior constitute a real-world criminal act?

8.   Manipulation

Attempts at corporate manipulation via flashy advertising tricks are not new, but up until now they’ve been 2-dimensional. As such, they’ve had to work hard to compete with our distracted focus: phones ringing, babies crying, traffic, conversations, music, noisy neighbors, interesting reads, and all the rest. With VR, commercial advertisers essentially have access to our entire surrounding environment (which some hold has the power to control our behavior). This ramps up revenue for developers, who now have (literally) whole new worlds of blank space upon which they can sell advertising. Commentators are already warning that this could lead to new and clever tactics involving product placement, brand integration and subliminal advertising.

9.   Appropriate roaming and recreation

One of the most exciting selling points of VR is that it can let us roam the earth from the comfort of our own homes. This is obviously a laudable, liberating experience for those who are unable to travel. As with augmented reality, however, we probably need to have conversations about where it is appropriate to roam and/or recreate as a virtual experience. Is it fine for me to wander through a recreation of my favorite celebrity’s apartment (I can imagine many fans would adore the idea!)? Or peep through windows of homes and businesses in any given city street? The answers to some of these questions may seem obvious to us, but we cannot assume that the ethical parameters of this capability are clear to all who may use or develop it.

10.   Privacy and data

Last, but not least, the more we “merge” into a virtual world, the more of ourselves we are likely to give away. This might mean more and greater privacy worries. German researchers have raised the concern that if our online avatars mirror our real-world movements and gestures, these “motor intentions” and the “kinetic fingerprints” of our unique movement signatures can be tracked, read, and exploited by predatory entities. Again, it’s clear that there needs to be an open and consultative dialogue with regards to what is collectable, and what should be off-limits in terms of our virtual activities.
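To make this concern concrete, here is a deliberately crude, purely illustrative sketch of the idea behind a “kinetic fingerprint”: summarize a motion trace, then match summaries across supposedly anonymous sessions. Every number and helper below is invented; real movement re-identification research uses far richer models than this.

```python
# Hypothetical sketch of a "kinetic fingerprint" (illustration only).
# The traces, the summary statistics, and the tolerance are all invented;
# this is not any real VR platform's tracking method.
import statistics

def motion_signature(trace):
    """Summarize a head-movement trace (e.g. yaw samples) as (mean, stdev)."""
    return (statistics.mean(trace), statistics.stdev(trace))

def same_user(sig_a, sig_b, tolerance=0.05):
    """Naive matcher: signatures within tolerance are flagged as one person."""
    return all(abs(a - b) < tolerance for a, b in zip(sig_a, sig_b))

session_one = [0.12, 0.18, 0.11, 0.16, 0.14]  # "anonymous" VR session
session_two = [0.13, 0.17, 0.12, 0.15, 0.14]  # later session, same habits

print(same_user(motion_signature(session_one),
                motion_signature(session_two)))  # True: re-identified
```

Even this toy version re-links two sessions from five samples each; the point is how little signal such tracking would need, not how it would actually be built.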

This list is not exhaustive, and some of these concerns will be proven groundless in good time. Regardless, as non-technicians and future users, we are right to demand full and clear explanations as to how these tripwires will be averted or mitigated by VR companies.

Facebook accused of limiting, not championing, human interaction


Facebook have been in the press a lot this week, and there has been a flurry of articles asking how the company might be brought back from the brink. The New York Times asked a panel of experts “How to Fix Facebook?”. Some of the responses around the nature of – and limitations to – our user interactions on the social network struck me as very interesting.

Jonathan Albright, Research Director at Columbia University’s Tow Center for Digital Journalism, writes:

“The single most important step Facebook — and its subsidiary Instagram, which I view as equally important in terms of countering misinformation, hate speech and propaganda — can take is to abandon the focus on emotional signaling-as-engagement.

This is a tough proposition, of course, as billions of users have been trained to do exactly this: “react.”

What if there were a “trust emoji”? Or respect-based emojis? If a palette of six emoji-faced angry-love-sad-haha emotional buttons continues to be the way we engage with one another — and how we respond to the news — then it’s going to be an uphill battle.

Negative emotion, click bait and viral outrage are how the platform is “being used to divide.” Given this problem, Facebook needs to help us unite by building new sharing tools based on trust and respect.”

Kate Losse, an early Facebook employee and author of “The Boy Kings: A Journey into the Heart of the Social Network”, suggested:

“It would be interesting if Facebook offered a “vintage Facebook” setting that users could toggle to, without News Feed ads and “like” buttons. (Before “likes,” users wrote comments, which made interactions more unique and memorable.)

A “vintage Facebook” setting not only would be less cluttered, it would refocus the experience of using Facebook on the people using it, and their intentions for communication and interaction.”

According to recent reports, “reactions” are being algorithmically prioritized over “likes”. Why? Well, we might suppose, for the same reason most new features are developed: more and greater insight. Specifically, more insight into our emotions pertaining to items in our newsfeed.

Understanding the complexity of something we type in words is difficult. Systems have to understand tone, sarcasm, slang, and other nuances. Instead, “angry”, “sad”, “wow”, “haha”, and “love” make us much easier to interpret. Our truthful reactions are distilled into proxy emojis.
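To make the contrast concrete, here is a minimal, purely illustrative sketch (the weights and the helper function are invented, and do not reflect Facebook’s actual internals): collapsing five buttons into an emotion signal is a one-line dictionary lookup, whereas a written comment would require genuine language understanding.

```python
# Illustration only: how five reaction buttons collapse into one easy metric.
# The weights and scoring are invented, not Facebook's actual mechanics.

REACTION_SIGNAL = {"love": 1.0, "haha": 0.6, "wow": 0.3, "sad": -0.6, "angry": -1.0}

def emotion_score(reactions):
    """Collapse a post's reaction counts into a single scalar emotion signal."""
    total = sum(reactions.values())
    if total == 0:
        return 0.0
    return sum(REACTION_SIGNAL[name] * count
               for name, count in reactions.items()) / total

# A typed comment like "Well, THAT went well..." needs tone and sarcasm
# detection; the buttons need only a lookup.
print(emotion_score({"angry": 40, "sad": 10, "love": 2}))  # ≈ -0.85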

I see two problems with this:

  • The first is that we are misunderstood as users. Distilling all human emotions/reactions into five big nebulous ones is unhelpful. Like many of the old (and largely discredited) psychometric test questions, these reactions allow us to cut complexity out of our own self-portrayal. This means that, down the line, the data analytics will purport to show more than they actually do. They’ll have a strange and skewed shadow of our feelings about the world. We’ll then, consequently, be fed things that “half match” our preferences and – potentially – change and adapt our preferences to match those offerings. In other words, if we’re already half-misinformed, politically naïve, prejudiced etc., we can go whole hog…
  • The second problem is that discouraging us from communicating our feelings using language is likely to affect our ability to express ourselves using language. This is more of a worry for those growing up on the social network. If I’m not forced to articulate when I think something is wonderful, or patronizing, or cruel, and instead resort to emojis (“love” or “angry”), then the danger is that I begin to think in terms of mono-emotions. With so many young people spending hours each day on social media, this might not be as far-fetched as it sounds.

If there’s a question mark over whether social networks cause behavior change, then it’s fine to be unbothered about these prospects. But given that Silicon Valley insiders have recently claimed the stats show our minds “have been hijacked”, perhaps it’s time to pay some heed to these mechanisms of manipulation.

Will robots make us more robotic?


Anyone who has taken public transport in San Francisco will tell you: it is not strange and unusual to encounter the strange and unusual. Every day is one of eyebrow-raising discovery. That said, I surprised myself recently when I became slightly transfixed – and perhaps a little perplexed – listening to someone narrate a text message into their smartphone.

The expressionless and toneless girl carefully articulated each word: “I can’t believe she told you”, she said aloud like a Dalek, “LOL”. How odd it seemed to see someone sat, stony-faced, proclaiming that they were “laughing out loud” when nothing could be further from the truth.

Now, I have a limited background in acting, and I worked in-and-around politics for several years, so believe me when I say I’ve heard people speak robotically without any conviction or inflection before. But those people were reading scripts, or trying to remember their lines-to-take, or trotting out meaningless straplines. They weren’t expressing their own thoughts and feelings to a friend.

Then yesterday, I stumbled across this blog about the evolution of interactions in the age of AI. In a rather sweet anecdote, the author talks about ordering his Alexa to “turn off the lights” and his young son questioning his manners and the absence of “please”. He goes on to ponder the future and how we might incorporate manners and niceties when instructing our digital assistants, lest we inhibit their range by limiting the vocabulary we use with them.

My thoughts went elsewhere. Though AI is developing to understand our expressions and feelings, it feels like we also have some evolving to do before we become used to addressing artificial systems as we would other humans. Moreover, with voice instructions and narrated text, there seems little need for sincerity or emotion. The text itself is either directly indicative – or free – of sentiment.

Where I’m getting to is this: might we humans begin to develop a specific type of robotic tone exclusively for non-social instructive language? For Alexa and co.? We already have a tone and style we reserve for babies, pets and (sometimes) older relatives. Will a whole new style of monosyllabic speech emerge for the purposes of closing garage doors, sending text messages, ordering plane tickets and browsing TV channels? A sort of anti-baby talk?

It’s fun to speculate about these things, and I’m certainly no linguist, but it’s hard to imagine that the voice-activated future we’re promised won’t have some implications for modes of speech. Will we flatten our language or, to the contrary, become hyper-expressive? We’re yet to find out, but we can only hope that the beautiful languages of the world aren’t somehow roboticized as we adapt to hands-free technologies and AI assistants.

Will Facebook push non-sponsored content to the margins?


Facebook are currently running trials which demote non-promoted content to a secondary feed, according to the Guardian. The experiment is being run in six countries – including Slovakia, Serbia, and Sri Lanka – and apparently follows calls from users who want to be able to see their friends’ posts more easily. The test involves two feeds, with the primary feed exclusively featuring posts by friends alongside paid-for content.

Already, smaller publishers, Facebook pages, and Buzzfeed-like sites that rely upon organic social traffic are reporting a drop in engagement of 60-80%.

The article says:

“Notably, the change does not seem to affect paid promotions: those still appear on the news feed as normal, as do posts from people who have been followed or friended on the site. But the change does affect so called “native” content, such as Facebook videos, if those are posted by a page and not shared through paid promotion.”

Experts predict that the move will hit much of the current video content which makes it into our feeds, plus the likes of the Huffington Post and Business Insider. Quite simply, Facebook seems to want to cleanse our feeds of low value content, and encourage media outlets to pay up…

Though the social media platform states it has no plans to roll this out globally, we might reasonably assume that this trial serves some purpose. And who can blame Facebook for experimenting, given the backlash they’ve had recently over so-called “fake news”? The trouble is, here we have another example of an internet giant acting to narrow our online field of vision: if we are only served promoted content, then we are served a skewed and unrepresentative view of the world. The dollar dictates, rather than organic enthusiasm…

Additionally, though our feeds are often cluttered with fake news, mindless cat videos and other questionable content, amongst non-promoted material we also find important movements. These range from social campaigns and awareness drives, to challenging and diverse voices that diverge from mainstream opinion. Some are pernicious, but many are precious, and Facebook ought to be careful they don’t throw the baby out with the bath water.

It’s an admirable thing to respond to the wants and needs of users, and we shouldn’t be too quick to criticize Facebook here. We just need to be sure that giving clarity doesn’t mean imposing homogeneity.

Life imitating art: China’s “Black Mirror” plans for Social Credit System


Yesterday, both Wired and the Washington Post wrote extensively about plans the Chinese government have to use big data to track and rank their citizens. The proposed Social Credit System (SCS) is currently being piloted with a view to a full rollout in 2020. Like a real-life episode of Charlie Brooker’s dystopian Black Mirror series, the new system incentivizes social obedience whilst punishing behaviors which are not deemed becoming of a “good citizen”. Here’s the (terrifying) rundown:

  • Each citizen will have a “citizen score” which will indicate their trustworthiness. This score will also be publicly ranked against the entire population, influencing prospects for jobs, loan applications, and even love.
  • Eight commercial partners are involved in the pilot, two of which are data giants with interests in social media and messaging, loans, insurance, payments, transport, and online dating.
  • Though the “complex algorithm” used to generate a score by partner Sesame Credit has not been revealed, we do know there are five factors being taken into account (a hypothetical sketch of how such a weighted score might work follows this list):
    1. Credit history
    2. Ability to fulfil contract obligations
    3. The verification of “personal characteristics” (e.g. phone number, address etc.)
    4. Behavior and preference
    5. Interpersonal relationships
  • “Behavior and preferences” considers patterns of behavior and how they reflect upon the individual. For example, someone who plays ten hours of video games each day would be considered idle, whereas someone who buys lots of diapers would be considered a responsible parent.
  • “Interpersonal relationships” allows assessors to rate interactions between friends and family. Nice messages about the government are likely to help your score, but it can also be negatively affected by things your friends post online.
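Since the real algorithm is unpublished, anything concrete is guesswork. The sketch below is purely hypothetical – every weight and rating is invented – but it illustrates how five rated factors could collapse into a single number on Sesame Credit’s reported 350–950 scale:

```python
# Purely hypothetical sketch of a weighted "citizen score". The five factor
# names come from the article; all weights and values here are invented.

FACTOR_WEIGHTS = {
    "credit_history": 0.35,
    "contract_fulfilment": 0.25,
    "personal_characteristics": 0.15,
    "behavior_and_preference": 0.15,
    "interpersonal_relationships": 0.10,
}

def citizen_score(ratings):
    """Combine per-factor ratings (each 0-1) into a 350-950 scale score."""
    weighted = sum(FACTOR_WEIGHTS[name] * value for name, value in ratings.items())
    return 350 + weighted * 600  # Sesame Credit scores reportedly span 350-950

print(citizen_score({
    "credit_history": 0.9,
    "contract_fulfilment": 0.8,
    "personal_characteristics": 1.0,
    "behavior_and_preference": 0.85,   # the "responsible parent" profile
    "interpersonal_relationships": 0.7,
}))  # ≈ 868
```

On numbers like these, one low factor – say, a friend’s ill-judged post dragging “interpersonal relationships” down – visibly depresses the whole score, which is exactly the domino effect discussed below.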


How do incentives work?

Well, just like the “Nosedive” episode of Black Mirror, there are big benefits for model citizens:

  • 600 points: Congrats! Take out a Just Spend loan of up to 5,000 yuan (for use on the scheme’s partner sites).
  • 650 points: Hurrah! You can rent a car without placing a deposit, enjoy faster check-ins at hotels, and even get VIP check-in at Beijing Airport.
  • 666 points +: There’s nothing sinister about this threshold! Enjoy! You can take out a loan of up to 50,000 yuan (from a partner organization).
  • 700 points: Yowzers! You can go to Singapore without armfuls of supporting documentation.
  • 750 points: Big dog! You can be fast-tracked in applying for a pan-European Schengen visa.

What about bad citizens?

If you fall short of government expectations, you can expect to know about it. Here’s how they plan to lower your quality of life:

  • Difficulty renting cars
  • Poor employment opportunities (including being forbidden from some jobs)
  • Issues borrowing money from legitimate lenders
  • Slower internet speeds
  • Restricted access to restaurants, nightclubs and golf clubs
  • Less likely to get a date (high-scoring profiles are more prominent on dating websites)
  • Removal of the right to travel freely abroad
  • Problems with securing rental accommodation
  • Restrictions enrolling children in certain schools

You can read more detail and commentary here, but I’ve tried to present the basics.

This system takes no excuses and makes no effort to collect feedback. If your score suffers a knock through no fault of your own, then it is simply “tough luck”. It’s not difficult to see how it will entrench disadvantage and, in all likelihood, create a delineated two-tier society.

If someone you’re connected to (perhaps a relative) reduces your score by behaving “inappropriately” online or over a messenger, this could lead to your being denied a job, which in turn will reduce your chances of gaining credit, getting a rental apartment, finding a partner, and so on. It’s difficult to escape the domino effect or imagine how an individual might recover enough to live a decent life in a system where each misdemeanor seems to lead to another compounding circumstance.

We can legitimately speculate that Chinese society, from 2020, will be one in which citizens heavily police each other, disconnect themselves (in every way) from the poor/low-scoring, report indiscretions at the drop of a hat for fear of association and reprisals, and adopt phoney behaviors in order to “game” their way to full state approval. Some have described it as a form of “nudging”, but nudge techniques still leave room for choice. This seems much more coercive.

Finally, some have argued that, although the Chinese SCS system seems extreme, it actually employs techniques that are already being used by internet giants to map our own behaviors as we speak. The Chinese system simply adds a positive or negative valence to these actions and distills them into a single score. Therefore, it is worth considering which elements of SCS we find unpalatable – if any at all – and reflecting upon whether we already assent to, or participate in, similar evaluations…

Are we being made into 21st century “puppets” by our online masters?


In a recent Guardian article, ex-Google strategist James Williams describes the persuasive, algorithmic tools of the internet giants – like Facebook’s newsfeed, Google’s search results, etc. – as the “largest, most standardized and most centralized form of attentional control in human history”. He is not alone in his concern. Increasing interest is being taken in the subtle tactics that social media and other platforms use to attract and keep our attention, guide our purchasing decisions, control what we read (and when we read it), and generally manipulate our attitudes and behaviors.

The success of platforms like Facebook and Twitter has really been down to their ability to keep us coming back for more. For this, they have turned habit formation into a technological industry. Notifications, “likes”, instant play videos, messengers, Snapstreaks – these are but a few of the ways in which they lure us in and, critically, keep us there for hours at a time. According to research, on average we touch or swipe our phones 2,617 times per day. In short, most of us are compulsive smartphone addicts. So much so, that whole new trends are being built around shunning phones and tablets with the hopes of improving our focus on other, arguably more important, things like physical interactions with our friends and family.

Nevertheless, such movements are unlikely to inspire an overnight U-turn when it comes to our online habits. There are whole new generations of people who have been born into this world and do not know anything other than smartphone/tablet compulsion. This point is made beautifully by Jean-Louis Constanza, a top telecoms executive who uploaded a YouTube video of his baby daughter prodding at images in a magazine. He comments: “In the eyes of my one-year old daughter, a magazine is a broken iPad. That will remain the case throughout her entire life. Steve Jobs programmed part of her operating system.”

As a result, the internet giants (by which I mean Facebook, Google, Twitter, Apple, Snapchat, etc.) have an enormous amount of power over what we see and read, and consequently over what we buy, how we vote, and our general attitudes to people, places, and things. Concerned parties argue that these companies’ current methods of subtly manipulating what they push out to us, and what they conceal from us, could equate to an abuse of their ethical responsibility. There is a power asymmetry which perhaps leads to Joe Public becoming de-humanized, as well as being treated as “techno-subjects” for the experimental methods of big tech.

Most of what allows these firms to know so much about us, and then to capitalize on this granular knowledge, is the constant feedback loop which supplies the metrics, which in turn enable the algorithms to change and adapt what we are served on the internet. This is something we willingly participate in. The feedback comprises data about what we’ve clicked, shared, browsed, liked, favorited, or commented on in the past. This same loop can also be used to anticipate what we might like, and to coerce us into new decisions or reactions to different stimuli which – you guessed it – supplies them with even more information about “people like us”. The constant modification and refinement of our preferences, it is argued, not only creates a sort of filter bubble around us, but also stifles our autonomy by limiting the options being made available to us. Our view is personalized for us based on secret assumptions that have been made about us…and, of course, commercial objectives.
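A toy simulation makes this loop visible. In the invented sketch below (no platform’s real ranking system is implied), a model serves whichever topic it currently rates highest, updates its estimate from clicks, and quickly locks onto a single topic – a filter bubble in miniature:

```python
# Invented illustration of the engagement feedback loop; not any platform's
# actual algorithm. Serve -> observe click -> update estimate -> repeat.
import random

preferences = {"politics": 0.5, "sports": 0.5, "cats": 0.5}  # initial guesses

def serve(prefs):
    """Serve the topic the model currently believes the user likes most."""
    return max(prefs, key=prefs.get)

def update(prefs, topic, clicked, rate=0.2):
    """Nudge the estimate for the served topic toward the observed behavior."""
    prefs[topic] += rate * ((1.0 if clicked else 0.0) - prefs[topic])

for _ in range(50):
    topic = serve(preferences)
    clicked = random.random() < 0.7  # the user clicks most of what appears
    update(preferences, topic, clicked)

print(preferences)  # one topic now dominates; the rest were barely explored
```

Nothing here is sinister in itself, but note that the loop never asks what the user wants; it only reinforces whatever it already happened to show.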

Karen Yeung, of the Dickson Poon School of Law at King’s College London, calls such methods of controlling what we’re exposed to “digital decision guidance processes” – also known by the rather jazzier title “algorithmic hypernudge”. The latter pays homage to the bestselling book “Nudge” by Cass Sunstein and Richard Thaler, which talks about the ways in which subtle changes to an individual’s “choice architecture” can cause desirable behavior changes without the need for regulation. For example, putting salads at eye level in a store apparently increases the likelihood we will choose salad, but doesn’t forbid us from opting for a burger. It is a non-rational type of influence. What makes the online version of nudge more pernicious, according to Yeung, is that, a) the algorithms behind a nudge on Google or Facebook are not working towards some admirable societal goal, but rather are programmed to optimize profits, and b) the constant feedback and refinement allows for a particularly penetrating and inescapable personalization of the behavior change mechanisms. In short, it is almost like a kind of subliminal effect, leading to deception and non-rational decision-making which, in Yeung’s words: “express contempt and disrespect for individuals as autonomous.”

So, given that our ability to walk away is getting weaker, are we still in control? Or are we being manipulated by other forces sat far away from most of us in California offices? Silicon Valley “conscience” Tristan Harris is adamant about the power imbalance here: “A handful of people, working at a handful of technology companies, through their choices will steer what a billion people are thinking today. I don’t know a more urgent problem than this.” Harris says there “is no ethics”, and the vast reams of information these giants are privy to could also allow them to exploit the vulnerable.

This is a big topic with lots of work to be done, but perhaps the key to understanding whether or not we are truly being manipulated is to understand in what way methods like algorithmic hypernudge undermine our reason (Williams says that they cause us to privilege impulse over reason). If we are being coerced into behaving in ways that fall short of our expectations or standards of human rationality, then it seems obvious there are follow-on ethical implications. If I do things against my will and my own better judgment – or my process of judgment is in some way compromised – it seems fair to say I am being controlled by external forces.

But perhaps that is not enough; after all, external influences have always played into our decision-making, from overt advertising, to good-smelling food, to the way something (or someone!) looks. We are already accustomed to making perfectly rational decisions on the basis of non-rational influences. Just because we behave in a way that we didn’t originally plan doesn’t mean the action is itself irrational. That isn’t to say that there isn’t something going on – apparently 87% of people go to sleep and wake up with their smartphones – it is just to point out that if we’re going to make claims of psychological manipulation, we also need to be clear about where this happens and how it manifests itself. Perhaps most importantly, we need to properly identify how the consequences differ significantly from other types of unconscious persuasion. When and how are these online influences harming us…? That’s the question.