Six ethical problems for augmented reality


This week the BBC reported that the augmented reality (AR) market could be worth £122 billion ($162 billion) by 2024. Indeed, following the runaway success of Pokémon GO, Apple and Google have launched developer kits, and it’s now beginning to look as though the blending of real and virtual worlds will be part of our future.

Although we get excited at the prospect of ‘fun and factoids’ spontaneously popping up in our surrounding environment, it’s also important to ask where the boundaries lie when it comes to this virtual fly-postering. Does anything go? Here are just a few of the moral conundrums* facing those looking to capitalize on this new market channel:

  1. The hijacking of public spaces

Public spaces belong to us all, and firms could easily upset the public if they use brash augmentation to adorn cherished local monuments or much-loved vistas. Even if your AR placement isn’t overtly controversial (remember the storm over the insensitive placement of Pokéstops?), it could be characterized as virtual graffiti if it isn’t appropriate and respectful. Though AR is only accessible through a device, we might see a future where the public can veto certain types of augmentation to preserve the dignity of their local environs. This will be especially pertinent if all AR ends up sharing a single platform.

  2. Parking up on private land

A business wouldn’t throw up a yard sign on a family’s private lawn without asking, so why should AR be any different? Though the law may take a while to figure out its stance on AR, there are salient ethical concerns that shouldn’t be ignored. Should a burger chain be able to augment a house where the residents are Hindu? Is it okay to transpose publicly available census information onto private houses? It seems as though private owners could have rights over the virtual space that surrounds their property.

  3. Precious anonymity

The free-to-download Blippar app already boasts how it can harness “powerful augmented reality, facial recognition, artificial intelligence and visual search technologies”, allowing us to use our phone cameras to unlock information about the world around us. At present, they encourage us to look up gossip about famous faces spotted on TV or in images, but there is clearly the very real prospect of AR technology identifying individuals in the street. If this information can be cross-referenced with other available records, then AR could blow holes through personal anonymity in public places.

  4. Who should be able to augment?

Many distasteful things lurk on the internet, from extreme adult content to unpalatable political and religious views. At the moment, such sites are outside the direct concern of the general public who are rarely, if ever, exposed to their exotic material – but AR could change this. If many different AR platforms start to evolve, ordinary folks could find themselves, their homes, their neighborhoods, and their cities used as the backdrop for morally questionable material. Are we okay with any type of image augmenting a nursery or a church, so long as it is “only virtual”?

  5. Leading users by the nose

The Pokémon GO game has led to a number of high-profile incidents, including the deaths of players. So much so that there is a Pokémon GO Death Tracker which logs the details of each accident. Though it might be a stretch to hold game developers responsible for careless individuals and avoidable tragedies, to what extent should companies using AR be compelled to understand the environments they are augmenting (where their product is location-specific)? Should they know if they’re leading users into dangerous neighborhoods, onto busy roads, or to places where the terrain is somehow unsafe?

  6. Real or faux?

Though we might be a little way off yet, if AR experiences become the norm we may see accusations of deception in cases where the real and the virtual aspects of the experience become indistinguishable. Should there be some way to indicate to users which parts of an AR experience are fake if it isn’t entirely clear? What if convincing or compelling augmentation leads to serious confusion amongst vulnerable members of society (e.g. children and the mentally disabled)?

We might be on the cusp of something newly useful and thrilling (imagine being able to uncover facts about the world around us just by pointing a phone camera!), but it’s important that those developing AR think through all of the implications for individuals and society before a virtual Pandora’s box springs open.

*Some of the ideas here are inspired by an excellent paper by the philosopher Erica Neely.

 

What if Twitter could help predict a death?

I want to use this blog to look at how data and emerging technologies affect us – or more precisely YOU. As a tech ethics researcher, I’m perpetually reading articles and reports that detail the multitude of ways in which data can be used to anticipate bad societal outcomes: criminality, abuse, corruption, disease, mental illness, and so on. Some of these get oxygen, some of them don’t. Some of them have integrity, some don’t. Often these tests, analyses, and studies identify problems that gesture toward ethically “interesting” solutions.

Just today this article caught my attention. It details a Canadian study that tries to get to grips with an endemic problem: suicide in young people. Just north of the border, suicide accounts for no less than 24% of deaths amongst those aged between 15 and 24 (Canadian Mental Health Association). Clearly, this is not a trivial issue.

In response, a group of researchers have tried to determine the signs of self-harm and suicide by studying the social media posts of those in the most vulnerable age bracket. The team – from SAS Canada – have even speculated that “these new sources could provide early indication of possible trends to guide more formal surveillance activities.” So, with the prospect of officialdom being dangled before us, it’s important to ask how this social media analysis works. In short, might any one of us end up being surveilled as a suicide risk if we happen to make a trigger comment or two on Twitter?

Well, the answer seems to be “possibly”. This work harvested 2.3 million tweets, of which 1.1 million were identified as “likely to have been authored by 13 to 17-year-olds in Canada”. That determination was made by a machine learning model trained to predict age from the way young people use language. So, if the algorithm thinks you tweet like a teenager, you’re potentially on the hook. From there, the team looked for tweets relating to depression and suicide, and “picked some specific buzzwords and created topics around them, and our software mined those tweets to collect the people.”
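
To make that two-stage filter concrete, here is a minimal Python sketch of the kind of pipeline described: an age classifier trained on the way young people use language, followed by a hand-picked buzzword filter. The actual SAS Canada models, training data, and buzzword list have not been published, so every example tweet, buzzword, and model choice below is hypothetical.

```python
# A minimal sketch of the kind of two-stage filter described above. Nothing
# here reflects the actual SAS Canada implementation; the training examples,
# buzzwords, and model choice are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: tweets labelled as teen-authored (1) or not (0).
train_texts = ["omg school was literally the worst today",
               "quarterly earnings call moved to 3pm"]
train_labels = [1, 0]

# Stage 1: predict age bracket from language use.
age_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
age_model.fit(train_texts, train_labels)

# Stage 2: hand-picked buzzwords used to build "topics" around self-harm.
BUZZWORDS = {"hopeless", "self harm", "want to disappear"}  # illustrative only

def flag_tweet(text: str) -> bool:
    """Flag a tweet if it looks teen-authored AND mentions a buzzword."""
    looks_teen = age_model.predict([text])[0] == 1
    mentions_buzzword = any(word in text.lower() for word in BUZZWORDS)
    return looks_teen and mentions_buzzword

harvested = ["i feel so hopeless lately", "hopeless at parallel parking lol"]
flagged = [t for t in harvested if flag_tweet(t)]
```

Note that both example tweets contain a buzzword; whether either author is actually at risk is exactly what such a filter cannot tell us.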

Putting aside the undoubtedly harrowing idea of people collection, it’s important to highlight the usefulness of this survey. The data scientists involved insist that the data they’ve collected can help them narrow down the Canadian regions which have a problem (although one might contest that the suicide statistics themselves should reveal this), and/or identify a particular school or a time of year in which the tell-tale signs are more widespread or stronger. This in turn can help better target campaigns and resources, which – of course – is laudable, particularly if it is an improvement on existing suicide statistics. It only starts to get ethically icky once we consider what further steps might be taken.
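
For what it’s worth, the population-level uses described here (narrowing down regions, schools, or times of year) amount to aggregating the flagged tweets rather than acting on individuals. A rough sketch, with entirely hypothetical columns and values:

```python
# Aggregate flagged tweets by region and month to target campaigns and
# resources, rather than pointing at individual accounts. The columns and
# values are hypothetical; the real flagged-tweet data is not public.
import pandas as pd

flagged = pd.DataFrame({
    "region": ["ON", "ON", "BC"],
    "month": ["2017-04", "2017-05", "2017-04"],
})

by_region_month = flagged.groupby(["region", "month"]).size().rename("flagged_tweets")
print(by_region_month)
```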

The technicians on the project speculate as to how this data might be used in the future. Remember, we are not dealing with anonymized surveys here, but real teen voices “out in the wild”: “He (data expert Jos Polfliet) envisions the solution being used to find not only at-risk teens, but others too, like first responders and veterans who may be considering suicide.”

Eh? Find them? Does that mean it might be used to actually locate real people based on what they’ve tweeted in their personal time? As with many well-meaning data projects, everything suddenly begins to feel a little Minority Report at this point. Although this study is quite obviously well-intentioned, we are fooling ourselves if we don’t acknowledge the levels of imprecision we’re dealing with here.

Firstly, without revealing the actual identities of every account holder picked out by the machine learning, we have no way of knowing the level of accuracy these researchers have achieved when it comes to identifying 13 to 17-year-olds. Although the use of certain language and terminologies might be a good proxy for the age of the user, it certainly isn’t an infallible one in the wacky world of the internet.

Secondly, the same is true of suicide and depression-related buzzwords. Using a word or phrase typically associated with teen suicide is not a sufficient condition for a propensity towards suicide (indeed, it is unlikely to even be a necessary condition). As Seth Stephens-Davidowitz discusses in his new book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, research found that in 2014 there were 6,000 Google searches for the exact phrase “how to kill your girlfriend”, and yet there were “only” 400 murders of girlfriends. In other words, not everyone who vents on the internet is in earnest, and many who are earnest in their intentions may not surface on the internet at all. So, in short, we don’t know exactly what we’ve got when we look at these tweets.
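
A quick back-of-the-envelope calculation makes the point. Taking the quoted figures at face value, and generously assuming that every actual case was preceded by a matching search, the phrase would still “work” as a signal less than 7% of the time:

```python
# Back-of-the-envelope illustration of the precision problem. The figures are
# those quoted from Stephens-Davidowitz; the assumption that every actual case
# produced a matching search is deliberately generous.
searches = 6_000      # exact-phrase Google searches in 2014
actual_cases = 400    # murders of girlfriends in the same period

precision_ceiling = actual_cases / searches
print(f"At most {precision_ceiling:.1%} of searchers acted on the phrase")
# -> At most 6.7% of searchers acted on the phrase
```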

Lastly, without having read the full methodology, it appears that these suicide buzzwords were hand-picked by the team. In other words, they were selected by human beings, presumably based on the sorts of things they deemed suicidal teens might tweet. Fair enough, but not particularly scientific. In fact, this sort of process can be riddled with guesswork and human bias. How could you possibly know with any certainty, even if instructed by a physician or psychiatrist, exactly which kinds of words or phrases denote true intention and which denote teenage angst?

Hang on a second – you might protest – couldn’t these buzzwords have been chosen by a very clever, objective algorithm? Yet, even if a clever algorithm could somehow tell the difference between an “I hate my life” tweeted by a genuinely suicidal teen and an “I hate my life” tweeted by a tired and hormonal teenager (perhaps based on whatever language it was couched in), to make this call it would have to have been trained on data drawn from the tweets of teens who have either a) committed suicide or b) been diagnosed with or treated for depression. To harvest such tweets, the data would have to rely upon more than Twitter alone… all information would have to be cross-referenced with other databases (like medical records) in ways that would undoubtedly de-anonymize.
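
To spell out why that training step is so invasive, consider what building the labels would actually involve: joining the harvested tweets against some outcome dataset, such as clinical records. The tables and column names below are entirely hypothetical; the point is that the join itself is the de-anonymizing step.

```python
# Illustration only: building outcome labels for a "genuine intent" classifier
# would mean linking real accounts to real histories. Handles, columns, and
# records here are invented for the sketch.
import pandas as pd

tweets = pd.DataFrame({
    "handle": ["@teen_a", "@teen_b"],
    "text": ["I hate my life", "I hate my life"],
})

# A hypothetical external dataset (e.g. clinical records) holding the outcomes.
records = pd.DataFrame({
    "handle": ["@teen_a"],
    "diagnosed_depression": [True],
})

# The merge is the step that turns anonymous-looking tweets into labelled,
# identifiable training data.
labelled = tweets.merge(records, on="handle", how="left")
labelled["diagnosed_depression"] = labelled["diagnosed_depression"].fillna(False)
print(labelled)
```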

So, with no guarantees of accuracy, the prospect of physical intervention by social services or similar feels like a scary one – as is the idea of ending up on a watchlist because of a bad day at school. Particularly when we don’t know how this data would be propagated forward…

Critically, I am not trying to say that the project isn’t useful, and SAS Canada are forthcoming in their acknowledgment that ethical conversations need to take place. Nevertheless, this feels like the usual ethical caveat which acts as a disclaimer on work that has already taken place and – one might reasonably assume – is already informing actions, policies, and future projects.

Some of the correlations this work has unveiled clearly have value. For example, there is a 39% overlap between conversations about suicide and conversations about bullying. This is a broad trend and a helpful addition to an important narrative. Where it becomes unhelpful, however, is when it enables and/or legitimizes the external surveillance of all bullying-related conversations on social media and – to carry that thought forward – some kind of ominous, state-sanctioned “follow-up” for selected individuals…