Trying on wearables

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


The dramatic failures of Google Glass and Snapchat Spectacles demonstrated the countless challenges faced by wearable technologies. Beyond the ubiquitous activity trackers and smartwatches, wearable consumer products have yet to yield a mass-market success. Though the idea of wearable tech and human-technology synergy still gets marketers excited, product designers have yet to hit upon a breakout device that will prove as popular and indispensable as blue jeans. Still, the allure of developing such a product remains irresistible.


Designing for Bad Intentions: Wearables and Cyber Risks

YouTheData.com is delighted to feature a guest post by John Gray, the co-founder of MentionMapp Analytics. John is a media researcher and entrepreneur exploring how issues like the spread of misinformation and the exploitation of personal privacy are eroding trust in our social institutions and discourse. He’s written numerous case studies and has co-authored “The Ecosystem of Fake: Bots, Information and Distorted Realities.”


“It’s the bad people with bad intent that’s causing the problem, not technology” – Shane Luke, Sr. Director of Digital Innovation, Nike

We exude data, like the sweat that streams off our skin. It’s the norm. Just as another new normal is the news of the latest PR tour by data breach apologists, full of empty promises that “we’ll do better”. Like the soles of an ultra-marathoner’s shoes, the clichéd technocratic mindset of “moving fast, breaking things” and “asking for forgiveness rather than permission” is beginning to wear thin.

We accept that the devices in our pockets, and on our wrists, feet, and even our faces, are communicating data. Yet the data they produce becomes a target for bad-actors. As technology weaves deeper into what we wear, there’s more to our fashion statements than meets the eye.


Why Can’t We #DeleteFacebook?: 4 Reasons We’re Reluctant


The Cambridge Analytica scandal is still reverberating in the media, garnering almost as much daily coverage as when the story broke in The New York Times on March 17. Facebook’s mishandling of user data has catalyzed a collective public reaction of disgust and indignation, and perhaps the most prominent public manifestation of this is the #DeleteFacebook movement. This vocal campaign is urging us to do exactly what it says: To vote with our feet. To boycott. To not just deactivate our Facebook accounts, but to eliminate them entirely.

Ready To Be “Deepfaked”? 3 Reasons You Should Be Concerned About The Internet’s Creepiest Data Heist


Fraudsters typically line their pockets by forging our signatures, cloning our credit cards, and stealing our personal identities. Yet, we’d like to think that folks who know us personally – our family, friends, colleagues, and acquaintances – would catch these counterfeiters out if they brazenly claimed to be us in public. After all, seeing is believing, isn’t it? If you don’t look like me, you’re not me. If you do look like me, the chances are that you are me. Right?

Well…maybe. And this could soon become the subject of some confusion.

But how?

Well, imagine if stealing your identity could include stealing your image. And if scammers could then use that image to put words in your mouth and – in some cases – fake your very actions. This isn’t just some outlandish thought experiment, but a foreseeable hazard if we fail to prepare for a surge in the production of “deepfakes”.

In the future, we could solve all crime. But at what cost?

It’s difficult to read, or even talk, about technology at the moment without that word “ethics” creeping in. How will AI products affect users down-the-line? Can algorithmic decisions factor in the good of society? How might we reduce the number of fatal road collisions? What tools can we employ to prevent or solve all crime?


Now, let’s just make it clear from the off: these are all entirely honorable motives, and their proponents should be lauded. But sometimes even the drive toward an admirable aim – the prevention of bad consequences – can ignore critical tensions that have been vexing thinkers for years.

Even if we agree that the consequences of an act are of real import, there are still other human values that can – and should – compete with them when we’re weighing the best course of action.

If you aren’t paying, are your kids the product?

There’s a phrase – from where I don’t know – which says: “If you aren’t paying, you’re the product.”  Never has this felt truer than in the context of social media. Particularly Facebook, with its fan-pages and features, games and gizmos, plus never-ending updates and improvements. Who is paying for this, if not you…and what are they getting in return? The answer is actually quite straightforward.



Facebook wants you naked…and it’s for your own good


***UPDATE: Contrary to yesterday’s reporting, the BBC has now corrected its article on Facebook’s new “revenge porn” AI to include this rather critical detail:

“Humans rather than algorithms will view the naked images voluntarily sent to Facebook in a scheme being trialled in Australia to combat revenge porn. The BBC understands that members of Facebook’s community operations team will look at the images in order to make a “fingerprint” of them to prevent them being uploaded again.”

So now young victims will have the choice of mass humiliation, or faceless scrutiny…
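The “fingerprint” the BBC describes is typically a perceptual hash: a compact signature that stays (nearly) the same when an image is resized or re-compressed, so re-uploads can be matched without storing the photo itself. Here is a minimal average-hash sketch in Python – an illustrative simplification, since Facebook has not published its actual matching method:

```python
def average_hash(pixels):
    """Crude perceptual hash of a grayscale pixel grid: one bit per pixel,
    set when that pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; small distances mean 'visually similar'."""
    return sum(a != b for a, b in zip(h1, h2))

def is_blocked(candidate_hash, blocklist, max_distance=5):
    """True if the candidate is perceptually close to any blocked image."""
    return any(hamming_distance(candidate_hash, h) <= max_distance
               for h in blocklist)
```

Because slightly altered copies hash to nearby bit-strings, matching on a small Hamming distance catches re-uploads that an exact checksum would miss – which is why a reviewer would only need to see the image once.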

10 real-world ethical concerns for virtual reality


There are lots of emerging ideas about how virtual reality (VR) can be used for the betterment of society – whether it be inspiring social change, or training surgeons for delicate medical procedures.

Nevertheless, as with all new technologies, we should also be alive to any potential ethical concerns that could emerge as social problems further down the line. Here I list just a few issues that should undoubtedly be considered before we brazenly forge ahead in optimism.

1.   Vulnerability

When we think of virtual reality, we automatically conjure images of clunky headsets covering the eyes – and often the ears – of users in order to create a fully immersive experience. There are also VR gloves, and a growing range of other accessories and attachments. Though the resultant feel might be hyper-realistic, we should also be concerned for people using these in the home – especially alone. Having limited access to sense data leaves users vulnerable to accidents, home invasions, and any other misfortunes that can come of being totally distracted.

2.   Social isolation

There’s a lot of debate around whether VR is socially isolating. On the one hand, the whole experience takes place within a single user’s field-of-vision, which obviously excludes others from physically participating alongside them. On the other hand, developers like Facebook have been busy inventing communal meeting places like Spaces, which help VR users meet and interact in a virtual social environment. Though – as argued – the latter could be helpfully utilized by the introverted and lonely (e.g. seniors), there’s also a danger that it could become a lazy and dismissive way of dealing with these issues. At the other end of the spectrum, forums like Spaces may also end up “detaching” users by leading them to neglect their real-world social connections. Whatever the case, studies show that real face-to-face interactions are a very important factor in maintaining good mental health. Substituting them with VR would be ill-advised.

3.   Desensitization

It is a well-acknowledged danger that being thoroughly and regularly immersed in a virtual reality environment may lead some users to become desensitized in the real world – particularly if the VR is one in which the user experiences or perpetrates extreme levels of violence. Desensitization means that the user may be unaffected (or less affected) by acts of violence, and could fail to show empathy as a result. Some say that this symptom is already reported amongst gamers who choose to play first-person shooters or role-play games with a high degree of immersion.

4.   Overestimation of abilities

Akin to desensitization is the problem of users overestimating their ability to perform virtual feats just as well in the real world. This is especially applicable to children and young people who could take it that their expertise in tightrope walking, parkour, or car driving will transfer seamlessly over to non-virtual environments…

5.   Psychiatric

There could also be more profound and dangerous psychological effects on some users (although clearly there are currently a lot of unknowns). Experts in neuroscience and the human mind have spoken of “depersonalization”, which can result in a user believing their own body is an avatar. There is also a pertinent worry that VR might be swift to expose psychiatric vulnerabilities in some users, and spark psychotic episodes. Needless to say, we must identify the psychological risks and symptoms ahead of market saturation, if that is an inevitability.

6.   Unpalatable fantasies

If there’s any industry getting excited about virtual reality, it’s the porn industry (predicted to be the third largest VR sector by 2025, after gaming and NFL-related content). The website Pornhub is already reporting that views of VR content are up 225% since it debuted in 2016. This obviously isn’t an ethical problem in and of itself, but it does become problematic if/when “unpalatable” fantasies become immersive. We have to ask: should there be limitations on uber realistic representations of aggressive, borderline-pedophilic, or other more perverse types of VR erotica? Or outside of the realm of porn, to what extent is it okay to make a game out of the events of 9/11, as is the case with the 08.46 simulator?

7.   Torture/virtual criminality

There’s been some suggestion that VR headsets could be employed by the military as a kind of “ethical” alternative to regular interrogatory torture. Whether this is truth or rumor, it nevertheless establishes a critical need to understand the status of pain, damage, violence, and trauma inflicted by other users in a virtual environment – be it physical or psychological. At what point does virtual behavior constitute a real-world criminal act?

8.   Manipulation

Attempts at corporate manipulation via flashy advertising tricks are not new, but up until now they’ve been 2-dimensional. As such, they’ve had to work hard to compete with our distracted focus. Phones ringing, babies crying, traffic, conversations, music, noisy neighbors, interesting reads, and all the rest. With VR, commercial advertisers essentially have access to our entire surrounding environment (which some hold has the power to control our behavior). This ramps up revenue for developers, who now have (literally) whole new worlds of blank space upon which they can sell advertising. Commentators are already warning that this could lead to new and clever tactics involving product placement, brand integration and subliminal advertising.

9.   Appropriate roaming and recreation

One of the most exciting selling points of VR is that it can let us roam the earth from the comfort of our own homes. This is obviously a laudable, liberating experience for those who are unable to travel. As with augmented reality, however, we probably need to have conversations about where it is appropriate to roam and/or recreate as a virtual experience. Is it fine for me to wander through a recreation of my favorite celebrity’s apartment (I can imagine many fans would adore the idea!)? Or peep through windows of homes and businesses in any given city street? The answers to some of these questions may seem obvious to us, but we cannot assume that the ethical parameters of this capability are clear to all who may use or develop it.

10.   Privacy and data

Last, but not least, the more we “merge” into a virtual world, the more of ourselves we are likely to give away. This might mean more and greater privacy worries. German researchers have raised the concern that if our online avatars mirror our real-world movements and gestures, these “motor intentions” and the “kinetic fingerprints” of our unique movement signatures can be tracked, read, and exploited by predatory entities. Again, it’s clear that there needs to be an open and consultative dialogue with regards to what is collectable, and what should be off-limits in terms of our virtual activities.
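To make the “kinetic fingerprint” worry concrete, here is a toy sketch (the features, the distance threshold, and the user names are illustrative assumptions, not any real tracking system): a stream of motion samples is reduced to a simple statistical signature, and new sessions are matched to known users by distance.

```python
import math

def kinetic_fingerprint(samples):
    """Reduce a list of (x, y, z) motion samples to a crude signature:
    the per-axis mean and standard deviation (six numbers in total)."""
    n = len(samples)
    means = [sum(s[i] for s in samples) / n for i in range(3)]
    stds = [math.sqrt(sum((s[i] - means[i]) ** 2 for s in samples) / n)
            for i in range(3)]
    return means + stds

def match_user(fingerprint, known_users, threshold=0.5):
    """Return the closest known user if their stored fingerprint is
    within the distance threshold, else None."""
    best, best_dist = None, float("inf")
    for user, fp in known_users.items():
        dist = math.dist(fingerprint, fp)
        if dist < best_dist:
            best, best_dist = user, dist
    return best if best_dist <= threshold else None
```

A real system would use far richer features (gait cadence, head-sway frequencies) and a learned model, but even this crude signature illustrates why raw motion traces are personally identifying.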

This list is not exhaustive, and some of these concerns will be proven groundless in good time. Regardless, as non-technicians and future users, we are right to demand full and clear explanations as to how these tripwires will be averted or mitigated by VR companies.

Five concerns about government biometric databases and facial recognition


Last Thursday, the Australian government announced its existing “Face Verification Service” would be expanded to include personal images from every Australian driver’s license and photo ID, as well as from every passport and visa. This database will then be used to train facial recognition technology so that law enforcers can identify people within seconds, wherever they may be – on the street, in shopping malls, car parks, train stations, airports, schools, and just about anywhere that surveillance cameras pop-up…

Deep learning techniques will allow the algorithm to adapt to new information, meaning that it will have the ability to identify a face obscured by bad lighting or bad angles…and even one that has aged over several years.

This level of penetrative surveillance is obviously unprecedented, and is being heavily criticized by the country’s civil rights activists and law professors who say that Australia’s “patchwork” privacy laws have allowed successive governments to erode citizens’ rights. Nevertheless, politicians argue that personal information abounds on the internet regardless, and that it is more important that measures are taken to deter and ensnare potential terrorists.

However worthy the objective, it is obviously important to challenge such measures by trying to understand their immediate and long-term implications. Here are five glaring concerns that governments mounting similar initiatives should undoubtedly address:

  1. Hacking and security breaches

The more comprehensive a database of information is, the more attractive it becomes to hackers. No doubt the Australian government will hire top security experts as part of this project, but the methods of those intent on breaching security parameters are forever evolving, and it is no joke trying to mount a defense. Back in 2014, a Chinese hack of the US Office of Personnel Management (OPM) – one of the biggest breaches in history – compromised the personal information of 22 million current and former employees. Then-FBI Director James Comey said that the information included, “every place I’ve ever lived since I was 18, every foreign travel I’ve ever taken, all of my family, their addresses.”

  2. Ineffective unless coverage is total

Using surveillance, citizen data and/or national ID cards to track and monitor people in the hopes of preventing terrorist attacks (the stated intention of the Aussie government) really requires total coverage, i.e. monitoring everyone all of the time. We know this because many states with mass (but not total) surveillance programs – like the US – have still suffered attacks, like the Boston Marathon bombing. Security experts are clear that targeted, rather than broad, surveillance is generally the best way to find those planning an attack, as most subjects are already on the radar of intelligence services. Perhaps Australia’s new approach aspires to some ideal notion of total coverage, but if it isn’t successful at achieving this, there’s a chance that malicious parties could evade detection by a scheme that focuses its attentions on registered citizens.

  3. Chilling effect

Following that last thought through, in the eyes of some, there is a substantial harm inflicted by this biometrically-based surveillance project: it treats all citizens and visitors as potential suspects. This may seem like a rather intangible consequence, but that isn’t necessarily the case. Implementing a facial recognition scheme could, in fact, have a substantial chilling effect. This means that law-abiding citizens may be discouraged from participating in legitimate public acts – for example, protesting the current government administration – for fear of legal repercussions down-the-line. Indeed, there are countless things we may hesitate to do if we have new concerns about instant identifiability…

  4. Mission creep

Though current governments may give their reassurances about the respectful and considered use of this data, who is to say what future administrations may wish to use it for? Might their mission creep beyond national security, and deteriorate to the point at which law enforcement use facial recognition at will to detain and prosecute individuals for very minor offenses? Might our “personal file” be updated with our known movements so that intelligence services have a comprehensive history of where we’ve been and when? Additionally, might the images used to train and update algorithms start to come from non-official sources like personal social media accounts and other platforms? Undoubtedly, it is already easy to build-up a comprehensive file on an individual using publicly available data, but many would argue that governments should require a rationale – or even permission – for doing so.

  5. False positives

As all data scientists know, algorithms working with massive datasets are likely to produce false positives, i.e. such a system as proposed may implicate perfectly innocent people for crimes they didn’t commit. This has also been identified as a problem with DNA databases. The sheer number of comparisons that have to be run when, for instance, a new threat is identified, dramatically raises the possibility that some of the identifications will be in error. These odds increase if, in the cases of both DNA and facial recognition, two individuals are related. As rights campaigners point out, not only is this potentially harrowing for the individuals concerned, it also presents a harmful distraction for law enforcement and security services who might prioritize seemingly “infallible” technological insight over other useful, but contradictory leads.
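The arithmetic behind this concern is the classic base-rate problem. With purely illustrative figures (neither the database size nor the error rate comes from the Australian scheme), even a highly accurate matcher floods investigators with innocent “hits”:

```python
def expected_false_positives(database_size, false_positive_rate):
    """Expected number of innocent people flagged when one probe face is
    compared against every record in the database."""
    return database_size * false_positive_rate

# Illustrative assumptions: ~20 million enrolled images and a seemingly
# impressive 0.1% per-comparison false-positive rate.
hits = expected_false_positives(20_000_000, 0.001)
print(f"Expected false matches per search: {hits:,.0f}")  # 20,000
```

Twenty thousand spurious matches per search is why, as the rights campaigners above argue, treating the technology as “infallible” would be a dangerous distraction.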

Though apparently most Australians “don’t care” about the launch of this new scheme, it is morally dangerous for governments to take general apathy as a green light for action. Not caring can be a “stand-in” for all sorts of things, and of course most people are busy leading their lives. Where individual citizens may not be concerned to thrash out the real implications of an initiative, politicians and their advisors have an absolute responsibility to do so – even where the reasoning they offer is of little-to-no interest to the general population.

Six ethical problems for augmented reality


This week the BBC reported that the augmented reality (AR) market could be worth £122 billion ($162 billion) by 2024. Indeed, following the runaway success of Pokémon GO, Apple and Google have launched developer kits, and it’s now beginning to look as though the blending of real and virtual worlds will be part of our future.

Although we get excited at the prospect of ‘fun and factoids’ spontaneously popping-up in our surrounding environment, it’s also important to ask where the boundaries lie when it comes to this virtual fly-postering. Does anything go? Here are just a few of the moral conundrums* facing those looking to capitalize on this new market channel:

  1. The hijacking of public spaces

Public spaces belong to us all, and firms could easily upset the public if they use brash augmentation to adorn cherished local monuments or much-loved vistas. Even if your AR placement isn’t overtly controversial (remember the storm over the insensitive placement of Pokéstops?), it could be characterized as virtual graffiti if it isn’t appropriate and respectful. Though AR is only accessible through a device, we might see a future where the public can veto certain types of augmentation to preserve the dignity of their local environs. This will be especially pertinent if all AR ends up sharing a single platform.

  2. Parking up on private land

A business wouldn’t throw up a yard sign on a family’s private lawn without asking, so why should AR be any different? Though the law may take a while to figure out its stance on AR, there are salient ethical concerns that shouldn’t be ignored. Should a burger chain be able to augment a house where the residents are Hindu? Is it okay to transpose publicly available census information onto private houses? It seems as though private owners could have rights over the virtual space that surrounds their property.

  3. Precious anonymity

The free-to-download Blippar app already boasts how it can harness “powerful augmented reality, facial recognition, artificial intelligence and visual search technologies”, allowing us to use our phone cameras to unlock information about the world around us. At present, they encourage us to look-up gossip about famous faces spotted on TV or in images, but there is clearly the very real prospect of AR technology identifying individuals in the street. If this information can be cross-referenced with other available records, then AR could blow holes through personal anonymity in public places.

  4. Who should be able to augment?

Many distasteful things lurk on the internet, from extreme adult content to unpalatable political and religious views. At the moment, such sites are outside the direct concern of the general public who are rarely, if ever, exposed to their exotic material – but AR could change this. If many different AR platforms start to evolve, ordinary folks could find themselves, their homes, their neighborhoods, and their cities used as the backdrop for morally questionable material. Are we okay with any type of image augmenting a nursery or a church, so long as it is “only virtual”?

  5. Leading users by the nose

The Pokémon GO game has led to a number of high profile incidents, including the deaths of players. So much so that there is a Pokémon GO Death Tracker which logs the details of each accident. Though it might be a stretch to hold game developers responsible for careless individuals and avoidable tragedies, to what extent should companies using AR be compelled to understand the environments they are augmenting (where their product is location specific)? Should they know if they’re leading users into dangerous neighborhoods, onto busy roads, or to places where the terrain is somehow unsafe?

  6. Real or faux?

Though we might be a little way off yet, if AR experiences become the norm we may see accusations of deception in cases where the real and the virtual aspects of the experience become indistinguishable. Should there be some way to indicate to users which parts of an AR experience are fake if it isn’t entirely clear? What if convincing or compelling augmentation leads to serious confusion amongst vulnerable members of society (e.g. children and the mentally disabled)?

We might be on the cusp of something newly useful and thrilling (imagine being able to uncover facts about the world around us just by pointing a phone camera!), but it’s important that those developing AR think through all of the implications for individuals and society before a virtual Pandora’s box springs open.

*Some of the ideas here are inspired by an excellent paper by the philosopher Erica Neely.