
Founder of YouTheData.com, Fiona J McEvoy, is interviewed for RE•WORK’s Women in AI Podcast: Episode #30.

What The Google Duplex Debate Tells Us
“As we march further into a world in which human-AI distinctions are blurred, we need to ask whether we are comfortable chasing this kind of dupe… Just how important is it that our conversational bots sound exactly like real humans?” Read more.
What Are Your Augmented Reality Property Rights?
“We were unprepared for many of the consequences of social media. Now is the time to address the many questions raised by the coming ubiquity of augmented reality.” Read more.
If you’d like to feature a contributor post on your blog or news site, please contact us here.

Cathy O’Neil’s now infamous book, Weapons of Math Destruction, talks about the pernicious feedback loop that can result from contentious “predictive policing” AI. She warns that the models at the heart of this technology can sometimes reflect damaging historical biases learned from police records that are used as training data.
For example, it is perfectly possible for a neighborhood to have a higher number of recorded arrests due to past aggressive or racist policing policies, rather than a particularly high incidence of crime. But the unthinking algorithm doesn’t recognize this untold story and will blindly forge ahead, predicting that the future will mirror the past and recommending the deployment of more police to these “hotspot” areas.
Naturally, the police then make more arrests at these sites, and the net result is that the algorithm receives data that makes its association grow even stronger.
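To make that loop concrete, here is a deliberately simple sketch of the dynamic (a toy model of my own, not drawn from O’Neil’s book or any real policing system, with every rate and count invented for illustration): two neighborhoods with identical underlying crime rates, where each year’s patrols are allocated in proportion to past recorded arrests.

```python
# Purely illustrative sketch of the predictive-policing feedback loop.
# All rates and counts are invented assumptions, not real crime data.
import random

random.seed(0)

# Two neighborhoods with the SAME underlying rate of crime...
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}

# ...but "A" starts with more recorded arrests, reflecting historically
# heavier policing there rather than more actual crime.
recorded_arrests = {"A": 120, "B": 60}


def allocate_patrols(arrest_history, total_patrols=100):
    """Naive 'predictive' model: assign patrols in proportion to past arrests."""
    total = sum(arrest_history.values())
    return {hood: total_patrols * count / total
            for hood, count in arrest_history.items()}


def simulate_year(arrest_history):
    patrols = allocate_patrols(arrest_history)
    for hood, n_patrols in patrols.items():
        # More patrols mean more of the (identical) underlying crime is
        # observed and recorded, which feeds back into next year's allocation.
        encounters = int(n_patrols) * 10
        new_arrests = sum(random.random() < TRUE_CRIME_RATE[hood]
                          for _ in range(encounters))
        arrest_history[hood] += new_arrests
    return patrols


for year in range(1, 6):
    patrols = simulate_year(recorded_arrests)
    share_a = patrols["A"] / sum(patrols.values())
    print(f"Year {year}: A receives {share_a:.0%} of patrols; "
          f"recorded arrests = {recorded_arrests}")

# Although the true crime rates never differ, neighborhood A keeps drawing
# roughly twice the patrols and logging roughly twice the arrests: the model's
# own output regenerates the "evidence" it trains on.
```

It is, of course, a cartoonishly simple model, but the self-reinforcing structure is the one O’Neil warns about: the model’s output generates the very data that appears to confirm it.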
It’s difficult to read or even talk about technology at the moment without that word “ethics” creeping in. How will AI products affect users down the line? Can algorithmic decisions factor in the good of society? How might we reduce the number of fatal road collisions? What tools can we employ to prevent or solve all crime?

Now, let’s just make it clear from the off: these are all entirely honorable motives, and their proponents should be lauded. But sometimes even the drive toward an admirable aim – the prevention of bad consequences – can ignore critical tensions that have been vexing thinkers for years.
Even if we agree that the consequences of an act are of real import, there are still other human values that can – and should – compete with them when we’re weighing up the best course of action. Continue reading

Jenny Morris – a disabled feminist and scholar – has argued that the term “disability” shouldn’t refer directly to a person’s impairment. Rather, it should be used to identify someone who is disadvantaged by the disabling external factors of a world designed by and for those without disabilities.
Her examples: “My impairment is the fact I can’t walk; my disability is the fact that the bus company only purchases inaccessible buses” or “My impairment is the fact that I can’t speak; my disability is the fact that you won’t take the time and trouble to learn how to communicate with me.”
According to Morris, any denial of opportunity is not simply a result of bodily limitations. It is also down to the attitudinal, social, and environmental barriers facing disabled people. Continue reading
We’re delighted to feature a guest post from Grainne Faller and Louise Holden of the Magna Carta For Data initiative.
The project was established in 2014 by the Insight Centre for Data Analytics – one of the largest data research centres in Europe – as a statement of its commitment to ethical data research within its labs, and the broader global movement to embed ethics in data science research and development.

A self-driving car is hurtling towards a group of people in the middle of a narrow bridge. Should it drive on, and hit the group? Or should it drive off the bridge, avoiding the group of people but almost certainly killing its passenger? Now, what about if there are three people on the bridge but five people in the car? Can you – should you – design algorithms that will change the way the car reacts depending on these situations?
This is just one of millions of ethical issues faced by researchers of artificial intelligence and big data every hour of every day around the world. Continue reading
It might not be a question you’re asking yourself right now, but according to a California-based developer of artificially intelligent sex robots, they will soon be as popular as porn.

This is, at least, the hope of Matt McMullen. He’s the founder of RealDoll, a “love doll” company featured in the documentary, “The Sex Robots Are Coming”. The film seeks to convince its audience that combining undeniably lifelike dolls like Matt’s with interactive, artificially intelligent features will lead to an explosion in the market for robotic lovers.
But is this okay? Many say that it absolutely isn’t. Continue reading

It was reported this week that Twitter had stripped several far-right and white supremacist accounts of their blue “verification” badge. According to Twitter spokespeople, the badge – which was introduced to verify the authenticity of accounts belonging to high-profile individuals – had come to signify an implicit endorsement from the company. A sort of stamp of Twitter approval.
Now, it is understandable, if not laudable, to retract anything that so much as hints at approval when it comes to such ignorant and warped individuals. But it does also open a rather large can of worms. Continue reading

Last week, I was reading an excellent Wired interview with Kate Crawford of AI Now, when a remark she made lodged itself into my head. It has been percolating there ever since, probably because the topic is a rather important one: if we’re talking about the social impact of tech, shouldn’t the conversation invite and include those with expertise outside of the field of technology?
“Of course!” you might chime in. “We should all have a say in what shapes our future!” Agreed. And yet, despite noticeably more public conversation about the social impact and ethics of tech in recent months, it often feels as though many of the louder voices belong to scientists and tech experts who are simply ‘turning their hand’ to the humanities. Continue reading

There are lots of emerging ideas about how virtual reality (VR) can be used for the betterment of society – whether it be inspiring social change, or training surgeons for delicate medical procedures.
Nevertheless, as with all new technologies, we should also be alive to any potential ethical concerns that could emerge as social problems further down the line. Here I list just a few issues that should undoubtedly be considered before we brazenly forge ahead in optimism.
1. Vulnerability
When we think of virtual reality, we automatically conjure images of clunky headsets covering the eyes – and often the ears – of users in order to create a fully immersive experience. There are also VR gloves, and a growing range of other accessories and attachments. Though the resultant feel might be hyper-realistic, we should also be concerned for people using these in the home – especially alone. Having limited access to sense data leaves users vulnerable to accidents, home invasions, and any other misfortunes that can come of being totally distracted.
2. Social isolation
There’s a lot of debate around whether VR is socially isolating. On the one hand, the whole experience takes place within a single user’s field of vision, which obviously excludes others from physically participating alongside them. On the other hand, developers like Facebook have been busy inventing communal meeting places like Spaces, which help VR users meet and interact in a virtual social environment. Though, as some have argued, the latter could be helpfully utilized by the introverted and lonely (e.g. seniors), there’s also a danger that it could become a lazy and dismissive way of dealing with these issues. At the other end of the spectrum, forums like Spaces may also end up “detaching” users by leading them to neglect their real-world social connections. Whatever the case, studies show that real face-to-face interactions are a very important factor in maintaining good mental health. Substituting them with VR would be ill-advised.
3. Desensitization
It is a well-acknowledged danger that being thoroughly and regularly immersed in a virtual reality environment may lead some users to become desensitized in the real world – particularly if the VR is one in which the user experiences or perpetrates extreme levels of violence. Desensitization means that the user may be unaffected (or less affected) by acts of violence, and could fail to show empathy as a result. Some say that this symptom is already reported amongst gamers who choose to play first-person shooters or role-playing games with a high degree of immersion.
4. Overestimation of abilities
Akin to desensitization is the problem of users overestimating their ability to perform virtual feats just as well in the real world. This is especially applicable to children and young people, who could assume that their expertise in tightrope walking, parkour, or car driving will transfer seamlessly over to non-virtual environments…
5. Psychiatric risks
There could also be more profound and dangerous psychological effects on some users (although clearly there are currently a lot of unknowns). Experts in neuroscience and the human mind have spoken of “depersonalization”, which can result in a user believing their own body is an avatar. There is also a pertinent worry that VR might be swift to expose psychiatric vulnerabilities in some users, and spark psychotic episodes. Needless to say, we must identify the psychological risks and symptoms ahead of market saturation, if that is an inevitability.
6. Unpalatable fantasies
If there’s any industry getting excited about virtual reality, it’s the porn industry (predicted to be the third largest VR sector by 2025, after gaming and NFL-related content). The website Pornhub is already reporting that views of VR content are up 225% since it debuted in 2016. This obviously isn’t an ethical problem in and of itself, but it does become problematic if/when “unpalatable” fantasies become immersive. We have to ask: should there be limitations on uber-realistic representations of aggressive, borderline-pedophilic, or other more perverse types of VR erotica? Or outside of the realm of porn, to what extent is it okay to make a game out of the events of 9/11, as is the case with the 08.46 simulator?
7. Torture/virtual criminality
There’s been some suggestion that VR headsets could be employed by the military as a kind of “ethical” alternative to regular interrogatory torture. Whether this is truth or rumor, it nevertheless establishes a critical need to understand the status of pain, damage, violence, and trauma inflicted by other users in a virtual environment – be it physical or psychological. At what point does virtual behavior constitute a real-world criminal act?
8. Manipulation
Attempts at corporate manipulation via flashy advertising tricks are not new, but up until now they’ve been 2-dimensional. As such, they’ve had to work hard to compete with our distracted focus: phones ringing, babies crying, traffic, conversations, music, noisy neighbors, interesting reads, and all the rest. With VR, commercial advertisers essentially have access to our entire surrounding environment (which some hold has the power to control our behavior). This ramps up revenue for developers, who now have (literally) whole new worlds of blank space upon which they can sell advertising. Commentators are already warning that this could lead to new and clever tactics involving product placement, brand integration and subliminal advertising.
9. Appropriate roaming and re-creation
One of the most exciting selling points of VR is that it can let us roam the earth from the comfort of our own homes. This is obviously a laudable, liberating experience for those who are unable to travel. As with augmented reality, however, we probably need to have conversations about where it is appropriate to roam and/or re-create as a virtual experience. Is it fine for me to wander through a re-creation of my favorite celebrity’s apartment (I can imagine many fans would adore the idea!)? Or peep through the windows of homes and businesses on any given city street? The answers to some of these questions may seem obvious to us, but we cannot assume that the ethical parameters of this capability are clear to all who may use or develop it.
10. Privacy and data
Last, but not least, the more we “merge” into a virtual world, the more of ourselves we are likely to give away. This might mean more and greater privacy worries. German researchers have raised the concern that if our online avatars mirror our real-world movements and gestures, these “motor intentions” and the “kinetic fingerprints” of our unique movement signatures can be tracked, read, and exploited by predatory entities. Again, it’s clear that there needs to be an open and consultative dialogue with regard to what is collectable, and what should be off-limits in terms of our virtual activities.
This list is not exhaustive, and some of these concerns will be proven groundless in good time. Regardless, as non-technicians and future users, we are right to demand full and clear explanations as to how these tripwires will be avoided or mitigated by VR companies.