From algorithmic bias to killer robots, fake news, and the now almost daily prophesying about the dangers of AI, it’s fair to say that tech is under scrutiny.
Episodes like the Cambridge Analytica scandal opened our eyes to the fact that some of our nearest and dearest technologies had become fully socialized before we truly understood the full force of their influence. Consequently, new tools and gadgets coming down the line are being closely examined so that we can begin to uncover any damaging consequences that could manifest 10, 20, or even 100 years from now.
This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.
The way we interact with technology keeps changing. Of late, many more of us are using speech and gesture to give instructions to our devices, and it’s actually starting to feel natural. We tell Alexa to turn the lights off, silence our smart watches by smothering them with our palms, and unlock our phones with a look. For this to work as seamlessly as it does, our devices have to attentively watch and listen to us. Pretty soon they could begin to understand and anticipate our emotional needs, too.
The move towards what’s been called implicit understanding – in contrast with explicit interaction – will be facilitated by technologies like emotion-tracking AI: technology that uses cues from our vocal tone, facial expressions, and other micro-movements to determine our mood and, from there, our needs. According to researchers at Gartner, very soon our fridge will be able to suggest food to match our feelings, and research VP Annette Zimmerman has even claimed that, “By 2022, your personal device will know more about your emotional state than your own family.”
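To give a flavour of what “reading” a mood from cues might look like, here is a minimal, hypothetical sketch. The feature names (smile intensity, pitch variance, speech rate), the mood centroids, and the `infer_mood` helper are all invented for illustration; real emotion-tracking systems learn these mappings from large labelled datasets rather than hand-written rules.

```python
import math

# Hypothetical "mood centroids": typical feature values for each mood,
# expressed as (smile_intensity, vocal_pitch_variance, speech_rate).
MOOD_CENTROIDS = {
    "happy":    (0.8, 0.6, 0.7),
    "stressed": (0.2, 0.9, 0.9),
    "tired":    (0.1, 0.2, 0.3),
}

def infer_mood(smile, pitch_var, rate):
    """Return the mood whose centroid is nearest to the observed cues."""
    observed = (smile, pitch_var, rate)
    return min(MOOD_CENTROIDS, key=lambda m: math.dist(observed, MOOD_CENTROIDS[m]))

# A flat expression, erratic pitch, and rapid speech reads as "stressed".
print(infer_mood(smile=0.15, pitch_var=0.85, rate=0.9))
```

A device that can make even this crude inference could then act on it, which is precisely the step from explicit interaction to implicit understanding.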
YouTheData.com is delighted to feature a guest post by John Gray, the co-founder of MentionMapp Analytics. John is a media researcher and entrepreneur exploring how issues like the spread of misinformation and the exploitation of personal privacy are eroding trust in our social institutions and discourse. He’s written numerous case studies and has co-authored “The Ecosystem of Fake: Bots, Information and Distorted Realities.”
“It’s the bad people with bad intent that’s causing the problem, not technology” – Shane Luke, Sr. Director of Digital Innovation, Nike
We exude data, like the sweat that streams off our skin. It’s the norm. Another new normal is the news of the latest PR tour by data-breach apologists, full of empty promises that “we’ll do better”. Like the soles of an ultra-marathoner’s shoes, the clichéd technocratic mindset of “moving fast, breaking things” and “asking for forgiveness rather than permission” is beginning to wear thin.
We accept that the devices in our pockets, and on our wrists, feet, and even our faces, are communicating data. Yet the data they produce becomes a target for bad actors. As technology weaves deeper into what we wear, there’s more to our fashion statements than meets the eye.
Cathy O’Neil’s now-famous book, Weapons of Math Destruction, talks about the pernicious feedback loop that can result from contentious “predictive policing” AI. She warns that the models at the heart of this technology can sometimes reflect damaging historical biases learned from the police records used as training data.
For example, it is perfectly possible for a neighborhood to have a higher number of recorded arrests due to past aggressive or racist policing policies, rather than a particularly high incidence of crime. But the unthinking algorithm doesn’t recognize this untold story and will blindly forge ahead, predicting that the future will mirror the past and recommending the deployment of more police to these “hotspot” areas.
Naturally, the police then make more arrests in these areas, and the net result is that the algorithm receives data that makes the association grow even stronger.
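To make the mechanics of that loop concrete, here is a minimal simulation sketch. The two-neighborhood setup, all the numbers, and the `predict_hotspot` helper are hypothetical, invented for illustration; this is not O’Neil’s model, just the dynamic she describes.

```python
import random

random.seed(42)

# Two neighborhoods with the SAME underlying crime rate, but "A" starts with
# more recorded arrests thanks to a history of aggressive policing.
true_crime_rate = {"A": 0.10, "B": 0.10}
recorded_arrests = {"A": 120, "B": 60}

def predict_hotspot(arrests):
    """The 'unthinking algorithm': assume the future will mirror the past."""
    return max(arrests, key=arrests.get)

for year in range(5):
    hotspot = predict_hotspot(recorded_arrests)
    # Far more patrol-hours are deployed to the predicted hotspot...
    patrol_hours = {n: 1000 if n == hotspot else 200 for n in recorded_arrests}
    for n, hours in patrol_hours.items():
        # ...so more of the (identical) underlying crime is observed there,
        new_arrests = sum(random.random() < true_crime_rate[n] for _ in range(hours))
        # ...and those extra arrests feed straight back into the training data.
        recorded_arrests[n] += new_arrests
    print(f"Year {year}: hotspot={hotspot}, arrests={recorded_arrests}")
```

Run it and neighborhood A is flagged every year, racking up roughly five times as many new recorded arrests as B despite identical true crime rates: the biased record doesn’t just persist, it compounds.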
It may not seem like it, but there is quite an arms race going on when it comes to interactive AI and virtual assistants. Every tech company wants its offering to be more intuitive…more human. Yet although they’re improving, voice-activated assistants like Alexa and Siri are still pretty clunky, and often underwhelming in their interactions.
This obviously isn’t great for developers who want to see these assistants enter the workplace and supercharge sales.
Jenny Morris – a disabled feminist and scholar – has argued that the term “disability” shouldn’t refer directly to a person’s impairment. Rather, it should be used to identify someone who is disadvantaged by the disabling external factors of a world designed by and for those without disabilities.
Her examples: “My impairment is the fact I can’t walk; my disability is the fact that the bus company only purchases inaccessible buses” or “My impairment is the fact that I can’t speak; my disability is the fact that you won’t take the time and trouble to learn how to communicate with me.”
According to Morris, any denial of opportunity is not simply a result of bodily limitations. It is also down to the attitudinal, social, and environmental barriers facing disabled people.