This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.
The way we interact with technology keeps changing. Of late, many more of us are using speech and gesture to give instructions to our devices, and it's actually starting to feel natural. We tell Alexa to turn the lights off, silence our smart watches by smothering them with our palms, and unlock our phones with a look. For this to work as seamlessly as it does, our devices have to attentively watch and listen to us. Pretty soon they could begin to understand and anticipate our emotional needs, too.
The move towards what's been called implicit understanding – in contrast with explicit interaction – will be facilitated by technologies like emotion-tracking AI: technology that uses cues from our vocal tone, facial expressions, and other micro-movements to determine our mood and, from there, our needs. According to researchers at Gartner, very soon our fridge will be able to suggest food to match our feelings, and research VP Annette Zimmerman has even claimed that, "By 2022, your personal device will know more about your emotional state than your own family."
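To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of implicit, emotion-aware interaction. Nothing in it comes from Gartner or any real product: the cues, thresholds, and suggestions are all invented, and a real system would use trained models over audio and video rather than hand-written rules.

```python
# Illustrative sketch only: a toy rule-based "emotion tracker" that maps a
# few hypothetical signal readings to a mood label, then to a fridge-style
# suggestion. All names and thresholds here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Cues:
    vocal_pitch_variance: float   # flat speech can signal low mood
    smile_intensity: float        # 0.0 (none) to 1.0 (broad smile)
    fidget_rate: float            # micro-movements per minute


def infer_mood(cues: Cues) -> str:
    """Very rough heuristic stand-in for an emotion-tracking model."""
    if cues.smile_intensity > 0.6:
        return "happy"
    if cues.vocal_pitch_variance < 0.2 and cues.smile_intensity < 0.3:
        return "low"
    if cues.fidget_rate > 30:
        return "stressed"
    return "neutral"


FRIDGE_SUGGESTIONS = {
    "happy": "ingredients for the new recipe you bookmarked",
    "low": "your usual comfort food",
    "stressed": "something quick that needs no preparation",
    "neutral": "whatever is closest to its use-by date",
}

mood = infer_mood(Cues(vocal_pitch_variance=0.1, smile_intensity=0.2, fidget_rate=12))
print(f"Detected mood: {mood}; fridge suggests {FRIDGE_SUGGESTIONS[mood]}.")
```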
This article was originally written for the RE•WORK guest blog. This week YouTheData.com founder, Fiona McEvoy, will speak on a panel at the San Francisco Summit.
The world is changing, and that change is being driven by new and emerging technologies. They are changing the way we behave in our homes, workspaces, public places, and vehicles, and with respect to our bodies, pastimes, and associates. All the while, we are creating new dependencies and placing increasing amounts of faith in the engineers, programmers, and designers responsible for these systems and platforms.
As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?
YouTheData.com is delighted to feature a guest post by John Gray, the co-founder of MentionMapp Analytics. John is a media researcher and entrepreneur exploring how issues like the spread of misinformation and the exploitation of personal privacy are eroding trust in our social institutions and discourse. He's written numerous case studies and has co-authored "The Ecosystem of Fake: Bots, Information and Distorted Realities."
“It’s the bad people with bad intent that’s causing the problem, not technology” – Shane Luke, Sr. Director of Digital Innovation, Nike
We exude data, like the sweat that streams off our skin. It's the norm. Just as another new normal is the news of the latest PR tour by data breach apologists, full of empty promises that "we'll do better." Like the soles of an ultra-marathoner's shoes, the clichéd technocratic mindset of "moving fast and breaking things" and "asking for forgiveness rather than permission" is beginning to wear thin.
We accept that the devices in our pockets, and on our wrists, feet, and even our faces, are communicating data. Yet the data they produce becomes a target for bad actors. As technology weaves deeper into what we wear, there's more to our fashion statements than meets the eye.
Read YouTheData @ All Turtles
What The Google Duplex Debate Tells Us
“As we march further into a world in which human-AI distinctions are blurred, we need to ask whether we are comfortable chasing this kind of dupe… Just how important is it that our conversational bots sound exactly like real humans?” Read more.
Read YouTheData @ Slate
What Are Your Augmented Reality Property Rights?
“We were unprepared for many of the consequences of social media. Now is the time to address the many questions raised by the coming ubiquity of augmented reality.” Read more.
If you’d like to feature a contributor post on your blog or news site, please contact us here.
Cathy O'Neil's now-famous book, Weapons of Math Destruction, talks about the pernicious feedback loop that can result from contentious "predictive policing" AI. She warns that the models at the heart of this technology can sometimes reflect damaging historical biases learned from police records that are used as training data.
For example, it is perfectly possible for a neighborhood to have a higher number of recorded arrests due to past aggressive or racist policing policies, rather than a particularly high incidence of crime. But the unthinking algorithm doesn't recognize this untold story and will blindly forge ahead, predicting that the future will mirror the past and recommending the deployment of more police to these "hotspot" areas.
Naturally, the police then make more arrests at these sites, and the net result is that the algorithm receives data that makes the association grow even stronger.
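The dynamic O'Neil describes can be made concrete with a toy simulation. The sketch below is purely illustrative and does not come from her book: two areas share an identical underlying crime rate, but the area with more historical arrests is flagged as the "hotspot," receives more patrols, and therefore accumulates even more recorded arrests, which feeds back into the next year's prediction. All names, rates, and patrol counts are invented.

```python
# Toy simulation of the predictive-policing feedback loop (illustrative only).
import random

# Two neighborhoods with the SAME underlying crime rate...
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}

# ...but neighborhood A starts with more recorded arrests, reflecting
# past over-policing rather than more actual crime.
recorded_arrests = {"A": 120, "B": 60}


def predict_hotspot(arrests):
    """The 'unthinking algorithm': flag whichever area has the most
    recorded arrests, treating the past as a forecast of the future."""
    return max(arrests, key=arrests.get)


def patrol_year(arrests, patrols_per_area):
    """More patrols in an area mean more chances to record an arrest,
    even though the underlying crime rate never changes."""
    for area, patrols in patrols_per_area.items():
        for _ in range(patrols):
            if random.random() < TRUE_CRIME_RATE[area]:
                arrests[area] += 1


random.seed(0)
for year in range(1, 6):
    hotspot = predict_hotspot(recorded_arrests)
    # Send 300 patrols to the predicted hotspot and 100 to the other area.
    patrols = {area: 300 if area == hotspot else 100 for area in recorded_arrests}
    patrol_year(recorded_arrests, patrols)
    print(f"Year {year}: hotspot={hotspot}, recorded arrests={recorded_arrests}")

# Despite identical true crime rates, area A's arrest count pulls further
# ahead every year, so the model's association only grows stronger.
```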
It may not seem like it, but there is quite an arms race going on when it comes to interactive AI and virtual assistants. Every tech company wants their offering to be more intuitive…more human. Yet although they're improving, voice-activated assistants like Alexa and Siri are still pretty clunky, and often underwhelming in their interactions.
This obviously isn't great for developers who want to see them enter the workplace and supercharge sales. Continue reading
The Cambridge Analytica scandal is still reverberating in the media, garnering almost as much daily coverage as when the story broke in The New York Times on March 17. Facebook’s mishandling of user data has catalyzed a collective public reaction of disgust and indignation, and perhaps the most prominent public manifestation of this is the #DeleteFacebook movement. This vocal campaign is urging us to do exactly what it says: To vote with our feet. To boycott. To not just deactivate our Facebook accounts, but to eliminate them entirely. Continue reading
Predictive, data-driven software is becoming ubiquitous, and as such our reliance upon it is steadily intensifying. The locus of knowledge is becoming external to us again for the first time since the onset of humanism in the 18th century, and we increasingly prefer the forecasts of artificially intelligent systems to our own experience or intuition.
Of all the arenas in which these predictions fascinate and compel our decision-making, perhaps the most prevalent are those that see algorithms foretell the behaviors of our fellow human beings: what they prefer, what they react to, where they go, who they'll flirt with, whether they're likely to pay back a loan, or even commit a crime.
Quite simply, we are coming to believe that machines know us better than we can know ourselves. Continue reading
The rise and rise of tech, and the popularity of shows like Altered Carbon, are placing the idea of augmented humanity front and center. So-called "body hacking" is already popular enough to have its own annual convention, and well-respected AI pioneers like Siri inventor Tom Gruber have been evangelizing about technology that can, and will, be used to help humans achieve superhuman levels of cognitive function. Giving a TED Talk last year, Gruber asked: Continue reading
YouTheData.com is delighted to feature a guest post by John Gray, the co-founder of MentionMapp Analytics.
Love them or can't stand them, cats and memes have clawed their way into our cultures. Undoubtedly there's a hieroglyphic cat meme etched on a wall somewhere in the historical ruins of Egypt; believing otherwise is to suggest that ancient peoples were humorless. Amusement, cats, and memes aren't new cultural considerations, and neither is today's misinformation problem – popularized as "fake news."
As William Faulkner said: "The past is never dead. It's not even past." We can't escape the history of information and communication technologies, but we can choose to blithely ignore its evolution and the subsequent cultural, social, and political impact. Continue reading