YouTheData.com is delighted to feature a guest post by John Gray, the co-founder of MentionMapp Analytics. John is a media researcher and entrepreneur exploring how issues like the spread of misinformation, and the exploitation of personal privacy are eroding trust in our social institutions and discourse. He’s written numerous case studies and has co-authored “The Ecosystem of Fake: Bots, Information and Distorted Realities.”
“It’s the bad people with bad intent that’s causing the problem, not technology” – Shane Luke, Sr. Director of Digital Innovation, Nike
We exude data, like the sweat that streams off our skin. It’s the norm. Just as another new normal is the news of the latest PR tour by data-breach apologists, full of empty promises that “we’ll do better”. Like the soles of an ultra-marathoner’s shoes, the clichéd technocratic mindset of “moving fast and breaking things” and “asking for forgiveness rather than permission” is beginning to wear thin.
We accept that the devices in our pockets, and on our wrists, feet, and even our faces, are communicating data. Yet the data they produce becomes a target for bad actors. As technology weaves deeper into what we wear, there’s more to our fashion statements than meets the eye.
Fraudsters typically line their pockets by forging our signatures, cloning our credit cards, and stealing our personal identities. Yet we’d like to think that the folks who know us personally – our family, friends, colleagues, and acquaintances – would catch these counterfeiters out if they brazenly claimed to be us in public. After all, seeing is believing, isn’t it? If you don’t look like me, you’re not me. If you do look like me, the chances are that you are me. Right?
Well…maybe. And this could soon become the subject of some confusion.
Imagine if stealing your identity could include stealing your image. And if scammers could then use that image to put words in your mouth and – in some cases – fake your very actions. This isn’t just some outlandish thought experiment, but a foreseeable hazard if we fail to prepare for a surge in the production of “deepfakes”.
We’re delighted to feature a guest post from Grainne Faller and Louise Holden of the Magna Carta For Data initiative.
The project was established in 2014 by the Insight Centre for Data Analytics – one of the largest data research centres in Europe – as a statement of its commitment to ethical data research within its labs, and the broader global movement to embed ethics in data science research and development.
A self-driving car is hurtling towards a group of people in the middle of a narrow bridge. Should it drive on, and hit the group? Or should it drive off the bridge, avoiding the group of people but almost certainly killing its passenger? Now, what about if there are three people on the bridge but five people in the car? Can you – should you – design algorithms that will change the way the car reacts depending on these situations?
This is just one of millions of ethical issues faced by researchers of artificial intelligence and big data every hour of every day around the world.
There’s a phrase – I don’t know where it comes from – which says: “If you aren’t paying, you’re the product.” Never has this felt truer than in the context of social media. Particularly Facebook, with its fan pages and features, games and gizmos, plus never-ending updates and improvements. Who is paying for all this, if not you…and what are they getting in return? The answer is actually quite straightforward.