AI needs cooperation, not an arms race

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


Writing in the New York Times recently, venture capitalist Kai-Fu Lee signaled an important, oncoming change in the way we think about artificial intelligence. We are graduating, he cautioned, from an age of discovery and vision into a more practical era of implementation.

Lee is promoting his new book, titled A.I. Superpowers: China, Silicon Valley, and the New World Order, and he suggests that this transition from lab to launchpad may naturally privilege Chinese advantages—like data abundance and government investment—above the research capabilities and “freewheeling intellectual environment” of the U.S.


If internet trolls are cybercriminals, can AI stop them?

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


In May 2017, the WannaCry ransomware attack made international headlines. The breach (which was later linked to North Korea) used leaked NSA tools to target businesses that were running outdated Windows software. WannaCry wreaked havoc by encrypting user data and then demanding Bitcoin ransom payments. Hackers gave victims seven days to pay, threatening to delete the files of those who wouldn’t comply.

Though a “kill switch” was ultimately discovered, the attack affected over 200,000 businesses in 150 countries. It has been estimated that WannaCry caused hundreds of millions, and perhaps even billions, of dollars of damage.

Despite the alarm and headlines associated with it, the WannaCry attack was neither unique nor especially surprising. In today’s connected world we have almost become accustomed to these types of hostile acts. Yahoo. Equifax. Ashley Madison. The list goes on. Technology has catalyzed big changes to our conception of crime, and while the word still attaches itself to physical infringements like theft and assault, “crime” now captures a broad range of clandestine activities, including so-called cybercrimes.


Good Gadgets: The rise of socially conscious tech


From algorithmic bias to killer robots, fake news, and the now almost daily prophesying about the dangers of AI, it’s fair to say that tech is under scrutiny.

Episodes like the Cambridge Analytica scandal opened our eyes to the fact that some of our nearest and dearest technologies had become fully socialized before we truly understood the force of their influence. Consequently, new tools and gadgets coming down the line are being closely examined so that we can begin to uncover any damaging consequences that could manifest 10, 20, or even 100 years from now.


Peer pressure: An unintended consequence of AI

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


Last winter, Kylie Jenner tweeted that she had stopped using Snapchat, and almost immediately the company’s shares dropped six percent, losing $1.3 billion in value. Her seemingly innocent comments had led investors to believe that the 20-year-old’s 25 million followers would do the same, and the knock-on effect would seal the social media app’s fate as a “has-been” among its key demographic of younger women.

This astonishing event demonstrates in technicolor how the notion of influence is evolving and taking on a new significance. In the age of technology, though influence is still associated with power, it is no longer the limited reserve of “the Powerful”—i.e., those in recognized positions of authority, like bankers, lawyers, or politicians.


Responsibility & AI: ‘We All Have A Role When It Comes To Shaping The Future’

This article was originally written for the RE•WORK guest blog. This week YouTheData.com founder, Fiona McEvoy, will speak on a panel at the San Francisco Summit.


The world is changing, and that change is being driven by new and emerging technologies. They are transforming the way we behave in our homes, work spaces, public places, and vehicles, and with respect to our bodies, pastimes, and associates. All the time we are creating new dependencies, and placing increasing amounts of faith in the engineers, programmers, and designers responsible for these systems and platforms.

As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?


Designing for Bad Intentions: Wearables and Cyber Risks

YouTheData.com is delighted to feature a guest post by John Gray, the co-founder of MentionMapp Analytics. John is a media researcher and entrepreneur exploring how issues like the spread of misinformation, and the exploitation of personal privacy are eroding trust in our social institutions and discourse. He’s written numerous case studies and has co-authored “The Ecosystem of Fake: Bots, Information and Distorted Realities.” 


“It’s the bad people with bad intent that’s causing the problem, not technology” – Shane Luke, Sr. Director of Digital Innovation, Nike

We exude data, like the sweat that streams off our skin. It’s the norm. Just as another new normal is the news of the latest PR tour by data-breach apologists, full of empty promises that “we’ll do better”. Like the soles of an ultra-marathoner’s shoes, the clichéd technocratic mindset of “moving fast, breaking things” and “asking for forgiveness rather than permission” is beginning to wear thin.

We accept that the devices in our pockets, and on our wrists, feet, and even our faces, are communicating data. Yet the data they produce becomes a target for bad actors. As technology weaves deeper into what we wear, there’s more to our fashion statements than meets the eye.


The Negative Feedback Loop: Technology Needs To Know When It Gets Things Wrong


Cathy O’Neil’s now-famous book, Weapons of Math Destruction, talks about the pernicious feedback loop that can result from contentious “predictive policing” AI. She warns that the models at the heart of this technology can sometimes reflect damaging historical biases learned from police records that are used as training data.

For example, it is perfectly possible for a neighborhood to have a higher number of recorded arrests due to past aggressive or racist policing policies, rather than a particularly high incidence of crime. But the unthinking algorithm doesn’t recognize this untold story and will blindly forge ahead, predicting that the future will mirror the past and recommending the deployment of more police to these “hotspot” areas.

Naturally, the police then make more arrests in these areas, and the net result is that the algorithm receives data that makes its association even stronger.
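The dynamic O’Neil describes can be sketched as a toy simulation (every number here is invented purely for illustration, not drawn from any real policing model): two areas have identical true crime rates, but one carries a slightly inflated arrest history, and each year the “predictive” model concentrates patrols wherever past arrests are highest.

```python
# Toy feedback-loop simulation: two areas with IDENTICAL true crime rates.
# Area 0 starts with a mildly inflated arrest history. Each year the model
# labels the area with more recorded arrests a "hotspot" and concentrates
# patrols there; patrols generate arrests, which feed next year's prediction.

TRUE_CRIME_RATE = [0.1, 0.1]     # the two areas are actually the same
arrests = [110.0, 100.0]         # historical bias: area 0 slightly over-policed
HOTSPOT_PATROLS, OTHER_PATROLS = 70, 30

shares = []                      # area 0's share of all recorded arrests
for year in range(15):
    hotspot = 0 if arrests[0] >= arrests[1] else 1
    patrols = [OTHER_PATROLS, OTHER_PATROLS]
    patrols[hotspot] = HOTSPOT_PATROLS
    # recorded arrests scale with patrol presence, not with true crime
    for i in range(2):
        arrests[i] += patrols[i] * TRUE_CRIME_RATE[i] * 10
    shares.append(arrests[0] / sum(arrests))

print(f"Area 0's share of recorded arrests: {shares[0]:.3f} -> {shares[-1]:.3f}")
```

Despite equal underlying crime, area 0’s share of recorded arrests climbs from roughly 0.58 toward 0.68 and never corrects, because the data the model “learns” from is produced by its own deployment decisions.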


Why Can’t We #DeleteFacebook?: 4 Reasons We’re Reluctant


The Cambridge Analytica scandal is still reverberating in the media, garnering almost as much daily coverage as when the story broke in The New York Times on March 17. Facebook’s mishandling of user data has catalyzed a collective public reaction of disgust and indignation, and perhaps the most prominent manifestation of this is the #DeleteFacebook movement. This vocal campaign is urging us to do exactly what it says: to vote with our feet. To boycott. To not just deactivate our Facebook accounts, but to eliminate them entirely.

The Eyes Have It: Three Reasons to be Cautious About Emotion-Tracking Recruitment AI


Predictive, data-driven software is becoming ubiquitous, and as such our reliance upon it is steadily intensifying. The locus of knowledge is becoming external to us again for the first time since the onset of humanism in the 18th century, and we increasingly prefer the forecasts of artificially intelligent systems to our own experience or intuition.

Of all the arenas in which these predictions fascinate and compel our decision-making, perhaps the most prevalent are those that see algorithms foretell the behaviors of our fellow human beings: what they prefer, what they react to, where they go, who they’ll flirt with, whether they’re likely to pay back a loan, or even commit a crime.

Quite simply, we are coming to believe that machines know us better than we can know ourselves.