The following is a guest post by Erin Green, PhD, a Brussels-based AI ethics and public engagement specialist. For more on the European scene, check out my recent interview with Hill + Knowlton Strategies “Creating Ethical Rules for AI.”
When it comes to the global AI stage, China and the US consistently grab headlines as their so-called arms race heats up, while countries like Japan and South Korea lead the way in innovation and social receptivity. Europe, though, is taking a slightly different approach – partly by choice, partly by design.
The 28 countries (Brexit pending) that make up the economic and political bloc of the European Union each have a stake in the AI game. Bigger, richer players like the UK (pledging 1000 places for PhDs in AI) and Germany (€3 billion invested in the coming years) are sinking eye-widening resources into keeping up with the proverbial Joneses. Smaller nations, like Malta and its not-quite 500,000 people, are turning to foreign investment and partnerships to guarantee a spot in the major leagues.
Somewhat independent of these interests, the EU itself is trying to carve out space in terms of regulatory prowess and in bringing coherence to a rather chaotic European AI scene. Think this is a bureaucratic exercise with not much reach or consequence beyond the Berlaymont? Just remember all those GDPR emails that clogged up your inbox sometime around May 25, 2018. The EU has real regulatory reach.
This article was originally written for the RE•WORK guest blog. This week YouTheData.com founder, Fiona McEvoy, will speak on a panel at the San Francisco Summit.
The world is changing, and that change is being driven by new and emerging technologies. They are reshaping the way we behave in our homes, workspaces, public places, and vehicles, and in relation to our bodies, pastimes, and associates. All the while, we are creating new dependencies and placing increasing amounts of faith in the engineers, programmers, and designers responsible for these systems and platforms.
As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?
YouTheData.com is delighted to feature a guest post by John Gray, the co-founder of MentionMapp Analytics.
Love them or can’t stand them, cats and memes have clawed their way into our cultures. Undoubtedly there’s a hieroglyphic cat meme etched on a wall somewhere in the historical ruins of Egypt. Believing otherwise is to suggest that ancient peoples were humorless. Amusement, cats, and memes aren’t new cultural considerations, and neither is today’s misinformation problem – popularized as “fake news.”
As William Faulkner said: “The past is never dead. It’s not even past.” We can’t escape the history of information and communication technologies, but we can choose to blithely ignore its evolution and the subsequent cultural, social, and political impact.
The Cottingley Fairies
As humans, we are accustomed to suspending our disbelief. Indeed, we’re known to indulge in it. Each time we dive into a book, a movie, a video game, a TV show – even a spiritual flight-of-fancy – most of us are willing and able to disengage from the pedantry of our everyday judgment, and allow ourselves to be convinced by things that are less-than-absolutely-convincing…
This coaxing is a consensual arrangement. I allow you to present me with the improbable on the proviso that it is entertaining, or educational, or uplifting, or philosophical – i.e. my pay-off is that I am emotionally stimulated in some way. I don’t need to scrutinize a movie in its every detail; what is important when I watch it is that I enjoy it and it makes me happy (or scared, or angry, or sentimental!).
Writing for Quartz, international dispute lawyer Jacob Turner elaborates on the dangers of letting Silicon Valley execs set their own rules:
“We wouldn’t trust a doctor employed by a tobacco company. We wouldn’t let the automobile industry set vehicle-emissions limits. We wouldn’t want an arms maker to write the rules of warfare. But right now, we are letting tech companies shape the ethical development of AI.”
Read the whole article here: Letting Facebook control AI regulation is like letting the NRA control gun laws.
We’ve all seen the stories and allegations of Russian bots manipulating the Trump-Clinton US election and, most recently, the FCC debate on net neutrality. Yet far from such high stakes arenas, there’s good reason to believe these automated pests are also contaminating data used by firms and governments to understand who we (the humans) are, as well as what we like and need with regard to a broad range of things…
Artificial Intelligence is becoming ever more sophisticated in its deductions. This has caused many to consider its role in the governance of countries, states, cities, and towns. I believe there’s a strong case to be made for its integration into politics and power.