We’re delighted to feature a guest post from Grainne Faller and Louise Holden of the Magna Carta For Data initiative.
The project was established in 2014 by the Insight Centre for Data Analytics – one of the largest data research centres in Europe – as a statement of its commitment to ethical data research within its labs, and the broader global movement to embed ethics in data science research and development.
A self-driving car is hurtling towards a group of people in the middle of a narrow bridge. Should it drive on, and hit the group? Or should it drive off the bridge, avoiding the group of people but almost certainly killing its passenger? Now, what if there are three people on the bridge but five people in the car? Can you – should you – design algorithms that will change the way the car reacts depending on these situations?
This is just one of millions of ethical issues faced by researchers of artificial intelligence and big data every hour of every day around the world.
It might not be a question you’re asking yourself right now, but according to a California-based developer of artificially intelligent sex robots, they will soon be as popular as porn.
This is, at least, the hope of Matt McMullen. He’s the founder of RealDoll, a “love doll” company featured in the documentary “The Sex Robots Are Coming”. The film seeks to convince its audience that combining undeniably lifelike dolls like Matt’s with interactive, artificially intelligent features will lead to an explosion in the market for robotic lovers.
But is this okay? Many say that it absolutely isn’t.
The beginnings of the internet seem so long ago to those of us who lived through them. Hours spent trawling through pre-Google search results, which often ranged from the useless to the bizarre. Blindly researching gifts and listening to music, sans intelligently selected recommendations. Checking social media accounts of our own volition, rather than through prompting from “notifications”.
Then the world began to change.
Under the banner of convenience, clever algorithms started to adapt both to our interests and – critically – the interests of commercial entities. We saw (or rather didn’t see) the covert introduction of the digital “nudges” that now regularly play upon our cognitive blind spots, and work to “guide” our decision-making.
Writing for Quartz, international dispute lawyer Jacob Turner elaborates on the dangers of letting Silicon Valley execs set their own rules:
“We wouldn’t trust a doctor employed by a tobacco company. We wouldn’t let the automobile industry set vehicle-emissions limits. We wouldn’t want an arms maker to write the rules of warfare. But right now, we are letting tech companies shape the ethical development of AI.”
Read the whole article here: Letting Facebook control AI regulation is like letting the NRA control gun laws.
There’s a phrase – from where I don’t know – which says: “If you aren’t paying, you’re the product.” Never has this felt truer than in the context of social media. Particularly Facebook, with its fan-pages and features, games and gizmos, plus never-ending updates and improvements. Who is paying for this, if not you…and what are they getting in return? The answer is actually quite straightforward.
Not for the first time, Apple CEO Tim Cook has spoken out this week about how important it is for children to learn computer code. He’s not alone in believing that this “language of the future” will be critical for kids growing up right now. In a sea of unknowns one thing appears to be certain: technical understanding is a very valuable asset indeed.
It’s interesting, then, that in spite of remarkable efforts to equip the adults of tomorrow with such skills, very little is being done to familiarize young adults, middle-aged parents, or retirees (with impressively long life expectancies!) with the signature terms of the “AI Age”. This seems like an oversight.