We’re delighted to feature a guest post from Grainne Faller and Louise Holden of the Magna Carta for Data initiative.
The project was established in 2014 by the Insight Centre for Data Analytics – one of the largest data research centres in Europe – as a statement of its commitment to ethical data research within its labs, and to the broader global movement to embed ethics in data science research and development.
A self-driving car is hurtling towards a group of people in the middle of a narrow bridge. Should it drive on and hit the group? Or should it drive off the bridge, avoiding the group but almost certainly killing its passenger? Now, what if there are three people on the bridge but five people in the car? Can you – should you – design algorithms that change how the car reacts in each of these situations?
This is just one of the many ethical issues that researchers in artificial intelligence and big data confront every day, all over the world.
And it’s not just researchers. Say you’re a parent and your child’s school is taking part in a national data project to track the health status of children. You believe the project is valuable, so you give consent for your child’s lifestyle and health information to be collected. A few years later, however, your child turns 18 and doesn’t want her early health and lifestyle profile to be available to researchers. How do we sort that issue out? Withdrawing the data might compromise a project that will benefit many, and consent has already been given – but can your child withdraw consent that was given on her behalf?
These are issues that need discussion and examination. Politicians, philosophers, ethicists, lawyers, human rights experts, technology designers and a multitude of others all agree that we need to be aware of these issues and that we need to protect human values in the age of artificial intelligence. And yet our thinking is not keeping pace with the rate at which these technologies are being developed.
How do you plan for the future when you don’t know what questions and issues are coming down the line? How can you put a framework in place when something unprecedented might be around the corner?
These are the questions that are paralysing the ethics conversation around the world. It all seems too complex, and yet we can’t allow this to get any further away from us than it already has. So where on earth do we start?
Well, perhaps we should start with the ethical questions that already exist. It sounds simplistic, but very little of this examination is actually happening. The problem is that the people who develop the technology and the people who develop ethical thought and theory tend to live in different academic worlds. They speak different languages. So how do we bring them closer together?
We should be asking researchers working in AI and big data around the world about the ethical issues they face in the course of their work. We should then make those questions available to the people whose job it is to find solutions to ethical problems.
We have been looking at AI and big data research from the top of a mountain, trying to catch it all in a big ethics framework. We need to keep doing that, but we also need to get down on the ground and find out what ethical issues researchers are dealing with today. If we know what issues are arising today, we will be more prepared for the unknown down the line.
A new website, www.magnacartafordata.org, is attempting to do exactly this. It gathers real-life case studies and experiences of ethical issues from data and AI researchers and makes them publicly available for the general public and for researchers in other disciplines to read.
The issues that arise for individual researchers tend to be less dramatic than the self-driving car conundrum.
Take researchers who use Twitter data, for example. To open a Twitter account, you have to agree to Twitter’s terms and conditions. Did you know that by doing so you have consented to your data being used for research purposes? While researchers are allowed to use your data, many of them feel uncomfortable about it because they don’t believe that ticking a box at the end of a document nobody reads constitutes proper consent. This is an issue that comes up frequently.
Or how about someone who collects social media data to track interactions within communities of people who have, say, an eating disorder? The researcher may have every intention of helping these communities, but could a health insurance company take that same dataset and use it to determine who may have had bulimia as a teenager? Such unintended, negative consequences of big data and AI research are another issue that weighs on researchers’ minds.
By gathering these sorts of questions, we are learning about the details and subtleties of the current ethics landscape. We are providing a place for researchers to air these concerns, and we are making those concerns available to experts who can help.
Interestingly, we are finding that researchers are beginning to talk directly with ethicists and to come up with solutions to ethical issues themselves. We’re essentially crowdsourcing solutions to ethical problems.
It’s not the whole solution, but it’s certainly part of it and we’re excited to see what the future brings.
Follow the Magna Carta for Data project on Twitter at @DataEthicsIre.