How Do We Solve A Problem Like Election Prediction?

On November 3, two oppositional forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction? 

At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think. 

Continue reading

Healthbots: the new caregivers

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


Movie tickets bought, travel booked, customer service problems resolved. Chatbots perform so many tasks that the best ones blend into the background of everyday transactions and are often overlooked. They’re being adopted seamlessly by one industry after the next, but their next widespread application poses unique challenges.

Now healthbots are poised to become the new frontline for triage, replacing human medical professionals as the first point of contact for the sick and the injured.

Continue reading

Good Gadgets: The rise of socially conscious tech


From algorithmic bias to killer robots, fake news, and the now almost daily prophesying about the dangers of AI, it’s fair to say that tech is under scrutiny.

Episodes like the Cambridge Analytica scandal opened our eyes to the fact that some of our nearest and dearest technologies had become fully socialized before we truly understood the full force of their influence. Consequently, new tools and gadgets coming down the line are being closely examined so that we can begin to uncover any damaging consequences that could manifest 10, 20, or even 100 years from now.

Continue reading

AI and the future shape of product design

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


These days we talk so much about artificial intelligence and its creators that it’s easy to overlook the increasingly prolific role AI itself is playing in product creation and design. Across different industries, the technical and the creative are being drawn closely together to create a range of products that may otherwise never have been conceived.

Blowing past the wind tunnel

Take, for example, the new aerodynamic bicycle presented this month at the International Conference on Machine Learning, which was designed using Neural Concept software. By employing AI in the design phase, a small team from the French college IUT Annecy were able to completely bypass the usual methods of testing for aerodynamic performance – a process that usually requires a great deal of time and computing power.
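For a rough sense of how an AI-assisted design loop of this kind can work, here is a minimal sketch (not Neural Concept's actual software; every parameter, dataset, and drag value is invented for illustration). A small neural-network surrogate is fitted to a handful of previously simulated designs, then used to score thousands of candidate shapes far more cheaply than running each one through a full aerodynamic simulation.

```python
# Minimal sketch of a surrogate-model design loop -- not Neural Concept's
# software. A tiny neural net learns (design parameters -> drag) from a few
# prior simulations, then scores thousands of candidates cheaply.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Invented training data: design parameters (e.g. tube width, frame angle,
# seat height) paired with drag values from earlier, expensive simulations.
past_designs = rng.uniform(0.0, 1.0, size=(50, 3))
past_drag = (0.3
             + 0.2 * past_designs[:, 0]      # wider tubes -> more drag
             - 0.1 * past_designs[:, 1]      # steeper angle -> less drag
             + 0.05 * rng.normal(size=50))   # simulation noise

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(past_designs, past_drag)

# Score many candidate designs with the cheap surrogate instead of re-simulating.
candidates = rng.uniform(0.0, 1.0, size=(10_000, 3))
predicted_drag = surrogate.predict(candidates)
best = candidates[np.argmin(predicted_drag)]
print("Most promising design parameters:", np.round(best, 3))
```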

Continue reading

The Eyes Have It: Three Reasons to be Cautious About Emotion-Tracking Recruitment AI


Predictive, data-driven software is becoming ubiquitous, and as such our reliance upon it is steadily intensifying. The locus of knowledge is becoming external to us again for the first time since the onset of humanism in the 18th century, and we increasingly prefer the forecasts of artificially intelligent systems to our own experience or intuition.

Of all the arenas in which these predictions fascinate and compel our decision-making, perhaps the most prevalent are those that see algorithms foretell the behaviors of our fellow human beings. What they prefer, what they react to, where they go, who they’ll flirt with, whether they’re likely to pay back a loan, or even commit a crime.

Quite simply, we are coming to believe that machines know us better than we can know ourselves.

Continue reading

In the future, we could solve all crime. But at what cost?

It’s difficult to read, or even talk, about technology at the moment without that word “ethics” creeping in. How will AI products affect users down the line? Can algorithmic decisions factor in the good of society? How might we reduce the number of fatal road collisions? What tools can we employ to prevent or solve all crime?


Now, let’s just make it clear from the off: these are all entirely honorable motives, and their proponents should be lauded. But sometimes even the drive toward an admirable aim – the prevention of bad consequences – can ignore critical tensions that have been vexing thinkers for years.

Even if we agree that the consequences of an act are of real import, there are still other human values that can – and should – compete with them when we’re weighing up the best course of action.

Continue reading

Want Artificial Intelligence that cares about people? Ethical thinking needs to start with the researchers

We’re delighted to feature a guest post from Grainne Faller and Louise Holden of the Magna Carta For Data initiative.

The project was established in 2014 by the Insight Centre for Data Analytics – one of the largest data research centres in Europe – as a statement of its commitment to ethical data research within its labs, and to the broader global movement to embed ethics in data science research and development.


A self-driving car is hurtling towards a group of people in the middle of a narrow bridge. Should it drive on, and hit the group? Or should it drive off the bridge, avoiding the group of people but almost certainly killing its passenger? Now, what if there are three people on the bridge but five people in the car? Can you – should you – design algorithms that will change the way the car reacts depending on these situations?

This is just one of millions of ethical issues faced by researchers of artificial intelligence and big data every hour of every day around the world.

Continue reading

Facebook wants you naked…and it’s for your own good


***UPDATE: Contrary to yesterday’s reporting, the BBC has now corrected its article on Facebook’s new “revenge porn” AI to include this rather critical detail:

“Humans rather than algorithms will view the naked images voluntarily sent to Facebook in a scheme being trialled in Australia to combat revenge porn. The BBC understands that members of Facebook’s community operations team will look at the images in order to make a “fingerprint” of them to prevent them being uploaded again.”

So now young victims will have the choice of mass humiliation, or faceless scrutiny…

Continue reading

What if Twitter could help predict a death?

I want to use this blog to look at how data and emerging technologies affect us – or more precisely YOU. As a tech ethics researcher, I’m perpetually reading articles and reports that detail the multitude of ways in which data can be used to anticipate bad societal outcomes: criminality, abuse, corruption, disease, poor mental health, and so on. Some of these get oxygen, some of them don’t. Some of them have integrity, some don’t. Often these tests, analyses, and studies identify problems that gesture toward ethically “interesting” solutions.

Just today this article caught my attention. It details a Canadian study that tries to get to grips with an endemic problem: suicide in young people. Just north of the border, suicide accounts for no less than 24% of deaths amongst those aged between 15 and 24 (Canadian Mental Health Association). Clearly, this is not a trivial issue.

In response, a group of researchers have tried to determine the signs of self-harm and suicide by studying the social media posts of those in the most vulnerable age bracket. The team – from SAS Canada – have even speculated that “these new sources could provide early indication of possible trends to guide more formal surveillance activities.” So, with the prospect of officialdom being dangled before us, it’s important to ask how this social media analysis works. In short, might any one of us end up being surveilled as a suicide risk if we happen to make a trigger comment or two on Twitter?

Well, the answer seems to be “possibly”. This work harvested 2.3 million tweets, of which 1.1 million were identified as “likely to have been authored by 13 to 17-year-olds in Canada”. This determination was made by a machine learning model that had been trained to predict age by relying on the way young people use language. So, if the algorithm thinks you tweet like a teenager, you’re potentially on the hook. From there, the team looked at which of these tweets related to depression and suicide, and “picked some specific buzzwords and created topics around them, and our software mined those tweets to collect the people.”
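To make the mechanics concrete, here is a minimal sketch of the kind of two-stage filter described above: a text classifier standing in for the age-prediction model, followed by a hand-picked buzzword filter. To be clear, this is not the SAS Canada code; the toy training tweets, the buzzword list, and the choice of scikit-learn are all assumptions made purely for illustration.

```python
# Illustrative two-stage filter: (1) guess whether a tweet "sounds teenage",
# (2) keep only guessed-teen tweets containing a hand-picked buzzword.
# All data and buzzwords below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: tweets tagged as written by a teen (1) or not (0).
train_tweets = [
    "ugh school tmrw lol whyyy",
    "omg that test was sooo hard fr",
    "Quarterly results exceeded analyst expectations.",
    "Enjoying retirement and the grandkids this weekend.",
]
train_labels = [1, 1, 0, 0]

# Age proxy: a classifier trained on "the way young people use language".
age_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
age_model.fit(train_tweets, train_labels)

# Hand-picked buzzwords standing in for the study's suicide/depression topics.
BUZZWORDS = {"hate my life", "want to disappear", "bullied"}

def flag_tweets(tweets):
    """Return tweets predicted teen-authored that also contain a buzzword."""
    flagged = []
    for text in tweets:
        looks_teen = age_model.predict([text])[0] == 1
        has_buzzword = any(b in text.lower() for b in BUZZWORDS)
        if looks_teen and has_buzzword:
            flagged.append(text)
    return flagged

print(flag_tweets([
    "i hate my life, school is the worst lol",
    "Board meeting moved to Thursday.",
]))
```

Even this toy version makes the objections below easy to see: both stages rest on proxies (style of language for age, keyword presence for intent), and neither is verified against ground truth.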

Putting aside the undoubtedly harrowing idea of people collection, it’s important to highlight the usefulness of this survey. The data scientists involved insist that the data they’ve collected can help them narrow down the Canadian regions which have a problem (although one might contest that the suicide statistics themselves should reveal this), and/or identify a particular school or a time of year in which the tell-tale signs are more widespread or stronger. This in turn can help better target campaigns and resources, which – of course – is laudable, particularly if it is an improvement on existing suicide statistics. It only starts to get ethically icky once we consider what further steps might be taken.

The technicians on the project speculate as to how this data might be used in the future. Remember, we are not dealing with anonymized surveys here, but real teen voices “out in the wild”: “He (data expert Jos Polfliet) envisions the solution being used to find not only at-risk teens, but others too, like first responders and veterans who may be considering suicide.”

Eh? Find them? Does that mean it might be used to actually locate real people based on what they’ve tweeted on their personal time? As with many well-meaning data projects, everything suddenly begins to feel a little Minority Report at this point. Although this study is quite obviously well-intentioned, we are fooling ourselves if we don’t acknowledge the levels of imprecision we’re dealing with here.

Firstly, without revealing the actual identities of every account holder picked out by the machine learning, we have no way of knowing the levels of accuracy these researchers have hit upon when it comes to monitoring 13 to 17-year-olds. Although the use of certain language and terminologies might be a good proxy for the age of the user, it certainly isn’t an infallible one in the wacky world of the internet.

Secondly, the same is true of suicide and depression-related buzzwords. Using a word or phrase typically associated with teen suicide is not a sufficient condition for a propensity towards suicide (indeed, it is unlikely to even be a necessary condition). As Seth Stephens-Davidowitz discussed in his new book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, in 2014 research found that there were 6,000 Google searches for the exact phrase “how to kill your girlfriend” and yet there were “only” 400 murders of girlfriends. In other words, not everyone who vents on the internet is in earnest, and many who are earnest in their intentions may not surface on the internet at all. So, in short, we don’t know exactly what we’ve got when we look at these tweets.
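A quick back-of-the-envelope calculation shows why this matters. The numbers below are entirely invented (the study publishes no such figures), but they illustrate the base-rate problem: when the thing being flagged is rare, even a filter that catches most genuine cases will be dominated by false positives.

```python
# Illustrative base-rate arithmetic with invented numbers -- not figures
# from the study. A seemingly accurate flag is still dominated by false
# positives when the condition it targets is rare.
population = 1_100_000      # tweets attributed to teens in the study
prevalence = 0.001          # assume 0.1% reflect genuine suicidal intent
sensitivity = 0.90          # assume the buzzword filter catches 90% of those
false_positive_rate = 0.02  # assume 2% of benign tweets also trip the filter

true_cases = population * prevalence
true_positives = true_cases * sensitivity
false_positives = (population - true_cases) * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"Flagged tweets: {true_positives + false_positives:,.0f}")
print(f"Share reflecting genuine intent: {precision:.1%}")  # roughly 4% under these assumptions
```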

Lastly, although I haven’t read the full methodology, it appears that these suicide buzzwords were hand-picked by the team. In other words, they were selected by human beings, presumably based on what sorts of things they deemed suicidal teens might tweet. Fair enough, but not particularly scientific. In fact, this sort of process can be riddled with guesswork and human bias. How could you possibly know with any certainty, even if instructed by a physician or psychiatrist, exactly which kinds of words or phrases denote true intention and which denote teenage angst?

Hang on a second – you might protest – couldn’t these buzzwords have been chosen by a very clever, objective algorithm? Yet, even if a clever algorithm could somehow ascertain the difference between an “I hate my life” tweeted by a genuinely suicidal teen and an “I hate my life” tweeted by a tired and hormonal teenager (perhaps based on whatever language it was couched in), to make this call it would have to have been trained on data drawn from the tweets of teens who had either a) committed suicide or b) been diagnosed with or treated for depression. To harvest such tweets, the data would have to rely upon more than Twitter alone… all information would have to be cross-referenced with other databases (like medical records) in ways that would undoubtedly de-anonymize.

So, with no guarantees of accuracy, the prospect of physical intervention by social services or similar feels like a scary one – as is the idea of ending up on a watchlist because of a bad day at school. Particularly when we don’t know how this data would be propagated forward…

Critically, I am not trying to say that the project isn’t useful, and SAS Canada are forthcoming in their acknowledgment that ethical conversations need to take place. Nevertheless, this feels like the usual ethical caveat which acts as a disclaimer on work that has already taken place and – one might reasonably assume – is already informing actions, policies, and future projects.

Some of the correlations this work has unveiled clearly have value. For example, there is a 39% overlap between conversations about suicide and conversations about bullying. This is a broad trend and a helpful addition to an important narrative. Where it becomes unhelpful, however, is when it enables and/or legitimizes the external surveillance of all bullying-related conversations on social media and – to carry that thought forward – some kind of ominous, state-sanctioned “follow-up” for selected individuals…