The Problem with Next Generation Virtual Assistants


It may not seem like it, but there is quite an arms race going on when it comes to interactive AI and virtual assistants. Every tech company wants its offering to be more intuitive…more human. Yet although they’re improving, voice-activated assistants like Alexa and Siri are still pretty clunky, and often underwhelming in their interactions.

This obviously isn’t great if developers want to see them enter the workplace in a way that supercharges sales.

The Eyes Have It: Three Reasons to be Cautious About Emotion-Tracking Recruitment AI


Predictive, data-driven software is becoming ubiquitous, and as such our reliance upon it is steadily intensifying. The locus of knowledge is becoming external to us again for the first time since the onset of humanism in the 18th century, and we increasingly prefer the forecasts of artificially intelligent systems to our own experience or intuition.

Of all the arenas in which these predictions fascinate and compel our decision-making, perhaps the most prevalent are those that see algorithms foretell the behaviors of our fellow human beings: what they prefer, what they react to, where they go, who they’ll flirt with, whether they’re likely to pay back a loan, or even commit a crime.

Quite simply, we are coming to believe that machines know us better than we can know ourselves.

Want Artificial Intelligence that cares about people? Ethical thinking needs to start with the researchers

We’re delighted to feature a guest post from Grainne Faller and Louise Holden of the Magna Carta For Data initiative.

The project was established in 2014 by the Insight Centre for Data Analytics – one of the largest data research centres in Europe – as a statement of its commitment to ethical data research within its own labs, and to the broader global movement to embed ethics in data science research and development.


A self-driving car is hurtling towards a group of people in the middle of a narrow bridge. Should it drive on, and hit the group? Or should it drive off the bridge, avoiding the group but almost certainly killing its passenger? Now, what if there are three people on the bridge but five people in the car? Can you – should you – design algorithms that change the way the car reacts depending on these situations?

This is just one of millions of ethical issues faced by researchers of artificial intelligence and big data every hour of every day around the world.

Five concerns about government biometric databases and facial recognition


Last Thursday, the Australian government announced that its existing “Face Verification Service” would be expanded to include personal images from every Australian driver’s license and photo ID, as well as from every passport and visa. This database will then be used to train facial recognition technology so that law enforcers can identify people within seconds, wherever they may be – on the street, in shopping malls, car parks, train stations, airports, schools, and just about anywhere that surveillance cameras pop up…

Deep learning techniques will allow the algorithm to adapt to new information, meaning it will be able to identify a face obscured by bad lighting or bad angles…and even one that has aged over several years.
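How might a system recognize the same face across changes in lighting, angle, or age? Broadly, deep face recognition models map each photo to a numeric embedding and compare embeddings rather than raw pixels. The sketch below is a minimal illustration of that comparison step only, not a description of the Australian system: the 128-dimensional vectors, the similarity threshold, and the random stand-in “photos” are all assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (vectors from a deep network)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when the embeddings are similar enough.

    The threshold here is an illustrative, made-up value; real deployments
    tune it against a target false-match rate.
    """
    return cosine_similarity(probe, enrolled) >= threshold

# Toy stand-ins for embeddings: one from an enrolled licence photo, and one
# from a noisier CCTV frame of the same person (random vectors, purely for
# illustration; a real system would compute these with a trained model).
rng = np.random.default_rng(0)
licence_embedding = rng.normal(size=128)
cctv_embedding = licence_embedding + rng.normal(scale=0.3, size=128)

print(is_same_person(cctv_embedding, licence_embedding))  # True for this small perturbation
```

The point of the embedding approach is that moderate changes to the input photo only nudge the vector, so the match survives noise that would defeat a pixel-by-pixel comparison.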

This level of penetrative surveillance is obviously unprecedented, and it is being heavily criticized by the country’s civil rights activists and law professors, who say that Australia’s “patchwork” privacy laws have allowed successive governments to erode citizens’ rights. Nevertheless, politicians argue that personal information already abounds on the internet, and that it is more important to take measures that deter and ensnare potential terrorists.

However worthy the objective, it is obviously important to challenge such measures by trying to understand their immediate and long-term implications. Here are five glaring concerns that governments mounting similar initiatives should undoubtedly address:

  1. Hacking and security breaches

The more comprehensive a database of information is, the more attractive it becomes to hackers. No doubt the Australian government will hire top security experts as part of this project, but the methods of those intent on breaching security perimeters are forever evolving, and it is no joke trying to mount a defense. Back in 2014, hackers believed to be based in China breached the US Office of Personnel Management (OPM), compromising the personal information of 22 million current and former federal employees in one of the biggest attacks in history. Then-FBI Director James Comey said the stolen information included “every place I’ve ever lived since I was 18, every foreign travel I’ve ever taken, all of my family, their addresses.”

  2. Ineffective unless coverage is total

Using surveillance, citizen data and/or national ID cards to track and monitor people in the hope of preventing terrorist attacks (the stated intention of the Australian government) really requires total coverage, i.e. monitoring everyone all of the time. We know this because many states with mass (but not total) surveillance programs – like the US – have still suffered national security failures, like the Boston Marathon bombing. Security experts are clear that targeted, rather than broad, surveillance is generally the best way to find those planning an attack, as most such plotters are already on the radar of intelligence services. Perhaps Australia’s new approach aspires to some ideal notion of total coverage, but if it falls short, there’s a chance that malicious parties could evade detection by a scheme that focuses its attention on registered citizens.

  3. Chilling effect

Following that last thought through, in the eyes of some there is a substantial harm inflicted by this biometric surveillance project: it treats all citizens and visitors as potential suspects. This may seem like a rather intangible consequence, but that isn’t necessarily the case. Implementing a facial recognition scheme could, in fact, have a substantial chilling effect, meaning that law-abiding citizens may be discouraged from participating in legitimate public acts – for example, protesting against the current government administration – for fear of legal repercussions down the line. Indeed, there are countless things we may hesitate to do if we have new concerns about instant identifiability…

  4. Mission creep

Though current governments may give their reassurances about the respectful and considered use of this data, who is to say what future administrations may wish to use it for? Might their mission creep beyond national security, and deteriorate to the point at which law enforcement uses facial recognition at will to detain and prosecute individuals for very minor offenses? Might our “personal file” be updated with our known movements, so that intelligence services have a comprehensive history of where we’ve been and when? Additionally, might the images used to train and update algorithms start to come from non-official sources like personal social media accounts and other platforms? Undoubtedly, it is already easy to build up a comprehensive file on an individual using publicly available data, but many would argue that governments should require a rationale – or even permission – for doing so.

  5. False positives

As all data scientists know, algorithms working with massive datasets are likely to produce false positives; that is, a system such as the one proposed may implicate perfectly innocent people in crimes they didn’t commit. This has also been identified as a problem with DNA databases. The sheer number of comparisons that have to be run when, for instance, a new threat is identified dramatically raises the possibility that some of the identifications will be in error. These odds increase if, in the case of both DNA and facial recognition, two individuals are related. As rights campaigners point out, not only is this potentially harrowing for the individuals concerned, it also presents a harmful distraction for law enforcement and security services, who might prioritize seemingly “infallible” technological insight over other useful but contradictory leads.
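To see why scale matters here, it helps to run some illustrative numbers. The false-match rate and database size below are assumptions chosen purely for the sake of the example, not figures from the Australian scheme, but the arithmetic shows how even a very accurate matcher can flag large numbers of innocent people in a single search.

```python
# Back-of-the-envelope calculation with assumed figures (not from the article):
false_match_rate = 1e-4       # assumed: 1 incorrect match per 10,000 comparisons
database_size = 16_000_000    # assumed: roughly one enrolled face per licensed driver

expected_false_matches = false_match_rate * database_size
print(f"Expected false matches per search: {expected_false_matches:,.0f}")
# -> Expected false matches per search: 1,600
# Every one of those people is innocent, yet each becomes a lead that
# investigators may have to rule out, or worse, act upon.
```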

Though apparently most Australians “don’t care” about the launch of this new scheme, it is morally dangerous for governments to take general apathy as a green light for action. Not caring can be a “stand-in” for all sorts of things, and of course most people are busy leading their lives. Where individual citizens may not be inclined to thrash out the real implications of an initiative, politicians and their advisors have an absolute responsibility to do so – even where the reasoning they offer is of little to no interest to the general population.