Dig out your tinfoil hat! Consumer neurotech is here to stay – and it needs more scrutiny

“Thoughts are free and subject to no rule. On them rests the freedom of man, and they tower above the light of nature”

Philippus Aureolus Paracelsus (1493-1541)

This week, Facebook Reality Labs revealed the latest piece of hardware gadgetry that it hopes will introduce eager consumers to a new world of augmented and mixed reality. The wristband is a type of technology known as a neural — or brain-computer — interface, and can read the electrical nerve signals our brain sends to our muscles and interpret them as instructions.

In other words, you don’t have to move. You can just *think* your movements.

You’d be forgiven for wondering if we’ve evolved too far.

A jazzy, high-production video features grinning young San Francisco-type execs describing this new, immersive experience. They’ve invented it, and they’ll be damned if they aren’t going to foist it upon us. “The wrist is a great starting point for us technologically,” one chirps, “because it opens up new and dynamic forms of control.” Quite.

Continue reading

Why Employee Surveillance Is Not Okay


Writing for Aeon last week, Martin Parker, a professor of organization studies at the University of Bristol in the UK, relayed the origins of the word “management”, explaining:

“It is derived from the Italian mano, meaning hand, and its expansion into maneggiare, the activity of handling and training a horse carried out in a maneggio – a riding school. From this form of manual control, the word has expanded into a general activity of training and handling people. It is a word that originates with ideas of control, of a docile or wilful creature that must be subordinated to the instructions of the master.”

Though we might prefer to believe that its meaning has evolved since then to convey something more respectful and collaborative, it is still the case that workplace leaders and managers have mastery over their staff. Promotions, opportunities, hirings and firings — all life-altering events — are subject to their authority. 

It is a mighty responsibility, and abuse of managerial power can have devastating consequences. 

Continue reading

Silicon Valley’s Brain-Meddling: A New Frontier For Tech Gadgetry


Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard Professor of Molecular and Cellular Biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like ‘three-quarters of a mile’, ‘half a mile’, and ‘a quarter of a mile’.

The professor’s response? “I think about three inches.”

Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society which examines the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. It is projected that the worldwide market for neurotech products – defined as “the application of electronics and engineering to the human nervous system” – will reach as much as $13.3 billion by 2022.

Continue reading

AI, Showbiz, and Cause for Concern (x2)


A “Virtual” or “Digital” Human. Credit: Digital Domain

The #AIShowBiz Summit 3.0 – which took place last month – sits apart from the often dizzying array of conferences vying for the attention of Bay Area tech natives. Omnipresent AI themes like “applications for deep learning”, “algorithmic fairness”, and “the future of work” are set aside in preference for rather more dazzling conversations on topics like “digital humans”, “AI and creativity”, and “our augmented intelligence digital future.”

It’s not that there’s anything wrong with the big recurring AI themes. On the contrary, they are front-and-center for very good reason. It’s that there’s something just a little beguiling about this raft of rather more spacey, futuristic conversations delivered by presenters who are unflinchingly “big picture”, while still preserving necessary practical and technical detail.

Continue reading

The Problem with Next Generation Virtual Assistants


It may not seem like it, but there is quite an arms race going on when it comes to interactive AI and virtual assistants. Every tech company wants its offering to be more intuitive…more human. Yet although they’re improving, voice-activated assistants like Alexa and Siri are still pretty clunky, and often underwhelming in their interactions.

This obviously isn’t great if developers want to see them entering the workplace in such a way as to supercharge sales. Continue reading

The Eyes Have It: Three Reasons to be Cautious About Emotion-Tracking Recruitment AI


Predictive, data-driven software is becoming ubiquitous, and as such our reliance upon it is steadily intensifying. The locus of knowledge is becoming external to us again for the first time since the onset of humanism in the 18th century, and we increasingly prefer the forecasts of artificially intelligent systems to our own experience or intuition.

Of all the arenas in which these predictions fascinate and compel our decision-making, perhaps the most prevalent are those that see algorithms foretell the behaviors of our fellow human beings. What they prefer, what they react to, where they go, who they’ll flirt with, whether they’re likely to pay back a loan, or even commit a crime.

Quite simply, we are coming to believe that machines know us better than we can know ourselves. Continue reading

Want Artificial Intelligence that cares about people? Ethical thinking needs to start with the researchers

We’re delighted to feature a guest post from Grainne Faller and Louise Holden of the Magna Carta For Data initiative.

The project was established in 2014 by the Insight Centre for Data Analytics – one of the largest data research centres in Europe – as a statement of its commitment to ethical data research within its labs, and the broader global movement to embed ethics in data science research and development.

Magna Carta For Data 1

A self-driving car is hurtling towards a group of people in the middle of a narrow bridge. Should it drive on, and hit the group? Or should it drive off the bridge, avoiding the group of people but almost certainly killing its passenger? Now, what about if there are three people on the bridge but five people in the car? Can you – should you – design algorithms that will change the way the car reacts depending on these situations?

This is just one of millions of ethical issues faced by researchers of artificial intelligence and big data every hour of every day around the world. Continue reading

Five concerns about government biometric databases and facial recognition


Last Thursday, the Australian government announced its existing “Face Verification Service” would be expanded to include personal images from every Australian driver’s license and photo ID, as well as from every passport and visa. This database will then be used to train facial recognition technology so that law enforcers can identify people within seconds, wherever they may be – on the street, in shopping malls, car parks, train stations, airports, schools, and just about anywhere that surveillance cameras pop up…

Deep learning techniques will allow the algorithm to adapt to new information, meaning that it will have the ability to identify a face obscured by bad lighting or bad angles…and even one that has aged over several years.
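To give a sense of how such systems cope with bad lighting, odd angles, or aging, a common approach (not specific to the Australian scheme) is to map each face image to an embedding vector, so that images of the same person land close together in that space regardless of capture conditions. A minimal sketch, with an entirely hypothetical similarity threshold:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb_a, emb_b, threshold: float = 0.6) -> bool:
    """Compare two embeddings; a well-trained model should place the
    same face under different lighting, angles, or ages close together,
    so their similarity stays above the threshold."""
    return cosine_similarity(np.asarray(emb_a, dtype=float),
                             np.asarray(emb_b, dtype=float)) >= threshold
```

In practice the embeddings come from a deep network trained on millions of labeled face pairs; the threshold here is illustrative and would be tuned against a real system’s error rates.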

This level of penetrative surveillance is obviously unprecedented, and is being heavily criticized by the country’s civil rights activists and law professors who say that Australia’s “patchwork” privacy laws have allowed successive governments to erode citizens’ rights. Nevertheless, politicians argue that personal information abounds on the internet regardless, and that it is more important that measures are taken to deter and ensnare potential terrorists.

However worthy the objective, it is obviously important to challenge such measures by trying to understand their immediate and long-term implications. Here are five glaring concerns that governments mounting similar initiatives should undoubtedly address:

  1. Hacking and security breaches

The more comprehensive a database of information is, the more attractive it becomes to hackers. No doubt the Australian government will hire top security experts as part of this project, but the methods of those intent on breaching security parameters are forever evolving, and it is no joke trying to mount a defense. Back in 2014, a breach of the US Office of Personnel Management (OPM), attributed to Chinese hackers and one of the biggest in history, compromised the personal information of 22 million current and former employees. Then-FBI Director James Comey said that the information included, “every place I’ve ever lived since I was 18, every foreign travel I’ve ever taken, all of my family, their addresses.”

  2. Ineffective unless coverage is total

Using surveillance, citizen data and/or national ID cards to track and monitor people in the hopes of preventing terrorist attacks (the stated intention of the Australian government) really requires total coverage, i.e. monitoring everyone all of the time. We know this because many states with mass (but not total) surveillance programs – like the US – have still been subject to national security breaches, like the Boston Marathon bombing. Security experts are clear that targeted, rather than broad, surveillance is generally the best way to find those planning an attack, as most such subjects are already on the radar of intelligence services. Perhaps Australia’s new approach aspires to some ideal notion of total coverage, but if it isn’t successful at achieving this, there’s a chance that malicious parties could evade detection by a scheme that focuses its attentions on registered citizens.

  3. Chilling effect

Following that last thought through, in the eyes of some, there is a substantial harm inflicted by this biometrically-based surveillance project: it treats all citizens and visitors as potential suspects. This may seem like a rather intangible consequence, but that isn’t necessarily the case. Implementing a facial recognition scheme could, in fact, have a substantial chilling effect. This means that law-abiding citizens may be discouraged from participating in legitimate public acts – for example, protesting the current government administration – for fear of legal repercussions down-the-line. Indeed, there are countless things we may hesitate to do if we have new concerns about instant identifiability…

  4. Mission creep

Though current governments may give their reassurances about the respectful and considered use of this data, who is to say what future administrations may wish to use it for? Might their mission creep beyond national security, and deteriorate to the point at which law enforcement uses facial recognition at will to detain and prosecute individuals for very minor offenses? Might our “personal file” be updated with our known movements so that intelligence services have a comprehensive history of where we’ve been and when? Additionally, might the images used to train and update algorithms start to come from non-official sources like personal social media accounts and other platforms? Undoubtedly, it is already easy to build up a comprehensive file on an individual using publicly available data, but many would argue that governments should require a rationale – or even permission – for doing so.

  5. False positives

As all data scientists know, algorithms working with massive datasets are likely to produce false positives, i.e. a system like the one proposed may implicate perfectly innocent people for crimes they didn’t commit. This has also been identified as a problem with DNA databases. The sheer number of comparisons that have to be run when, for instance, a new threat is identified, dramatically raises the possibility that some of the identifications will be in error. These odds increase if, in the cases of both DNA and facial recognition, two individuals are related. As rights campaigners point out, not only is this potentially harrowing for the individuals concerned, it also presents a harmful distraction for law enforcement and security services who might prioritize seemingly “infallible” technological insight over other useful, but contradictory leads.
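The arithmetic behind this concern is straightforward: even a very accurate matcher, run against every record in a national database, flags many innocent people. A minimal sketch (the numbers below are illustrative assumptions, not the actual error rate or size of any real system):

```python
def expected_false_positives(n_comparisons: int, fp_rate: float) -> float:
    """Expected number of innocent records flagged when a new suspect
    image is compared against every record in the database."""
    return n_comparisons * fp_rate

def prob_at_least_one_error(n_comparisons: int, fp_rate: float) -> float:
    """Probability that at least one comparison produces a false match,
    assuming errors are independent across comparisons."""
    return 1 - (1 - fp_rate) ** n_comparisons
```

For example, a matcher with a 0.1% false-positive rate run against a hypothetical database of 20 million images would be expected to flag around 20,000 innocent people per search, which is exactly the “harmful distraction” the campaigners describe.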

Though apparently most Australians “don’t care” about the launch of this new scheme, it is morally dangerous for governments to take general apathy as a green light for action. Not caring can be a “stand-in” for all sorts of things, and of course most people are busy leading their lives. Where individual citizens may not be concerned to thrash out the real implications of an initiative, politicians and their advisors have an absolute responsibility to do so – even where the reasoning they offer is of little-to-no interest to the general population.