The Evolution of Human-Centered Tech @ Last Month’s CES 2020

An edited version of this article appeared on the H+K Strategies website earlier this month.

The dust has now settled after the madness of the world’s biggest annual tech fest, the Consumer Electronics Show (CES) in Las Vegas, NV. Since the show’s kick-off in early January, a parade of weird and wonderful new devices has dominated tech news and bylines: from lab-produced pork to RollBot, Charmin’s robotic savior for those “stranded on the commode without a roll.”

The event itself really isn’t for the faint-hearted. It’s easy to feel overwhelmed by the sheer volume of companies vying to embed their (often ridiculous) tech gadgetry into our lives – both at work and at play. There is, of course, lots of money to be made from finding that elusive sweet spot: the point at which problem-solving, convenience, and affordability converge.


AI, Showbiz, and Cause for Concern (x2)


A “Virtual” or “Digital” Human. Credit: Digital Domain

The #AIShowBiz Summit 3.0 – which took place last month – sits apart from the often dizzying array of conferences vying for the attention of Bay Area tech natives. Omnipresent AI themes like “applications for deep learning”, “algorithmic fairness”, and “the future of work” are set aside in favor of rather more dazzling conversations on topics like “digital humans”, “AI and creativity”, and “our augmented intelligence digital future.”

It’s not that there’s anything wrong with the big recurring AI themes. On the contrary, they are front-and-center for very good reason. It’s that there’s something just a little beguiling about this raft of rather more spacey, futuristic conversations, delivered by presenters who are unflinchingly “big picture” while still preserving the necessary practical and technical detail.


Five concerns about government biometric databases and facial recognition


Last Thursday, the Australian government announced that its existing “Face Verification Service” would be expanded to include personal images from every Australian driver’s license and photo ID, as well as from every passport and visa. This database will then be used to train facial recognition technology so that law enforcers can identify people within seconds, wherever they may be – on the street, in shopping malls, car parks, train stations, airports, schools, and just about anywhere that surveillance cameras pop up…

Deep learning techniques will allow the algorithm to adapt to new information, meaning that it will have the ability to identify a face obscured by bad lighting or bad angles…and even one that has aged over several years.
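
How such a system might actually match faces has not been spelled out, but modern face recognition pipelines typically reduce each image to an embedding vector and compare vectors rather than pixels; robustness to lighting, angles, and aging is a property of the embedding model. The Python sketch below is a minimal illustration under that assumption – the gallery, the 128-dimensional embeddings, and the 0.6 threshold are hypothetical stand-ins, not details of the Australian system.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two face-embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_matches(probe, gallery, threshold=0.6):
        """Return (ID, score) pairs for enrolled faces similar to the probe.

        `gallery` maps license/passport IDs to embeddings computed at
        enrollment; `probe` is the embedding of a fresh camera capture.
        Tolerance of bad lighting, odd angles, and aging comes from the
        embedding model, which is trained so those factors barely move
        a person's vector.
        """
        scores = [(pid, cosine_similarity(probe, emb)) for pid, emb in gallery.items()]
        return sorted([s for s in scores if s[1] >= threshold], key=lambda kv: -kv[1])

    # Illustrative usage, with random vectors standing in for real embeddings.
    rng = np.random.default_rng(0)
    gallery = {f"ID-{i:04d}": rng.normal(size=128) for i in range(1000)}
    probe = gallery["ID-0042"] + rng.normal(scale=0.1, size=128)  # same face, new photo
    print(rank_matches(probe, gallery)[:3])  # "ID-0042" should top the list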

This level of pervasive surveillance is unprecedented, and it is being heavily criticized by the country’s civil rights activists and law professors, who say that Australia’s “patchwork” privacy laws have allowed successive governments to erode citizens’ rights. Politicians counter that personal information already abounds on the internet, and that it is more important to take measures that deter and ensnare potential terrorists.

However worthy the objective, it is important to challenge such measures by trying to understand their immediate and long-term implications. Here are five glaring concerns that governments mounting similar initiatives should undoubtedly address:

  1. Hacking and security breaches

The more comprehensive a database of information is, the more attractive it becomes to hackers. No doubt the Australian government will hire top security experts as part of this project, but the methods of those intent on breaching security perimeters are forever evolving, and mounting a defense is no small task. In 2015, the US Office of Personnel Management (OPM) disclosed that a breach attributed to Chinese hackers, one of the biggest in history, had compromised the personal information of some 22 million current and former federal employees. Then-FBI Director James Comey said that the stolen information included “every place I’ve ever lived since I was 18, every foreign travel I’ve ever taken, all of my family, their addresses.”

  2. Ineffective unless coverage is total

Using surveillance, citizen data, and/or national ID cards to track and monitor people in the hope of preventing terrorist attacks (the stated intention of the Australian government) really requires total coverage, i.e. monitoring everyone all of the time. We know this because many states with mass (but not total) surveillance programs – like the US – have still suffered attacks, like the Boston Marathon bombing. Security experts are clear that targeted, rather than broad, surveillance is generally the best way to find those planning an attack, as most such subjects are already on the radar of intelligence services. Perhaps Australia’s new approach aspires to some ideal notion of total coverage, but if it falls short, malicious parties could evade detection by a scheme that focuses its attention on registered citizens.

  3. Chilling effect

Following that last thought through: in the eyes of some, this biometric surveillance project inflicts a substantial harm by treating all citizens and visitors as potential suspects. This may seem like a rather intangible consequence, but that isn’t necessarily the case. Implementing a facial recognition scheme could, in fact, have a substantial chilling effect, meaning that law-abiding citizens may be discouraged from participating in legitimate public acts – for example, protesting the current government administration – for fear of legal repercussions down the line. Indeed, there are countless things we may hesitate to do if we have new concerns about instant identifiability…

  4. Mission creep

Though current governments may give their reassurances about the respectful and considered use of this data, who is to say what future administrations may wish to use it for? Might their mission creep beyond national security, deteriorating to the point at which law enforcement uses facial recognition at will to detain and prosecute individuals for very minor offenses? Might our “personal file” be updated with our known movements, so that intelligence services hold a comprehensive history of where we’ve been and when? Might the images used to train and update the algorithms start to come from non-official sources, like personal social media accounts and other platforms? It is admittedly already easy to build up a comprehensive file on an individual using publicly available data, but many would argue that governments should require a rationale – or even permission – for doing so.

  5. False positives

As all data scientists know, algorithms working over massive datasets are bound to produce false positives: a system like the one proposed may implicate perfectly innocent people for crimes they didn’t commit. The same problem has been identified with DNA databases. The sheer number of comparisons that must be run when, for instance, a new threat is identified dramatically raises the probability that some of the identifications will be in error, and the odds worsen, for both DNA and facial recognition, when two individuals are related. As rights campaigners point out, not only is this potentially harrowing for the individuals concerned, it also presents a harmful distraction for law enforcement and security services, who might prioritize seemingly “infallible” technological insight over other useful, but contradictory, leads.
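
A rough back-of-the-envelope calculation makes the scale of the problem concrete. The figures below are illustrative assumptions, not statistics from the Australian scheme:

    # Why false positives swamp one-to-many searches at national scale.
    # Both figures are illustrative assumptions, not published system stats.
    database_size = 16_000_000   # rough order of magnitude for a national ID database
    false_match_rate = 0.001     # 0.1% chance an unrelated pair is called a match

    expected_false_matches = database_size * false_match_rate
    print(f"Expected false matches per search: {expected_false_matches:,.0f}")
    # -> Expected false matches per search: 16,000

Even a seemingly tiny error rate, multiplied across millions of comparisons, leaves investigators with thousands of innocent “hits” to rule out before any genuine lead surfaces.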

Though apparently most Australians “don’t care” about the launch of this new scheme, it is morally dangerous for governments to take general apathy as a green light for action. Not caring can stand in for all sorts of things, and of course most people are busy leading their lives. While individual citizens may not be inclined to thrash out the real implications of an initiative, politicians and their advisors have an absolute responsibility to do so – even where the reasoning they offer is of little to no interest to the general population.