Healthbots: the new caregivers

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

Movie tickets bought, travel booked, customer service problems resolved. Chatbots perform so many tasks that the best ones blend into the background of everyday transactions and are often overlooked. They’re being adopted seamlessly by one industry after the next, but their next widespread application poses unique challenges.

Now healthbots are poised to become the new frontline for triage, replacing human medical professionals as the first point of contact for the sick and the injured.

Continue reading

Woe is me: a cautionary tale of two chatbots

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

The BBC’s recent test of two popular emotional support chatbots was devastating. Designed to offer advice to stressed, grieving, or otherwise vulnerable children and young adults, the Wysa and Woebot apps failed to detect some pretty explicit indicators of child sexual abuse, drug-taking, and eating disorders. Neither chatbot instructed the (thankfully imaginary) victim to seek help, and instead offered up wildly inappropriate pablum.

Inappropriate responses ranged from advising a 12-year-old being forced to have sex to “keep swimming” (accompanied by an animation of a whale), to telling another “it’s nice to know more about you and what makes you happy” when they admitted they were looking forward to “throwing up” in the context of an eating disorder.

Continue reading

Making AI in our own image is a mistake

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

When the Chinese news agency Xinhua demonstrated an AI anchorperson, the reaction of the internet was predictably voluble. Was this a gimmick or a sign of things to come? Could the Chinese government literally be turning to artificial puppets to control the editorial content of the country’s news channels? Are we careening towards a future where humans and humanoid bots are indistinguishable?

Continue reading

AI needs cooperation, not an arms race

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

Writing in the New York Times recently, venture capitalist Kai-Fu Lee signaled an important, oncoming change in the way we think about artificial intelligence. We are graduating, he cautioned, from an age of discovery and vision into a more practical era of implementation.

Lee is promoting his new book, titled A.I. Superpowers: China, Silicon Valley, and the New World Order, and he suggests that this transition from lab to launchpad may naturally privilege Chinese advantages—like data abundance and government investment—above the research capabilities and “freewheeling intellectual environment” of the U.S.

Continue reading

Peer pressure: An unintended consequence of AI

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

Last winter, Kylie Jenner tweeted that she had stopped using Snapchat, and almost immediately the company’s shares dropped six percent, losing $1.3 billion in value. Her seemingly innocent comments had led investors to believe that the 20-year-old’s 25 million followers would do the same, and that the knock-on effect would seal the social media app’s fate as a “has-been” among its key demographic of younger women.

This astonishing event demonstrates in technicolor how the notion of influence is evolving and taking on a new significance. In the age of technology, influence is still associated with power, but it is no longer the exclusive preserve of “the Powerful”: those in recognized positions of authority, like bankers, lawyers, or politicians.

Continue reading

AI that understands how you really feel

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

The way we interact with technology keeps changing. Of late, many more of us are using speech and gesture to give instructions to our devices, and it’s actually starting to feel natural. We tell Alexa to turn the lights off, silence our smart watches by smothering them with our palms, and unlock our phones with a look. For this to work as seamlessly as it does, our devices have to watch and listen to us attentively. Pretty soon they could begin to understand and anticipate our emotional needs, too.

The move towards what’s been called implicit understanding – in contrast with explicit interaction – will be facilitated by technologies like emotion-tracking AI: technology that uses cues from our vocal tone, facial expressions, and other micro-movements to determine our mood and, from there, our needs. According to researchers at Gartner, very soon our fridge will be able to suggest food to match our feelings, and research VP Annette Zimmerman has even claimed that, “By 2022, your personal device will know more about your emotional state than your own family.”

Continue reading

Responsibility & AI: ‘We All Have A Role When It Comes To Shaping The Future’

This article was originally written for the RE•WORK guest blog. This week, YouTheData.com founder Fiona McEvoy will speak on a panel at the San Francisco Summit.

The world is changing, and that change is being driven by new and emerging technologies. They are altering the way we behave in our homes, work spaces, public places, and vehicles, and with respect to our bodies, pastimes, and associates. All the while, we are creating new dependencies and placing increasing amounts of faith in the engineers, programmers, and designers responsible for these systems and platforms.

As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?

Continue reading

The Negative Feedback Loop: Technology Needs To Know When It Gets Things Wrong

Cathy O’Neil’s now-infamous book, Weapons of Math Destruction, talks about the pernicious feedback loop that can result from contentious “predictive policing” AI. She warns that the models at the heart of this technology can sometimes reflect damaging historical biases learned from police records that are used as training data.

For example, it is perfectly possible for a neighborhood to have a higher number of recorded arrests due to past aggressive or racist policing policies, rather than a particularly high incidence of crime. But the unthinking algorithm doesn’t recognize this untold story and will blindly forge ahead, predicting that the future will mirror the past and recommending the deployment of more police to these “hotspot” areas.

Naturally, the police then make more arrests at these sites, and the net result is that the algorithm receives data that makes the association grow even stronger.
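
To make the mechanics of the loop concrete, here is a minimal Python sketch. The numbers, the proportional-deployment rule, and the “encounters per patrol” figure are all assumptions invented for illustration; they are not taken from O’Neil’s book or from any real predictive-policing system.

```python
import random

# A minimal, purely illustrative sketch of the feedback loop described above.
# Two neighborhoods share the SAME true crime rate, but "A" starts with more
# recorded arrests because it was policed more heavily in the past (assumed).

true_crime_rate = 0.05                    # identical underlying rate (assumed)
recorded_arrests = {"A": 300, "B": 100}   # biased historical record (assumed)
total_patrols = 100                       # patrols to allocate each year (assumed)

random.seed(0)

for year in range(1, 6):
    total = sum(recorded_arrests.values())
    new_arrests = {}
    for hood, past in recorded_arrests.items():
        # "Prediction": send patrols in proportion to past recorded arrests.
        patrols = round(total_patrols * past / total)
        # More patrols mean more of the same underlying crime gets observed,
        # so the record grows fastest where the patrols already are.
        encounters = patrols * 10          # assumed: 10 potential encounters per patrol
        new_arrests[hood] = sum(random.random() < true_crime_rate
                                for _ in range(encounters))
    for hood in recorded_arrests:
        recorded_arrests[hood] += new_arrests[hood]
    print(f"Year {year}: recorded arrests {recorded_arrests}")

# Despite identical true crime rates, neighborhood A's head start is never
# corrected: deployment follows the record, the record follows deployment,
# and the absolute gap between A and B keeps widening.
```

Even in this toy setup the disparity is self-sustaining, because the model only ever sees what the patrols it recommended went out and recorded.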

Continue reading

The Problem with Next Generation Virtual Assistants

It may not seem like it, but there is quite an arms race going on when it comes to interactive AI and virtual assistants. Every tech company wants their offering to be more intuitive…more human. Yet although they’re improving, voice-activated assistants like Alexa and Siri are still pretty clunky, and often underwhelming in their interactions.

This obviously isn’t great if developers want to see these assistants enter the workplace in a way that supercharges sales.

Continue reading

The Eyes Have It: Three Reasons to be Cautious About Emotion-Tracking Recruitment AI

Predictive, data-driven software is becoming ubiquitous, and as such our reliance upon it is steadily intensifying. The locus of knowledge is becoming external to us again for the first time since the onset of humanism in the 18th century, and we increasingly prefer the forecasts of artificially intelligent systems to our own experience or intuition.

Of all the arenas in which these predictions fascinate and compel our decision-making, perhaps the most prevalent are those that see algorithms foretell the behaviors of our fellow human beings: what they prefer, what they react to, where they go, who they’ll flirt with, whether they’re likely to pay back a loan, or even commit a crime.

Quite simply, we are coming to believe that machines know us better than we can know ourselves.

Continue reading