Voice-controlled technologies have made steady progress lately. The adaptability and applicability of voice as an interface is beginning to surprise us all. At the Startup Grind Festival last week, a handful of seasoned “voice entrepreneurs” described to eager newbies how sports fans are already calling on virtual assistants to read out their team’s results, and how we’ll all soon be using conversational AI to select our clothes as part of the regular morning routine. And that’s just in the home. There’s also a lot of chatter about how voice control could do some of the heavy lifting in the workplace.
The message is clear: voice is here to stay. We’re tired of scrolling, sorting and reviewing. We’re ready for an army of intelligent servants to do our bidding.
This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.
The BBC’s recent test of two popular emotional support chatbots was devastating. Designed to offer advice to stressed, grieving, or otherwise vulnerable children and young adults, the Wysa and Woebot apps failed to detect some pretty explicit indicators of child sexual abuse, drug taking, and eating disorders. Neither chatbot instructed the (thankfully imaginary) victim to seek help, and instead offered up wildly inappropriate pablum.
Inappropriate responses ranged from advising a 12-year-old being forced to have sex to “keep swimming” (accompanied by an animation of a whale), to telling another user “it’s nice to know more about you and what makes you happy” after they admitted they were looking forward to “throwing up” in the context of an eating disorder.