Why Ethical Responsibility For Tech Should Extend to Non-Users


Last month, Oscar Schwartz wrote a piece for OneZero with a familiarly provocative headline: “What If an Algorithm Could Predict Your Unborn Child’s Intelligence?” The piece described the work of Genomic Prediction, a US company using machine learning to pick through the genetic data of embryos to establish the risk of health conditions. Given the title of the article, the upshot won’t surprise you. Prospective parents can now use this technology to expand their domain over the “design” of new offspring – and “cognitive ability” is among the features up for selection.

Setting aside the contention over whether intelligence is even heritable, the ethical debate around this sort of pre-screening is hardly new. Gender selection has been a live issue for years now. Way back in 2001, Oxford University bioethicist Julian Savulescu caused controversy by proposing a principle of “Procreative Beneficence,” stating that “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information.” (Opponents of procreative beneficence vociferously pointed out that – regrettably – Savulescu’s principle would likely lead to populations dominated by tall, pale males…)

Continue reading

Will Every Kid Get an Equal Shot at an ‘A’ In the Era of New Tech & AI?


“One child, one teacher, one book, one pen can change the world.”

These are the inspirational words of activist Malala Yousafzai, best known as “the girl who was shot by the Taliban” for championing female education in her home country of Pakistan. This modest, pared-down idea of schooling is cherished by many. There is something noble about it, perhaps because it harkens back to the very roots of intellectual enquiry. No tools and no distractions; just ideas and conversation.

Traditionalists may be reminded of the largely bygone “chalk and talk” methods of teaching, rooted in the belief that students need little more than firm, directed pedagogical instruction to prepare them for the world. Many still reminisce about these relatively uncomplicated teaching techniques, but we should be careful not to misread Yousafzai’s words as prescribing simplicity as the optimal condition for education.

On the contrary, her comments describe a baseline. 

Continue reading

Three Things I Learned: Living with AI (Experts)


Credit: Tanisha Bassan

There is strong evidence to show that subject-specific experts frequently fall short in their informed judgments – particularly when it comes to forecasting.

In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators which found that as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts erroneously attached high probability to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than those of, to quote the experimenter, “a dart-throwing chimp.”

I was reminded of Tetlock’s subsequent book and other similar experiments at the Future Trends Forum in Madrid last month, an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.

Continue reading

AI, Showbiz, and Cause for Concern (x2)


A “Virtual” or “Digital” Human. Credit: Digital Domain

The #AIShowBiz Summit 3.0 – which took place last month – sits apart from the often dizzying array of conferences vying for the attention of Bay Area tech natives. Omnipresent AI themes like “applications for deep learning”, “algorithmic fairness”, and “the future of work” are set aside in preference for rather more dazzling conversations on topics like “digital humans”, “AI and creativity”, and “our augmented intelligence digital future.”

It’s not that there’s anything wrong with the big recurring AI themes. On the contrary, they are front-and-center for very good reason. It’s that there’s something just a little beguiling about this raft of rather more spacey, futuristic conversations delivered by presenters who are unflinchingly “big picture”, while still preserving necessary practical and technical detail.

Continue reading

Is Emotion AI a Dangerous Deceit?


“How do we get humans to trust in all this AI we’re building?” asked Affectiva CEO Rana El-Kaliouby at the prestigious NYT New Work Summit at Half Moon Bay last week. She had already assumed a consensus that trust-building was the correct way to proceed, and went on to suggest that, rather than equipping users and consumers with the skills and tools to scrutinize AI, we should instead gently coax them into placing more unearned faith in data-driven artifacts.

But how would this be accomplished? Well, Affectiva are “on a mission to humanize technology”, drawing upon machine and deep learning to “understand all things human.” All things human, El-Kaliouby reliably informed us, would include our emotions, our cognitive state, our behaviors, our activities. Note: not to sense, or to tentatively detect, but to understand those things in “the way that humans can.”

Grandiose claims, indeed.

Continue reading

RE•WORK Interview with Fiona J McEvoy, YouTheData.com

This article was originally posted on the RE•WORK blog.

The way people interact with technology is always evolving. Think about children today – give them a tablet or a smartphone and they have literally no problem figuring out how to work it. Whilst this is a natural evolution of our relationship with new tech, as it becomes more and more ingrained in our lives it’s important to think about the ethical implications. This isn’t the first time I’ve spoken about ethics and AI – I’ve had guests on the Women in AI Podcast such as Cansu Canca from the AI Ethics Lab and Yasmin J. Erden from St Mary’s University, amongst others, join me to discuss this area, and I even wrote a white paper on the topic, which is on RE•WORK’s digital content hub – so it’s something that’s really causing conversation at the moment. Fiona McEvoy, the founder of YouTheData.com, joined me on the podcast back in June to discuss the importance of collaboration in AI to ensure it’s ethically sound. Fiona will be joining us at the Deep Learning Summit in San Francisco this week, so in advance of this, I caught up with her to see what she’s been working on…

Continue reading

Four AI themes to watch out for in 2019

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

Credit: Jonas Svidras

We’re still just a few days into the New Year and all eyes have been trained on Las Vegas, NV. Over the last week or so, the great and the good of the consumer tech industry have been shamelessly touting their wares at CES, each jockeying to make a big noise in a crowded market by showcasing “life-enhancing products” with whizzy new features, like this “intelligent toilet.”

In the organized chaos of nearly 4,500 exhibitors and a staggering 182,000 delegates, pundits have been working overtime to round up the best and the rest. At the same time, commentators have been trying to distill core themes and make sage judgments about the tech trajectory of 2019.

In truth, no matter what gadgetry emerges victorious at the end of CES, there will still be some fundamental “meta themes” affecting technology in 2019. And though they may not have secured as many column inches as cutesy robots and 5G this week, these core topics are likely to have more staying power.

Continue reading

Healthbots: the new caregivers

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


Movie tickets bought, travel booked, customer service problems resolved. Chatbots perform so many tasks that the best ones blend into the background of everyday transactions and are often overlooked. They’re being adopted seamlessly by one industry after the next, but their next widespread application poses unique challenges.

Now healthbots are poised to become the new frontline for triage, replacing human medical professionals as the first point of contact for the sick and the injured.

Continue reading

Woe is me: a cautionary tale of two chatbots

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


The BBC’s recent test of two popular emotional support chatbots was devastating. Designed to offer advice to stressed, grieving, or otherwise vulnerable children and young adults, the Wysa and Woebot apps failed to detect some pretty explicit indicators of child sexual abuse, drug taking, and eating disorders. Neither chatbot instructed the (thankfully imaginary) victim to seek help, and instead offered up wildly inappropriate pablum.

Inappropriate responses ranged from advising a 12-year-old being forced to have sex to “keep swimming” (accompanied by an animation of a whale), to telling another “it’s nice to know more about you and what makes you happy” when they admitted they were looking forward to “throwing up” in the context of an eating disorder.

Continue reading

Making AI in our own image is a mistake

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


When the Chinese news agency Xinhua demonstrated an AI anchorperson, the reaction of the internet was predictably voluble. Was this a gimmick or a sign of things to come? Could the Chinese government literally be turning to artificial puppets to control the editorial content of the country’s news channels? Are we careening towards a future where humans and humanoid bots are indistinguishable?

Continue reading