Silicon Valley’s Brain-Meddling: A New Frontier For Tech Gadgetry


Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard Professor of Molecular and Cellular Biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like ‘three-quarters of a mile’, ‘half a mile’, and ‘a quarter of a mile’.

The professor’s response? “I think about three inches.”

Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society, which examines the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. It is projected that the worldwide market for neurotech products – defined as “the application of electronics and engineering to the human nervous system” – will reach as much as $13.3 billion by 2022.

Continue reading

Three Things I Learned: Living with AI (Experts)


Credit: Tanisha Bassan

There is strong evidence that subject-specific experts frequently fall short in their informed judgments, particularly when it comes to forecasting.

In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators and found that as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts erroneously attached high probability to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than those of, to quote the experimenter, “a dart-throwing chimp.”

I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month, an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.

Continue reading

RE•WORK Interview with Fiona J McEvoy, YouTheData.com

This article was originally posted on the RE•WORK blog.

The way people interact with technology is always evolving. Think about children today – give them a tablet or a smartphone and they have literally no problem in figuring out how to work it. Whilst this is a natural evolution of our relationships with new tech, as it becomes more and more ingrained in our lives it’s important to think about the ethical implications. This isn’t the first time I’ve spoken about ethics and AI – I’ve had guests on the Women in AI Podcast such as Cansu Canca from the AI Ethics Lab and Yasmin J. Erden from St Mary’s University amongst others join me to discuss this area, and I even wrote a white paper on the topic which is on RE•WORK’s digital content hub – so it’s something that’s really causing conversation at the moment. Fiona McEvoy, the founder of YouTheData.com, joined me on the podcast back in June to discuss the importance of collaboration in AI to ensure it’s ethically sound. Fiona will be joining us at the Deep Learning Summit in San Francisco this week, so in advance of this, I caught up with her to see what she’s been working on…

Continue reading

Four AI themes to watch out for in 2019

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

Credit: Jonas Svidras

We’re still just a few days into the New Year and all eyes have been trained on Las Vegas, NV. Over the last week or so, the great and the good of the consumer tech industry have been shamelessly touting their wares at CES, each jockeying to make a big noise in a crowded market by showcasing “life-enhancing products” with whizzy new features—like this “intelligent toilet.”

In the organized chaos of nearly 4.5k exhibitors and a staggering 182k delegates, pundits have been working overtime to round up the best and the rest. At the same time, commentators have been trying to distill core themes and make sage judgments about the tech trajectory of 2019.

In truth, no matter what gadgetry emerges victorious at the end of CES, there will still be some fundamental “meta themes” affecting technology in 2019. And though they may not have secured as many column inches as cutesy robots and 5G this week, these core topics are likely to have more staying power.

Continue reading

Woe is me: a cautionary tale of two chatbots

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


The BBC’s recent test of two popular emotional support chatbots was devastating. Designed to offer advice to stressed, grieving, or otherwise vulnerable children and young adults, the Wysa and Woebot apps failed to detect some pretty explicit indicators of child sexual abuse, drug taking, and eating disorders. Neither chatbot instructed the (thankfully imaginary) victim to seek help; instead, both offered up wildly inappropriate pablum.

Inappropriate responses ranged from advising a 12-year-old being forced to have sex to “keep swimming” (accompanied by an animation of a whale), to telling another that “it’s nice to know more about you and what makes you happy” when they admitted they were looking forward to “throwing up” in the context of an eating disorder.

Continue reading

Good Gadgets: The rise of socially conscious tech


From algorithmic bias to killer robots, fake news, and the now almost daily prophesying about the dangers of AI, it’s fair to say that tech is under scrutiny.

Episodes like the Cambridge Analytica scandal opened our eyes to the fact that some of our nearest and dearest technologies had become fully socialized before we truly understood the full force of their influence. Consequently, new tools and gadgets coming down the line are being closely examined so that we can begin to uncover any damaging consequences that could manifest 10, 20, or even 100 years from now.

Continue reading

You The Data: Our Posts Elsewhere!


Read You The Data @ All Turtles

What The Google Duplex Debate Tells Us

“As we march further into a world in which human-AI distinctions are blurred, we need to ask whether we are comfortable chasing this kind of dupe… Just how important is it that our conversational bots sound exactly like real humans?” Read more.

Read You The Data @ Slate

What Are Your Augmented Reality Property Rights?

“We were unprepared for many of the consequences of social media. Now is the time to address the many questions raised by the coming ubiquity of augmented reality.” Read more. 

 

If you’d like to feature a contributor post on your blog or news site, please contact us here

The Negative Feedback Loop: Technology Needs To Know When It Gets Things Wrong


Cathy O’Neil’s now infamous book, Weapons of Math Destruction, talks about the pernicious feedback loop that can result from contentious “predictive policing” AI. She warns that the models at the heart of this technology can sometimes reflect damaging historical biases learned from police records that are used as training data.

For example, it is perfectly possible for a neighborhood to have a higher number of recorded arrests due to past aggressive or racist policing policies, rather than a particularly high incidence of crime. But the unthinking algorithm doesn’t recognize this untold story and will blindly forge ahead, predicting that the future will mirror the past and recommending the deployment of more police to these “hotspot” areas.

Naturally, the police then make more arrests at these sites, and the net result is that the algorithm receives data that makes the association grow even stronger.
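To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python (not taken from O’Neil’s book; the two neighborhoods, the numbers, and the superlinear “hotspot” patrol-allocation rule are all hypothetical assumptions). Both areas share the same underlying crime rate, but one begins with a biased arrest history; because patrols are sent where arrests were previously recorded, the recorded disparity feeds on itself.

```python
import random

random.seed(1)

TRUE_CRIME_RATE = 0.05                    # identical in both neighborhoods
recorded_arrests = {"A": 120, "B": 60}    # "A"'s history reflects biased policing, not more crime
TOTAL_PATROLS = 100
STOPS_PER_PATROL = 50

for year in range(1, 11):
    # "Hotspot" allocation: patrols are concentrated (superlinearly) on high-arrest
    # areas, standing in for a predictive model that projects the past onto the future.
    weights = {area: count ** 1.5 for area, count in recorded_arrests.items()}
    total_weight = sum(weights.values())
    patrols = {area: round(TOTAL_PATROLS * w / total_weight) for area, w in weights.items()}

    for area, n_patrols in patrols.items():
        # Every stop has the same chance of recording a crime in either area,
        # so new arrests simply track where the patrols were sent.
        stops = n_patrols * STOPS_PER_PATROL
        new_arrests = sum(random.random() < TRUE_CRIME_RATE for _ in range(stops))
        recorded_arrests[area] += new_arrests

    share_a = recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"Year {year}: patrols {patrols}, share of recorded arrests in A = {share_a:.2f}")
```

Run over a few simulated years, area “A”’s share of recorded arrests climbs steadily even though the underlying crime rates never differ – the self-reinforcing loop described above.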

Continue reading

In the future, we could solve all crime. But at what cost?

It’s difficult to read, or even talk, about technology at the moment without that word “ethics” creeping in. How will AI products affect users down the line? Can algorithmic decisions factor in the good of society? How might we reduce the number of fatal road collisions? What tools can we employ to prevent or solve all crime?


Now, let’s just make it clear from the off: these are all entirely honorable motives, and their proponents should be lauded. But sometimes even the drive toward an admirable aim – the prevention of bad consequences – can ignore critical tensions that have been vexing thinkers for years.

Even if we agree that the consequences of an act are of real import, there are still other human values that can – and should – compete with them when we’re deciding on the best course of action. Continue reading