ChatGPT: A Cautionary Tale (With Some Positive Takeaways)

I haven’t posted in a while. In truth, there hasn’t been a lot that’s piqued my interest, and there are now elaborate global mechanisms and a squadron of eager commentators prepped and ready to address the issues I used to point at on this humble blog. In November, I could’ve written something predictable about the impact of ChatGPT, but I felt like I’d already played that tune back in 2020 when I attempted to summarize the intelligent thoughts of some philosophers.

ChatGPT. GPT-3. Potato. Potato.

The most interesting aspects of this kind of AI are yet to come, I don’t doubt that. But I am here to share a cautionary tale that syncs nicely with my ramblings over the last 5 (5??) years. It’s a story about reliance and truth. About the quest for knowledge, and how it almost always involves some level of fumbling around in the dark, but never more so than now.

The Uncanny Valley and the Meaning of Irony

There has been a lot of discussion about how human is too human when it comes to robots, bots, and other types of disembodied AI voices. An interest in this topic prompted a frustrating Google search, which led me to…you guessed it…ChatGPT.

What did we ever do without it? I’m starting to forget.

Continue reading

AI Ethics for Startups – 7 Practical Steps

Radiologists assessing the pain experienced by osteoarthritis patients typically use a scale called the Kellgren-Lawrence Grade (KLG). The KLG estimates pain levels based on the presence of certain radiographic features, like missing cartilage or bone damage. But data from the National Institutes of Health revealed a disparity between the level of pain as estimated by the KLG and Black patients’ self-reported experience of pain.

The MIT Technology Review explains: “Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.”

But why?

Continue reading

Intentional Harm: Preparing for an Onslaught of AI-Enabled Crime

“The degree to which this diversity of criminal acts may be enhanced by use of AI depends significantly on how embedded they are in a computational environment: robotics is rapidly advancing, but AI is better suited to participate in a bank fraud than a pub brawl. This preference for the digital rather than the physical world is a weak defence though, as contemporary society is profoundly dependent on complex computational networks.”

AI-enabled future crime report

The field of AI ethics has received much (very worthy) attention of late. Once an obscure topic relegated to the sidelines of both tech and ethics conversations, the subject is now at the heart of a lively dialogue among the media, politicians, and even the general public. Everyone now has a perspective on how new technologies can harm human lives, and this can only have a preventative effect in the long term.

But whether it’s algorithmic bias, intrusive surveillance technology, or social engineering by coercive online platforms, the current discourse tends to center on the overzealous, questionable, or destructive use of new tech, rather than outright criminality. Yet it would be foolish to discount the very real prospect of AI being systematically weaponized for unequivocally criminal purposes.

As AI technology advances and proliferates, so do the methods of would-be attackers and fraudsters. And as our world becomes more networked, the attack surface grows and grows.

In short, it is a very exciting time to be a technically-minded crook. 

Continue reading

Silicon Valley’s Brain-Meddling: A New Frontier For Tech Gadgetry


Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard Professor of Molecular and Cellular Biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like ‘three-quarters of a mile’, ‘half a mile’, and ‘a quarter of a mile’.

The professor’s response? “I think about three inches.”

Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society which examines the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. It is projected that the worldwide market for neurotech products – defined as “the application of electronics and engineering to the human nervous system” – will reach as much as $13.3 billion by 2022.

Continue reading

Three Things I Learned: Living with AI (Experts)


Credit: Tanisha Bassan

There is strong evidence to show that subject-specific experts frequently fall short in their informed judgments, particularly when it comes to forecasting.

In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators which found that as their level of expertise rose, their confidence also rose – but not their accuracy. Time and again, Tetlock’s experts erroneously attached high probabilities to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than, to quote the experimenter, “a dart-throwing chimp.”
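(For the technically curious: forecasting accuracy in studies like Tetlock’s is commonly quantified with the Brier score – the mean squared error between stated probabilities and actual outcomes. Here is a minimal Python sketch, with invented numbers, just to show how overconfidence gets punished:

# Illustrative only: a minimal Brier score calculation with invented numbers.
def brier_score(forecasts, outcomes):
    # Mean squared error between forecast probabilities and what happened (1 or 0).
    # Lower is better; always guessing 50% scores 0.25.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# An overconfident pundit who attaches high probability to a rare event...
pundit = [0.9, 0.85, 0.8, 0.9, 0.95]
# ...versus a cautious forecaster who respects the base rate.
cautious = [0.2, 0.15, 0.1, 0.25, 0.2]
outcomes = [0, 0, 1, 0, 0]  # the event occurs just once in five cases

print(brier_score(pundit, outcomes))    # ~0.66: confident, but inaccurate
print(brier_score(cautious, outcomes))  # ~0.20: humbler, and far closer

Confidence makes the pundit sound authoritative; the score shows it makes him wrong.)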

I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month, an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.

Continue reading

RE•WORK Interview with Fiona J McEvoy, YouTheData.com

This article was originally posted on the RE•WORK blog.

The way people interact with technology is always evolving. Think about children today – give them a tablet or a smartphone and they have literally no problem figuring out how to work it. Whilst this is a natural evolution of our relationships with new tech, as it becomes more and more ingrained in our lives it’s important to think about the ethical implications. This isn’t the first time I’ve spoken about ethics and AI – I’ve had guests on the Women in AI Podcast such as Cansu Canca from the AI Ethics Lab and Yasmin J. Erden from St Mary’s University, amongst others, join me to discuss this area, and I even wrote a white paper on the topic which is on RE•WORK’s digital content hub – so it’s something that’s really causing conversation at the moment. Fiona McEvoy, the founder of YouTheData.com, joined me on the podcast back in June to discuss the importance of collaboration in AI to ensure it’s ethically sound. Fiona will be joining us at the Deep Learning Summit in San Francisco this week, so in advance of this, I caught up with her to see what she’s been working on…

Continue reading

Four AI themes to watch out for in 2019

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.

Credit: Jonas Svidras

We’re still just a few days into the New Year and all eyes have been trained on Las Vegas, NV. Over the last week or so, the great and the good of the consumer tech industry have been shamelessly touting their wares at CES, each jockeying to make a big noise in a crowded market by showcasing “life-enhancing products” with whizzy new features, like this “intelligent toilet”.

In the organized chaos of nearly 4.5k exhibitors and a staggering 182k delegates, pundits have been working overtime to round up the best and the rest. At the same time, commentators have been trying to distill core themes and make sage judgments about the tech trajectory of 2019.

In truth, no matter what gadgetry emerges victorious at the end of CES, there will still be some fundamental “meta themes” affecting technology in 2019. And though they may not have secured as many column inches as cutesy robots and 5G this week, these core topics are likely to have more staying power.

Continue reading

Woe is me: a cautionary tale of two chatbots

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


The BBC’s recent test of two popular emotional support chatbots was devastating. Designed to offer advice to stressed, grieving, or otherwise vulnerable children and young adults, the Wysa and Woebot apps failed to detect some pretty explicit indicators of child sexual abuse, drug taking, and eating disorders. Neither chatbot instructed the (thankfully imaginary) victim to seek help, and instead offered up wildly inappropriate pablum.

Inappropriate responses ranged from advising a 12-year-old being forced to have sex to “keep swimming” (accompanied by an animation of a whale), to telling another “it’s nice to know more about you and what makes you happy” when they admitted they were looking forward to “throwing up” in the context of an eating disorder.

Continue reading

Good Gadgets: The rise of socially conscious tech


From algorithmic bias to killer robots, fake news, and the now almost daily prophesying about the dangers of AI, it’s fair to say that tech is under scrutiny.

Episodes like the Cambridge Analytica scandal opened our eyes to the fact that some of our nearest and dearest technologies had become fully socialized before we truly understood the force of their influence. Consequently, new tools and gadgets coming down the line are being closely examined so that we can begin to uncover any damaging consequences that could manifest 10, 20, or even 100 years from now.

Continue reading