ChatGPT: A Cautionary Tale (With Some Positive Takeaways)

I haven’t posted in a while. In truth, there hasn’t been a lot that’s piqued my interest, and there are now elaborate global mechanisms and a squadron of eager commentators prepped and ready to address the issues I used to point at on this humble blog. In November, I could’ve written something predictable about the impact of ChatGPT, but I felt like I’d already played that tune back in 2020 when I attempted to summarize the intelligent thoughts of some philosophers.

ChatGPT. GPT-3. Potato, potahto.

The most interesting aspects of this kind of AI are yet to come, I don’t doubt that. But I am here to share a cautionary tale that syncs nicely with my ramblings over the last 5 (5??) years. It’s a story about reliance and truth. About the quest for knowledge, and how it almost always involves some level of fumbling around in the dark, but never more so than now.

The Uncanny Valley and the Meaning of Irony

There has been a lot of discussion about how human is too human when it comes to robots, bots, and other types of disembodied AI voices. An interest in this topic led to a frustrating Google search which led me to…you guessed it…ChatGPT.

What did we ever do without it? I’m starting to forget.

Continue reading

Insidious “corrective” image filters allow app creators to dictate beauty standards

Portrait thought to be of Simonetta Cattaneo Vespucci by Sandro Botticelli, c.1480–1485.

In the 15th century, Florentine statesman and all-round bigwig Lorenzo de’ Medici (also modestly known as “Lorenzo The Magnificent”) made some pretty outspoken comments on the looks and deportment of the ideal Italian Renaissance beauty. Despite being described himself as “quite strikingly ugly”, Lorenzo was rather specific about what should be considered desirable, basing his high standards on celebrated noblewoman Simonetta Cattaneo Vespucci. He writes:

“…of an attractive and ideal height; the tone of her skin, white but not pale, fresh but not glowing; her demeanor was grave but not proud, sweet and pleasing, without frivolity or fear. Her eyes were lively and her gaze restrained, without trace of pride or meanness; her body was so well proportioned, that among other women she appeared dignified…in walking and dancing…and in all her movements she was elegant and attractive; her hands were the most beautiful that Nature could create. She dressed in those fashions which suited a noble and gentle lady…” (Commento del magnifico Lorenzo de’ Medici sopra alcuni de’ suoi sonetti)

Clearly beauty standards have evolved since Lorenzo’s time — and thankfully we’re probably less concerned about the restraint of our gaze and the beauty of our hands — but this notion of one common beauty ideal for women, dictated from without, unfortunately persists. And while Renaissance women agonized over achieving Simonetta’s bodily proportions and alabaster skin, their 21st-century counterparts are turning to technological, and even surgical, correction to emulate the new, algorithmically dictated standards for attention-worthy good looks.

Continue reading

From Pandemic to Panopticon: Are We Habituating Aggressive Surveillance?


In her 2019 book The Age of Surveillance Capitalism, Shoshana Zuboff recalls the response to the launch of Google Glass in 2012. She describes public horror, as well as loud protestations from privacy advocates who were deeply concerned that the product’s undetectable recording of people and places threatened to eliminate “a person’s reasonable expectation of privacy and/or anonymity.”

Zuboff describes the product:

Google Glass combined computation, communication, photography, GPS tracking, data retrieval, and audio and video recording capabilities in a wearable format patterned on eyeglasses. The data it gathered — location, audio, video, photos, and other personal information — moved from the device to Google’s servers.

At the time, campaigners warned of a potential chilling effect on the population if Google Glass were to be married with new facial recognition technology, and in 2013 a congressional privacy caucus asked then-Google CEO Larry Page for assurances on privacy safeguards for the product.

Eventually, after visceral public rejection, Google parked Glass in 2015 with a short blog post announcing that it would be working on future versions. And although we never saw the relaunch of a follow-up consumer Glass, the product didn’t disappear into the sunset as some had predicted. Instead, Google took the opportunity to regroup and redirect, unwilling to turn its back on the chance of harvesting valuable swathes of what Zuboff terms “behavioral surplus data”, or to cede this wearables turf to a rival.

Continue reading

Six Technologies Getting us Through the Pandemic


With COVID-19 lockdown restrictions issued across the globe, millions of us have been forced to hunker down “in place”, or severely limit our movements outside of the home. On learning the news, most of us will have reached reflexively for the nearest device — if we didn’t learn it from that device to begin with. Yet mostly we are cinched in a love-hate relationship with the presiding artefacts of our time, and often we resent tech’s power over us.

Nevertheless, new circumstances can breed new attitudes. Having spent the last few years debating whether or not technology will destroy us, we may find that March 2020 is the month that at least partially redeems our faith in technology, by demonstrating how fortunate we are to have some incredibly sophisticated tools in our homes.

For many, these devices are currently the sole portal to the outside world.

Continue reading

Why Ethical Responsibility For Tech Should Extend to Non-Users


Last month, Oscar Schwartz wrote a piece for OneZero with a familiarly provocative headline: “What If an Algorithm Could Predict Your Unborn Child’s Intelligence?”. The piece described the work of Genomic Prediction, a US company using machine learning to pick through the genetic data of embryos to establish the risk of health conditions. Given the title of the article, the upshot won’t surprise you. Prospective parents can now use this technology to expand their domain over the “design” of new offspring – and “cognitive ability” is among the features up for selection.

Setting aside the contention over whether intelligence is even heritable, the ethical debate around this sort of pre-screening is hardly new. Gender selection has been a live issue for years now. Way back in 2001, Oxford University bioethicist Julian Savulescu caused controversy by proposing a principle of “Procreative Beneficence” stating that “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information.” (Opponents of procreative beneficence vociferously pointed out that – regrettably – Savulescu’s principle would likely lead to populations dominated by tall, pale males…).

Continue reading

Will Every Kid Get an Equal Shot at an ‘A’ In the Era of New Tech & AI?


“One child, one teacher, one book, one pen can change the world.”

These are the inspirational words of activist Malala Yousafzai, best known as “the girl who was shot by the Taliban” for championing female education in her home country of Pakistan. This modest, pared-down idea of schooling is cherished by many. There is something noble about it, perhaps because it harkens back to the very roots of intellectual enquiry. No tools and no distractions; just ideas and conversation.

Traditionalists may be reminded of the largely bygone “chalk and talk” methods of teaching, rooted in the belief that students need little more than firm, directed pedagogical instruction to prepare them for the world. Many still reminisce about these relatively uncomplicated teaching techniques, but we should be careful not to misread Yousafzai’s words as prescribing simplicity as the optimal condition for education.

On the contrary, her comments describe a baseline. 

Continue reading

Three Things I Learned: Living with AI (Experts)


Credit: Tanisha Bassan

There is strong evidence to show that subject-specific experts frequently fall short in their informed judgments, particularly when it comes to forecasting.

In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators and found that as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts erroneously attached high probability to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than, to quote the experimenter, those of “a dart-throwing chimp.”

I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month, an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.

Continue reading

Tech for Humans, Part 2: Designing a Human-Centered Future

YouTheData.com is delighted to feature a two-part guest post by Andrew Sears. Andrew is passionate about emerging technologies and the future we’re building with them. He’s driven innovation at companies like IBM, IDEO, and Genesis Mining with a focus on AI, cloud, and blockchain products. He serves as an Advisor at All Tech is Human and will complete his MBA at Duke University in 2020. You can keep up with his work at andrew-sears.com.


In Part 1 of this series, we explored the paradox of human-centered design as it is commonly practiced today: well-intentioned product teams start with the goal of empathizing deeply with human needs and desires, only to end up with a product that is just plain bad for humans.

In many cases, this outcome represents a failure to appreciate the complex web of values, commitments, and needs that define human experience. By understanding their users in reductively economic terms, teams build products that deliver convenience and efficiency at the cost of privacy, intimacy, and emotional wellbeing. But times are changing. The growing popularity of companies like Light, Purism, Brave, and DuckDuckGo signifies a shift in consumer preferences towards tech products that respect their users’ time, attention, privacy, and values.

Product teams now face both a social and an economic imperative to think more critically about the products they put into the world. To change their outcomes, they should start by changing their processes. Fortunately, existing design methodologies can be adapted and augmented to build products that appreciate more fully the human complexity of their users. Here are three changes that product teams should make to put the “human” in human-centered design:

Continue reading

RE•WORK Interview with Fiona J McEvoy, YouTheData.com

This article was originally posted on the RE•WORK blog.

The way people interact with technology is always evolving. Think about children today – give them a tablet or a smartphone and they have literally no problem figuring out how to work it. Whilst this is a natural evolution of our relationships with new tech, as it becomes more and more ingrained in our lives it’s important to think about the ethical implications. This isn’t the first time I’ve spoken about ethics and AI – I’ve had guests on the Women in AI Podcast such as Cansu Canca from the AI Ethics Lab and Yasmin J. Erden from St Mary’s University, amongst others, join me to discuss this area, and I even wrote a white paper on the topic which is on RE•WORK’s digital content hub – so it’s something that’s really causing conversation at the moment. Fiona McEvoy, the founder of YouTheData.com, joined me on the podcast back in June to discuss the importance of collaboration in AI to ensure it’s ethically sound. Fiona will be joining us at the Deep Learning Summit in San Francisco this week, so in advance of this, I caught up with her to see what she’s been working on…

Continue reading

The end of household chores? Be careful what you wish for

This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.


Facebook’s and Google’s new home-based devices are designed to improve the way we live and interact in our personal time. These tech giants, along with vast swathes of smaller AI firms, are looking to upgrade and streamline our domestic experiences including how we share, relax, connect, and shop.

The veritable avalanche of new gizmos vying for a place in our most private spaces constitutes a true home invasion, and while many have voiced concerns about privacy and the security of personal data, fewer have considered what this might mean for the human condition.

Continue reading