Three Things I Learned: Living with AI (Experts)


Credit: Tanisha Bassan

There is strong evidence that subject-specific experts frequently fall short in their informed judgments, particularly when it comes to forecasting.

In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators which found that as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts erroneously attached high probability to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than, to quote the experimenter, “a dart-throwing chimp.”

I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month, an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.

Continue reading

AI, Showbiz, and Cause for Concern (x2)


A “Virtual” or “Digital” Human. Credit: Digital Domain

The #AIShowBiz Summit 3.0 – which took place last month – sits apart from the often dizzying array of conferences vying for the attention of Bay Area tech natives. Omnipresent AI themes like “applications for deep learning”, “algorithmic fairness”, and “the future of work” are set aside in favor of rather more dazzling conversations on topics like “digital humans”, “AI and creativity”, and “our augmented intelligence digital future.”

It’s not that there’s anything wrong with the big recurring AI themes. On the contrary, they are front-and-center for very good reason. It’s that there’s something just a little beguiling about this raft of rather more spacey, futuristic conversations delivered by presenters who are unflinchingly “big picture”, while still preserving the necessary practical and technical detail.

Continue reading

Tech for Humans, Part 2: Designing a Human-Centered Future

YouTheData.com is delighted to feature a two-part guest post by Andrew Sears. Andrew is passionate about emerging technologies and the future we’re building with them. He’s driven innovation at companies like IBM, IDEO, and Genesis Mining with a focus on AI, cloud, and blockchain products. He serves as an Advisor at All Tech is Human and will complete his MBA at Duke University in 2020. You can keep up with his work at andrew-sears.com.


In Part 1 of this series, we explored the paradox of human-centered design as it is commonly practiced today: well-intentioned product teams start with the goal of empathizing deeply with human needs and desires, only to end up with a product that is just plain bad for humans.

In many cases, this outcome represents a failure to appreciate the complex web of values, commitments, and needs that define human experience. By understanding their users in reductively economic terms, teams build products that deliver convenience and efficiency at the cost of privacy, intimacy, and emotional wellbeing. But times are changing. The growing popularity of companies like Light, Purism, Brave, and Duck Duck Go signifies a shift in consumer preferences towards tech products that respect their users’ time, attention, privacy, and values.

Product teams now face both a social and an economic imperative to think more critically about the products they put into the world. To change their outcomes, they should start by changing their processes. Fortunately, existing design methodologies can be adapted and augmented to build products that more fully appreciate the human complexity of their users. Here are three changes that product teams should make to put the “human” in human-centered design:

Continue reading