The concept of a passport is probably older than you think. Though it might be heavily associated with the early days of international air travel, the documents actually date back to the early 15th century. Indeed, Shakespeare himself has King Henry V use the term in his famous Crispin’s Day speech at the Battle of Agincourt:
“Rather proclaim it, Westmorland, through my host,
That he which hath no stomach to this fight,
Let him depart; his passport shall be made.” (Henry V, Act IV, Scene iii)
In February last year, the world balked as the media reported that a South Korean broadcaster had used virtual reality technology to “reunite” a grieving mother with the 7-year-old child she lost in 2016.
As part of a documentary entitled I Met You, Jang Ji-sung was confronted by an animated and lifelike vision of her daughter Na-yeon as she played in a neighborhood park in her favorite dress. It was an emotionally charged scene, with the avatar asking the tearful woman, “Mom, where have you been? Have you been thinking of me?”
“Always”, the mother replied.
Remarkably, the documentary’s makers saw this scene as “heartwarming”, but many felt that something was badly wrong. Ethicists like Dr. Blay Whitby from the University of Sussex cautioned the media: “We just don’t know the psychological effects of being ‘reunited’ with someone in this way.”
“If you’ve got something that is independent of your mind, which has causal powers, which you can perceive in all these ways, to me you’re a long way toward being real”, the philosopher David Chalmers recently told Prashanth Ramakrishna in an interview for the New York Times. Chalmers invoked remarks by fellow Australian philosopher Samuel Alexander, who said that “to be real is to have causal powers”, and science fiction writer Philip K. Dick, who said that “a real thing is something that doesn’t go away when you stop believing in it.”
Professor Chalmers’ comments were made in reference to the new and increasingly sophisticated world of virtual reality, something he believes has the status of a “subreality” (or similar) within our known physical reality: a place that exists independent of our imaginations, where actions have consequences.
Chalmers draws parallels with our trusted physical reality, which is already so illusory on many levels. After all, the brain has no direct contact with the world and is reliant upon the mediation of our senses. As the mathematician-turned-philosopher points out, science tells us that vivid experiences like color are “just a bunch of wavelengths arising from the physical reflectance properties of objects that produce a certain kind of experience in us.”
Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard Professor of Molecular and Cellular Biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like ‘three-quarters of a mile’, ‘half a mile’, and ‘a quarter of a mile’.
The professor’s response? “I think about three inches.”
Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society which examines the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. It is projected that the worldwide market for neurotech products – defined as “the application of electronics and engineering to the human nervous system” – will reach as much as $13.3 billion by 2022.
There is strong evidence that subject-matter experts frequently fall short in their informed judgments, particularly when it comes to forecasting.
In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators which found that as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts erroneously attached high probability to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than those of, to quote the experimenter, “a dart-throwing chimp.”
I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month; an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.
YouTheData.com is delighted to feature a two-part guest post by Andrew Sears. Andrew is passionate about emerging technologies and the future we’re building with them. He’s driven innovation at companies like IBM, IDEO, and Genesis Mining with a focus on AI, cloud, and blockchain products. He serves as an Advisor at All Tech is Human and will complete his MBA at Duke University in 2020. You can keep up with his work at andrew-sears.com.
In Part 1 of this series, we explored the paradox of human-centered design as it is commonly practiced today: well-intentioned product teams start with the goal of empathizing deeply with human needs and desires, only to end up with a product that is just plain bad for humans.
In many cases, this outcome represents a failure to appreciate the complex web of values, commitments, and needs that define human experience. By understanding their users in reductively economic terms, teams build products that deliver convenience and efficiency at the cost of privacy, intimacy, and emotional wellbeing. But times are changing. The growing popularity of companies like Light, Purism, Brave, and DuckDuckGo signifies a shift in consumer preferences towards tech products that respect their users’ time, attention, privacy, and values.
Product teams now face both a social and an economic imperative to think more critically about the products they put into the world. To change their outcomes, they should start by changing their processes. Fortunately, existing design methodologies can be adapted and augmented to build products that appreciate more fully the human complexity of their users. Here are three changes that product teams should make to put the “human” in human-centered design:
The way people interact with technology is always evolving. Think about children today – give them a tablet or a smartphone and they have no problem figuring out how to work it. Whilst this is a natural evolution of our relationships with new tech, as it becomes more and more ingrained in our lives it’s important to think about the ethical implications. This isn’t the first time I’ve spoken about ethics and AI – I’ve had guests on the Women in AI Podcast such as Cansu Canca from the AI Ethics Lab and Yasmin J. Erden from St Mary’s University amongst others join me to discuss this area, and I even wrote a white paper on the topic which is on RE•WORK’s digital content hub – so it’s something that’s really driving conversation at the moment. Fiona McEvoy, the founder of YouTheData.com, joined me on the podcast back in June to discuss the importance of collaboration in AI to ensure it’s ethically sound. Fiona will be joining us at the Deep Learning Summit in San Francisco this week, so in advance of this, I caught up with her to see what she’s been working on…
This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.
We’re still just a few days into the New Year and all eyes have been trained on Las Vegas, NV. Over the last week or so, the great and the good of the consumer tech industry have been shamelessly touting their wares at CES, each jockeying to make a big noise in a crowded market by showcasing “life-enhancing products” with whizzy new features—like this “intelligent toilet”…
In the organized chaos of nearly 4.5k exhibitors and a staggering 182k delegates, pundits have been working overtime to round-up the best and the rest. At the same time, commentators have been trying to distill core themes and make sage judgments about the tech trajectory of 2019.
In truth, no matter what gadgetry emerges victorious at the end of CES, there will still be some fundamental “meta themes” affecting technology in 2019. And though they may not have secured as many column inches as cutesy robots and 5G this week, these core topics are likely to have more staying power.
This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.
Movie tickets bought, travel booked, customer service problems resolved. Chatbots perform so many tasks that the best ones blend into the background of everyday transactions and are often overlooked. They’re being adopted seamlessly by one industry after the next, but their next widespread application poses unique challenges.
Now healthbots are poised to become the new frontline for triage, replacing human medical professionals as the first point of contact for the sick and the injured.