If Virtual Reality is Reality, is Virtual Abuse Just Abuse?


“If you’ve got something that is independent of your mind, which has causal powers, which you can perceive in all these ways, to me you’re a long way toward being real,” the philosopher David Chalmers recently told Prashanth Ramakrishna in an interview for the New York Times. Chalmers invoked remarks by fellow Australian philosopher Samuel Alexander, who said that “to be real is to have causal powers,” and science fiction writer Philip K. Dick, who said that “a real thing is something that doesn’t go away when you stop believing in it.”

Professor Chalmers’ comments were made in reference to the new and increasingly sophisticated world of virtual reality, something he believes has the status of a “subreality” (or similar) within our known physical reality: a place that exists independent of our imaginations, where actions have consequences.

Chalmers draws parallels with our trusted physical reality, which is already so illusory on many levels. After all, the brain has no direct contact with the world and is reliant upon the mediation of our senses. As the mathematician-turned-philosopher points out, science tells us that vivid experiences like color are “just a bunch of wavelengths arising from the physical reflectance properties of objects that produce a certain kind of experience in us.” 

Continue reading

Eurobots: Regulation rules in the European AI scene

The following is a guest post by Erin Green, PhD, a Brussels-based AI ethics and public engagement specialist. For more on the European scene, check out my recent interview with Hill + Knowlton Strategies, “Creating Ethical Rules for AI.”


When it comes to the global AI stage, China and the US consistently grab headlines as their so-called arms race heats up, while countries like Japan and South Korea lead the way in innovation and social receptivity. Europe, though, is taking a slightly different approach – partly by choice, partly by design.

The 28 countries (Brexit pending) that make up the economic and political bloc of the European Union each have a stake in the AI game. Bigger, richer players like the UK (pledging 1,000 places for PhDs in AI) and Germany (€3 billion invested in the coming years) are sinking eye-widening resources into keeping up with the proverbial Joneses. Smaller nations, like Malta and its not-quite-500,000 people, are turning to foreign investment and partnerships to guarantee a spot in the major leagues.

Somewhat independent of these interests, the EU itself is trying to carve out space in terms of regulatory prowess and in bringing coherence to a rather chaotic European AI scene. Think this is a bureaucratic exercise with not much reach or consequence beyond the Berlaymont? Just remember all those GDPR emails that clogged up your inbox sometime around May 25, 2018. The EU has real regulatory reach.

Continue reading

Silicon Valley’s Brain-Meddling: A New Frontier For Tech Gadgetry


Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard Professor of Molecular and Cellular Biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like ‘three-quarters of a mile’, ‘half a mile’, and ‘a quarter of a mile’.

The professor’s response? “I think about three inches.”

Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society examining the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. The worldwide market for neurotech products – defined as “the application of electronics and engineering to the human nervous system” – is projected to reach as much as $13.3 billion by 2022.

Continue reading

Sure, AI can be creative, but it will never possess genius


Sarah Bernhardt plays Hamlet, London 1899

“What’s Hecuba to him, or he to Hecuba, 
That he should weep for her?” 

It is the close of Act II, Scene ii, and Hamlet questions how the performers in a play about the siege of Troy are able to convey such emotion – feel such empathy – for the stranger queen of an ancient city.

The construct here is complex: a play within a play, sparking a key moment of introspection, and ultimately self-doubt. It is no coincidence that in this same work we find perhaps the earliest use of the term “my mind’s eye,” heralding a shift in theatrical focus from traditions of enacted disputes, lovers’ passions, and farce, to a more nuanced kind of drama that issues from psychological turmoil.

Hamlet is generally considered to be a work of creative genius. For many laboring in the creative arts, works like this, and those in its broader category, serve as aspirational benchmarks: indelible reminders of the brilliant outlands of human creativity.

Now, for the first time in our history, humans have a rival in deliberate acts of aesthetic creation. In the midst of the avalanche of artificial intelligence hype comes a new promise – creative AI, here to relieve us of burdensome tasks, including musical, literary, and artistic composition.

Continue reading

Why Ethical Responsibility For Tech Should Extend to Non-Users


Last month, Oscar Schwartz wrote a piece for OneZero with a familiarly provocative headline: “What If an Algorithm Could Predict Your Unborn Child’s Intelligence?” The piece described the work of Genomic Prediction, a US company using machine learning to pick through the genetic data of embryos to establish the risk of health conditions. Given the title of the article, the upshot won’t surprise you. Prospective parents can now use this technology to expand their domain over the “design” of new offspring – and “cognitive ability” is among the features up for selection.

Setting aside the contention over whether intelligence is even heritable, the ethical debate around this sort of pre-screening is hardly new. Gender selection has been a live issue for years now. Way back in 2001, Oxford University bioethicist Julian Savulescu caused controversy by proposing a principle of “Procreative Beneficence,” stating that “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information.” (Opponents of procreative beneficence vociferously pointed out that – regrettably – Savulescu’s principle would likely lead to populations dominated by tall, pale males…)

Continue reading

Will Every Kid Get an Equal Shot at an ‘A’ In the Era of New Tech & AI?


“One child, one teacher, one book, one pen can change the world.”

These are the inspirational words of activist Malala Yousafzai, best known as “the girl who was shot by the Taliban” for championing female education in her home country of Pakistan. This modest, pared-down idea of schooling is cherished by many. There is something noble about it, perhaps because it harkens back to the very roots of intellectual enquiry. No tools and no distractions; just ideas and conversation.

Traditionalists may be reminded of the largely bygone “chalk and talk” methods of teaching, rooted in the belief that students need little more than firm, directed pedagogical instruction to prepare them for the world. Many still reminisce about these relatively uncomplicated teaching techniques, but we should be careful not to misread Yousafzai’s words as prescribing simplicity as the optimal condition for education.

On the contrary, her comments describe a baseline. 

Continue reading

Three Things I Learned: Living with AI (Experts)


Credit: Tanisha Bassan

There is strong evidence to show that subject-specific experts frequently fall short in their informed judgments, particularly when it comes to forecasting.

In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators which found that as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts erroneously attached high probability to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than those of, to quote the experimenter, “a dart-throwing chimp.”

I was reminded of Tetlock’s ensuing book, and of other similar experiments, at the Future Trends Forum in Madrid last month – an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.

Continue reading

AI, Showbiz, and Cause for Concern (x2)


A “Virtual” or “Digital” Human. Credit: Digital Domain

The #AIShowBiz Summit 3.0 – which took place last month – sits apart from the often dizzying array of conferences vying for the attention of Bay Area tech natives. Omnipresent AI themes like “applications for deep learning”, “algorithmic fairness”, and “the future of work” are set aside in favor of rather more dazzling conversations on topics like “digital humans”, “AI and creativity”, and “our augmented intelligence digital future.”

It’s not that there’s anything wrong with the big recurring AI themes. On the contrary, they are front-and-center for very good reason. It’s just that there’s something a little beguiling about this raft of rather more spacey, futuristic conversations, delivered by presenters who are unflinchingly “big picture” while still preserving the necessary practical and technical detail.

Continue reading

Tech for Humans, Part 2: Designing a Human-Centered Future

YouTheData.com is delighted to feature a two-part guest post by Andrew Sears. Andrew is passionate about emerging technologies and the future we’re building with them. He’s driven innovation at companies like IBM, IDEO, and Genesis Mining with a focus on AI, cloud, and blockchain products. He serves as an Advisor at All Tech is Human and will complete his MBA at Duke University in 2020. You can keep up with his work at andrew-sears.com.


In Part 1 of this series, we explored the paradox of human-centered design as it is commonly practiced today: well-intentioned product teams start with the goal of empathizing deeply with human needs and desires, only to end up with a product that is just plain bad for humans.

In many cases, this outcome represents a failure to appreciate the complex web of values, commitments, and needs that define human experience. By understanding their users in reductively economic terms, teams build products that deliver convenience and efficiency at the cost of privacy, intimacy, and emotional wellbeing. But times are changing. The growing popularity of companies like Light, Purism, Brave, and Duck Duck Go signifies a shift in consumer preferences towards tech products that respect their users’ time, attention, privacy, and values.

Product teams now face both a social and an economic imperative to think more critically about the products they put into the world. To change their outcomes, they should start by changing their processes. Fortunately, existing design methodologies can be adapted and augmented to build products that appreciate more fully the human complexity of their users. Here are three changes that product teams should make to put the “human” in human-centered design:

Continue reading