Last month, Oscar Schwartz wrote an article for OneZero with a familiarly provocative headline: “What If an Algorithm Could Predict Your Unborn Child’s Intelligence?” The piece described the work of Genomic Prediction, a US company using machine learning to pick through the genetic data of embryos and establish the risk of health conditions. Given the title of the article, the upshot won’t surprise you. Prospective parents can now use this technology to expand their domain over the “design” of new offspring – and “cognitive ability” is among the features up for selection.
Setting aside the contention over whether intelligence is even heritable, the ethical debate around this sort of pre-screening is hardly new. Gender selection has been a live issue for years now. Way back in 2001, Oxford University bioethicist Julian Savulescu caused controversy by proposing a principle of “Procreative Beneficence” stating that “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information.” (Opponents of procreative beneficence vociferously pointed out that – regrettably – Savulescu’s principle would likely lead to populations dominated by tall, pale males…).
“One child, one teacher, one book, one pen can change the world.”
These are the inspirational words of activist Malala Yousafzai, best known as “the girl who was shot by the Taliban” for championing female education in her home country of Pakistan. This modest, pared-down idea of schooling is cherished by many. There is something noble about it, perhaps because it harkens back to the very roots of intellectual enquiry. No tools and no distractions; just ideas and conversation.
Traditionalists may be reminded of the largely bygone “chalk and talk” methods of teaching, rooted in the belief that students need little more than firm, directed pedagogical instruction to prepare them for the world. Many still reminisce about these relatively uncomplicated teaching techniques, but we should be careful not to misread Yousafzai’s words as prescribing simplicity as the optimal condition for education.
On the contrary, her comments describe a baseline.
There is strong evidence to show that subject-specific experts frequently fall short in their informed judgments, particularly when it comes to forecasting.
In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators and found that, as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts erroneously attached high probability to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than, to quote the experimenter, those of “a dart-throwing chimp.”
I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month, an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.
YouTheData.com is delighted to feature a two-part guest post by Andrew Sears. Andrew is passionate about emerging technologies and the future we’re building with them. He’s driven innovation at companies like IBM, IDEO, and Genesis Mining, with a focus on AI, cloud, and blockchain products. He serves as an Advisor at All Tech is Human and will complete his MBA at Duke University in 2020. You can keep up with his work at andrew-sears.com.
In Part 1 of this series, we explored the paradox of human-centered design as it is commonly practiced today: well-intentioned product teams start with the goal of empathizing deeply with human needs and desires, only to end up with a product that is just plain bad for humans.
In many cases, this outcome represents a failure to appreciate the complex web of values, commitments, and needs that define human experience. By understanding their users in reductively economic terms, teams build products that deliver convenience and efficiency at the cost of privacy, intimacy, and emotional wellbeing. But times are changing. The growing popularity of companies like Light, Purism, Brave, and DuckDuckGo signifies a shift in consumer preferences towards tech products that respect their users’ time, attention, privacy, and values.
Product teams now face both a social and an economic imperative to think more critically about the products they put into the world. To change their outcomes, they should start by changing their processes. Fortunately, existing design methodologies can be adapted and augmented to build products that appreciate more fully the human complexity of their users. Here are three changes that product teams should make to put the “human” in human-centered design:
The way people interact with technology is always evolving. Think about children today – give them a tablet or a smartphone and they have no problem figuring out how to work it. Whilst this is a natural evolution of our relationship with new tech, as it becomes more and more ingrained in our lives it’s important to think about the ethical implications. This isn’t the first time I’ve spoken about ethics and AI – I’ve had guests on the Women in AI Podcast such as Cansu Canca from the AI Ethics Lab and Yasmin J. Erden from St Mary’s University, amongst others, join me to discuss this area, and I even wrote a white paper on the topic, which is on RE•WORK’s digital content hub – so it’s something that’s really sparking conversation at the moment. Fiona McEvoy, the founder of YouTheData.com, joined me on the podcast back in June to discuss the importance of collaboration in AI to ensure it’s ethically sound. Fiona will be joining us at the Deep Learning Summit in San Francisco this week, so in advance of this, I caught up with her to see what she’s been working on…
This article by Fiona J McEvoy (YouTheData.com) was originally posted on All Turtles.
Facebook’s and Google’s new home-based devices are designed to improve the way we live and interact in our personal time. These tech giants, along with vast swathes of smaller AI firms, are looking to upgrade and streamline our domestic experiences including how we share, relax, connect, and shop.
The veritable avalanche of new gizmos vying for a place in our most private spaces constitutes a true home invasion, and while many have voiced concerns about privacy and the security of personal data, fewer have considered what this might mean for the human condition.
Episodes like the Cambridge Analytica scandal opened our eyes to the fact that some of our nearest and dearest technologies had become fully socialized before we truly understood the force of their influence. Consequently, new tools and gadgets coming down the line are being closely examined so that we can begin to uncover any damaging consequences that could manifest 10, 20, or even 100 years from now.
“As we march further into a world in which human-AI distinctions are blurred, we need to ask whether we are comfortable chasing this kind of dupe… Just how important is it that our conversational bots sound exactly like real humans?” Read more.
Read YouTheData @ Slate
What Are Your Augmented Reality Property Rights?
“We were unprepared for many of the consequences of social media. Now is the time to address the many questions raised by the coming ubiquity of augmented reality.” Read more.
The Cambridge Analytica scandal is still reverberating in the media, garnering almost as much daily coverage as when the story broke in The New York Times on March 17. Facebook’s mishandling of user data has catalyzed a collective public reaction of disgust and indignation, and perhaps the most prominent public manifestation of this is the #DeleteFacebook movement. This vocal campaign is urging us to do exactly what it says: To vote with our feet. To boycott. To not just deactivate our Facebook accounts, but to eliminate them entirely.