AI Ethics for Startups – 7 Practical Steps

Radiologists assessing the severity of osteoarthritis – and, by extension, the pain it causes patients – typically use a scale called the Kellgren-Lawrence Grade (KLG). The KLG infers pain levels from the presence of certain radiographic features, like missing cartilage or bone damage. But data from the National Institutes of Health revealed a disparity between the level of pain predicted by the KLG and Black patients’ self-reported experience of pain.

The MIT Technology Review explains: “Black patients who show the same amount of missing cartilage as white patients self-report higher levels of pain.”
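To make the kind of gap at issue concrete, here is a minimal sketch – with invented numbers, and assuming pandas is available – of how one might compare self-reported pain across patient groups at a fixed KLG grade:

```python
# Illustrative only: hypothetical records pairing a KLG severity grade (0-4)
# with self-reported pain (0-100). None of these numbers are real data.
import pandas as pd

records = pd.DataFrame({
    "group": ["Black", "Black", "Black", "white", "white", "white"],
    "klg":   [2, 2, 3, 2, 2, 3],
    "pain":  [62, 70, 78, 45, 50, 61],
})

# Average self-reported pain by group at each radiographic grade; a gap
# between the columns at a fixed KLG is the kind of disparity described above.
by_grade = records.groupby(["klg", "group"])["pain"].mean().unstack()
print(by_grade)
```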

But why?

Continue reading

Playing to the Algorithm: Are We Training the Machines or…?

It is our human inclination to want to look good. Our desire to impress keeps the fashion industry alive; it motivates many of us to work or study hard; and there are billions of dollars to be made from our desperation to look visibly fit and healthy. So it should come as no surprise that, as algorithms hold more and more sway over decision-making and the conferral of status (e.g. via credit or hiring decisions), many of us are keen to put our best foot forward and play to their discernible preferences.

This is certainly true of those in business, as discovered by the authors of the working paper “How to Talk When a Machine Is Listening: Corporate Disclosure in the Age of AI”. An article posted by the National Bureau of Economic Research describes the study’s findings:

Continue reading

Here Are Five Reasons Consumers Won’t Buy Your Smart Home Device

This blog was originally posted on the Hill + Knowlton Strategies website.

The Aware Home

In 2000, a group of researchers at Georgia Tech launched a project they called “The Aware Home.” The collective of computer scientists and engineers built a three-story experimental home with the intent of producing an environment that was “capable of knowing information about itself and the whereabouts and activities of its inhabitants.” The team installed a vast network of “context aware sensors” throughout the house and on wearable computers carried by the home’s occupants. The hope was to establish an entirely new domain of knowledge – one that would create efficiencies in home management, improve health and well-being, and provide support for groups like the elderly.

Continue reading

Silicon Valley’s Brain-Meddling: A New Frontier For Tech Gadgetry


Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard Professor of Molecular and Cellular Biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like ‘three-quarters of a mile’, ‘half a mile’, and ‘a quarter of a mile’.

The professor’s response? “I think about three inches.”

Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society which examines the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. The worldwide market for neurotech products – defined as “the application of electronics and engineering to the human nervous system” – is projected to reach as much as $13.3 billion by 2022.

Continue reading

Why Ethical Responsibility For Tech Should Extend to Non-Users


Last month, Oscar Schwartz wrote a piece for OneZero with a familiarly provocative headline: “What If an Algorithm Could Predict Your Unborn Child’s Intelligence?” The piece described the work of Genomic Prediction, a US company using machine learning to pick through the genetic data of embryos and establish the risk of health conditions. Given the title of the article, the upshot won’t surprise you: prospective parents can now use this technology to expand their domain over the “design” of new offspring – and “cognitive ability” is among the features up for selection.
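For a rough, hedged sense of the mechanics, risk predictions of this kind are commonly built on polygenic scores: weighted sums over many genetic variants. The sketch below is illustrative only – the variant IDs, weights, and genotypes are invented, not drawn from any real model:

```python
# A toy polygenic score: each variant contributes its (invented) effect weight
# multiplied by the embryo's allele count for that variant (0, 1, or 2).
variant_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_score(genotype: dict) -> float:
    """Weighted sum of allele counts; higher implies higher predicted risk."""
    return sum(w * genotype.get(v, 0) for v, w in variant_weights.items())

embryo_a = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
embryo_b = {"rs0001": 0, "rs0002": 2, "rs0003": 1}
print(polygenic_score(embryo_a), polygenic_score(embryo_b))  # 0.19 vs 0.2
```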

Setting aside the contention over whether intelligence is even heritable, the ethical debate around this sort of pre-screening is hardly new. Gender selection has been a live issue for years now. Way back in 2001, Oxford University bioethicist Julian Savulescu caused controversy by proposing a principle of “Procreative Beneficence” stating that “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information.” (Opponents of procreative beneficence vociferously pointed out that – regrettably – Savulescu’s principle would likely lead to populations dominated by tall, pale males…)

Continue reading

Three Things I Learned: Living with AI (Experts)

Credit: Tanisha Bassan

There is strong evidence to show that subject-specific experts frequently fall short in their informed judgments, particularly when it comes to forecasting.

In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators and found that as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts attached high probability to low-frequency events in error, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than, to quote the experimenter, those of “a dart-throwing chimp.”
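Tetlock’s finding is easiest to see with a proper scoring rule. Here is a minimal sketch – invented numbers, plain Python – using the Brier score (lower is better) to compare an overconfident forecaster with an indiscriminate 50/50 “chimp” baseline:

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [0, 0, 0, 0, 1]            # a low-frequency event: occurs once in five
expert   = [0.9, 0.8, 0.9, 0.7, 0.9]  # confidently overpredicts the rare event
chimp    = [0.5] * 5                  # guesses 50/50 every time

print(brier(expert, outcomes))  # 0.552 - worse than the baseline
print(brier(chimp, outcomes))   # 0.250
```

Under this kind of scoring, calibrated humility beats confident misprediction.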

I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month, an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.

Continue reading

AI, Showbiz, and Cause for Concern (x2)

A “Virtual” or “Digital” Human. Credit: Digital Domain

The #AIShowBiz Summit 3.0 – which took place last month – sits apart from the often dizzying array of conferences vying for the attention of Bay Area tech natives. Omnipresent AI themes like “applications for deep learning”, “algorithmic fairness”, and “the future of work” are set aside in favor of rather more dazzling conversations on topics like “digital humans”, “AI and creativity”, and “our augmented intelligence digital future.”

It’s not that there’s anything wrong with the big recurring AI themes. On the contrary, they are front-and-center for very good reason. It’s that there’s something just a little beguiling about this raft of rather more spacey, futuristic conversations delivered by presenters who are unflinchingly “big picture”, while still preserving necessary practical and technical detail.

Continue reading

Tech for Humans, Part 2: Designing a Human-Centered Future

YouTheData.com is delighted to feature a two-part guest post by Andrew Sears. Andrew is passionate about emerging technologies and the future we’re building with them. He’s driven innovation at companies like IBM, IDEO, and Genesis Mining with a focus on AI, cloud, and blockchain products. He serves as an Advisor at All Tech is Human and will complete his MBA at Duke University in 2020. You can keep up with his work at andrew-sears.com.


In Part 1 of this series, we explored the paradox of human-centered design as it is commonly practiced today: well-intentioned product teams start with the goal of empathizing deeply with human needs and desires, only to end up with a product that is just plain bad for humans.

In many cases, this outcome represents a failure to appreciate the complex web of values, commitments, and needs that define human experience. By understanding their users in reductively economic terms, teams build products that deliver convenience and efficiency at the cost of privacy, intimacy, and emotional wellbeing. But times are changing. The growing popularity of companies like Light, Purism, Brave, and DuckDuckGo signifies a shift in consumer preferences towards tech products that respect their users’ time, attention, privacy, and values.

Product teams now face both a social and an economic imperative to think more critically about the products they put into the world. To change their outcomes, they should start by changing their processes. Fortunately, existing design methodologies can be adapted and augmented to build products that more fully appreciate the human complexity of their users. Here are three changes that product teams should make to put the “human” in human-centered design:

Continue reading