This article was originally written for the RE•WORK guest blog. This week YouTheData.com founder, Fiona McEvoy, will speak on a panel at the San Francisco Summit.
The world is changing, and that change is being driven by new and emerging technologies. They are reshaping the way we behave in our homes, workspaces, public places, and vehicles, and with respect to our bodies, pastimes, and associates. All the while, we are creating new dependencies and placing increasing amounts of faith in the engineers, programmers, and designers responsible for these systems and platforms.
As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?
While some in the industry will see these questions as quibbling and obstructive, many are waking up to their importance.
Recently we’ve seen a sharp upsurge in conversations around the relatively new topic of AI ethics, and some technology leaders have begun reaching out to those beyond their own disciplines. Specifically, they are soliciting advice from experts in the humanities, who have a much longer history of wrangling with these difficult ethical questions. And yet there is still a degree of opacity when it comes to exactly how these new interactions are playing into product development.
For reasons that are somewhat understandable, at present much of this tech ethics talk happens behind closed doors, and typically only engages a handful of industry and academic voices. Currently, these elite figures are the only participants in a dialogue that will determine all of our futures. At least in part, I started YouTheData.com because I wanted to bring “ivory tower” discussions down to the level of the engaged consumer, and be part of efforts to democratize this particular consultation process. As a former campaigner, I place a lot of value in public awareness and scrutiny.
To be clear, the message I wish to convey is not a criticism of the worthy academic and advisory work being done in this field (indeed, I have some small hand in this myself). It’s about acknowledging that engineers, technologists – and now ethicists, philosophers and others – still ultimately need public assent and a level of consumer “buy in” that is only really possible when complex ideas are made more accessible.
And don’t be mistaken: I’m not talking about crowd-sourcing ethics, but about introducing new levels of transparency around how companies and technologists decide which actions should be privileged for the greater good. To this end, collaborative efforts like the Partnership on AI (to name just one) have noble objectives and substantial resources, but their work is still a long way from penetrating the general public psyche.
Many of my peers in the humanities would perhaps baulk at the idea I am forwarding: pushing for the involvement of non-experts when qualified academics have struggled (and continue to struggle) to get their feet under the tech table. Indeed, when Cathy O’Neil, author of the epoch-defining Weapons of Math Destruction, wrote this New York Times article back in November and asserted that ethical scholars were “asleep at the wheel” when it came to holding tech companies to account, it was met with widespread incredulity. In response to the “ethical awakening”, it seems many tech executives and engineers became ethics hobbyists, ignoring seasoned experts and fueling suspicions about underlying political agendas.
It seems to me that the two critical points here are complementary, rather than mutually exclusive. The first is that academic scrutiny is of the utmost importance, and technologists simply must continue to open themselves up further to critique, even where it threatens to reshape plans, as this excellent blog by Jeffery Moro argues that it might (and sometimes necessarily must). This includes extending a permanent invite to the tech party to participants from the humanities, social sciences, and other external disciplines. They are now a vital part of the dialogue about whatever comes next.
The second is that many more of us, from all disciplines, must make ourselves conduits to the rest of society. We must go out of our way to explain and inform where we are able, and to simplify complex problems as much as we can without losing all sense. It is not enough to publish papers, attend conferences, or confer with one another in private. If we are to achieve real and proper scrutiny – of both the good and the bad – we need a critical mass of informed support that goes well beyond the Valley and hallowed academic halls.