Last week, I was reading an excellent Wired interview with Kate Crawford of AI Now, when a remark she made lodged itself in my head. It has been percolating there ever since, probably because the topic is a rather important one: if we’re talking about the social impact of tech, shouldn’t the conversation invite and include those with expertise outside of the field of technology?
“Of course!” you might chime in. “We should all have a say in what shapes our future!” Agreed. And yet, despite noticeably more public conversation about the social impact and ethics of tech in recent months, it often feels as though many of the louder voices are those of scientists and tech experts who are simply ‘turning their hand’ to the humanities.
For example, yesterday I listened to a podcast interview with AI pioneer Chris Boos. The conversation focused on the familiar topic of how an AI workforce might displace the current human one. Inevitably, given the lack of predictability that surrounds this subject, a fair amount of speculation issued forth. Even so, I was surprised at the confidence with which Boos assigned certain traits to humanity, boldly stating that it was about time that “machines do the work that machines are meant to do, and people do the work that people are meant to do.”
This naturally prompts the question: “what are people meant to do?” According to Boos, people are here to “make new experiences”, which seems to broadly translate as “people should work in the service industry when their jobs are mechanized”, and that this will somehow be a lot better for their psychology (early on in the interview, Boos intimated that office jobs make people depressed anyway).
When the podcast host queried whether everyone would be suited to “front of house”-type roles, the AI expert expanded his theory to say that there are a couple of other categories of human (that’s three in total, if you’re counting): people like artists and scientists, who want to discover and build things, and people like pioneers and risk-takers, who like to…well, find things and take risks. It all seems so easy when Boos explains it: “I would say that people are creative or they like to follow people.”
It seems to me that this means there are people like Chris Boos, and then there are the rest of us; the proletariat, if you will. And we’ll all be okay in the service industry. Hopefully.
Now, it’s important to say here that this is not a criticism of tech speculators per se. Like Boos, they’re often enviably smart, and have some great observations to make. But whatever they are experts in, they aren’t experts in this. They are novice speculators at best. These theories (as far as I’m aware) are based on the same loose interpretations that any vaguely worldly and relatively well-read person could make.
Yet, at the same time as such statements are being made, there are experts all over the globe – anthropologists, ethnologists, political scientists, psychologists, sociologists, historians, and (yes!) philosophers – who have been thinking about us humans, our environments, our societies, our needs, and our wants for decades. What’s more, these people are sometimes building upon work laid down over millennia. How astonishing that such deep wells of even deeper wisdom aren’t being more readily tapped. Too often it is as though those who build the technologies of the future also get to shape its institutions, without consulting these experts or the bastions of knowledge that sit within their disciplines.
As Crawford tells Wired:
“We can actually do an enormous amount by increasing the level and rigor of research into the human and social impacts of these technologies. One place we think we can make a difference: Who gets a seat at the table in the design of these systems? At the moment, it’s driven by engineering and computer science experts who are designing systems that touch everything from criminal justice to healthcare to education. But in the same way that we wouldn’t expect a federal judge to optimize a neural network, we shouldn’t be expecting an engineer to understand the workings of the criminal justice system.”
Many corners of society seem to think of the work of the humanities as a kind of professional “beard-stroking” – the sort of armchair activity that any eager amateur might wade into. To some degree, this may even be correct. We all can (and arguably should) consider and form opinions about the great challenges to our progression – like how to establish a workable ethics for the future. But one thing should not be ignored: people have been thinking about these matters for a long, long time now. Many of the dead ends and rabbit holes are already known, and there are toolkits that can aid discovery. Technology companies and associated commentators will do a lot of people, and themselves, a great disservice by ignoring them.