There is strong evidence that subject-specific experts frequently fall short in their informed judgments, particularly when it comes to forecasting.
In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators which found that, as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts mistakenly attached high probability to low-frequency events, relying upon intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than those of, to quote the experimenter, “a dart-throwing chimp.”
I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month – an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.
Now, let me be clear: my purpose here is not to question the validity of the gathering, or the intelligence of my very capable fellow delegates. Rather, it is merely to lament that prediction is difficult in any context – and this event served as proof that predicting the unpredictable is very difficult indeed.
Over two days of presentations and workshops, there were many important takeaways. Here are three of them:
- There is more tension than we acknowledge
It’s sometimes easy to live in a tech ethics bubble and assume that we’ve reached something of a consensus on the key concerns about the societal impact of tech. In truth, to many in the business community the conversation around responsible AI is still a pesky sideshow that threatens to strangle the core values of capitalism.
In Madrid, certain factions were resistant – and occasionally combative – in response to comments about the observable conflict between commercial business goals and the public good. At best, those who did pay lip service to the topic seemed to feel that the ethics of tech could mostly be tackled with automated solutions, self-determined industry guardrails, and lay intuition. At worst, societal concern was viewed as a mass of intangible and insoluble problems; a series of “trolley problems” better shelved for another day (and another crowd).
It seemed as though many industry players felt neither compelled nor incentivized to spend much time or money considering the long-term ramifications of their products or services. And that’s despite good indications that failing to identify and mitigate foreseeable infringements – like psychological damage, discriminatory practices, or the knowing exploitation of cognitive flaws – will be a misstep that stores up bigger problems in a world where sensitivity around the use of tech is growing, not subsiding.
- We don’t know what to do about jobs
Even with a number of incredibly smart people around the table, the question mark over the future of work remained as bold and troubling as ever. Although we heard the usual speculation about emerging industries yielding new and different roles, there wasn’t a great deal of clarity about who would be put out of work in the near term. Truck drivers? Administrators? Customer service workers? Supermarket cashiers? Blue-collar workers as a category? Suggestions were piecemeal, with no real sense of timing or identifiable “waves” of change.
Moreover, while Universal Basic Income was summarily dismissed as an idea, there were few alternatives on offer. Though several attendees concluded that governments across the globe should “prepare” for mass worklessness, few articulated exactly what that preparation would look like. A tax on machines? Perhaps, perhaps not. One thing that elicited semi-agreement was the idea that a true societal strategy for AI upheaval is now urgent.
As a group, we worked mostly on the assumption that mass AI automation is desirable – our favored future world being one in which everything is “optimized” for productivity, creativity, and success at scale. Yet, as one delegate perceptively pointed out, we are supposing that by automating basic process-driven tasks we open up time for execs to run at 100% each working day. This forgets that simple admin chores like timesheets and expense claims often give human workers necessary downtime that allows them to recharge.
The idea that the forward march of automation necessarily leads to an exponential boost in human performance could be a damaging misconception if it runs workers with true intelligence into the ground…
- Let’s not lose our minds (because we won’t)
When speculating about future worlds in which AI is prevalent, there is an undeniable tendency to assume that technological development implies human cognitive atrophy. It does not. As machines get smarter, there is every likelihood that we’ll get smarter alongside them – particularly as adaptive and intelligent education evolves.
Yet, in spite of this, many conversations presume that we will collectively neglect to scrutinize and challenge AI where it encroaches on or undermines us. We imagine our future selves as totally acquiescent to what we perceive as the “superior” speed and intelligence of forthcoming machines. That’s not reflective of the human character at all. Indeed, we are already gathering in droves to anticipate unintended consequences, call for regulation, and consider when we do and do not approve of AI deployment. It is highly unlikely that we will adopt a different, more complacent stance over the next few years.
Furthermore, it is simply unhelpful to envisage near-future scenarios in which humans act at the behest of AI when – in reality – the systems we currently have struggle to park cars and to tell humans from other mammals in photographs. Unhelpful because, for the near term (if not forever), human intervention will be absolutely critical, and we need to understand exactly where our semantic understanding is needed most.
More constructive would have been an attempt to get to grips with exactly how human intelligence can be used to help ground and marshal AI systems. Yet, strangely, some delegates chose instead to lose themselves in imaginings that seemed lifted straight from the pages of science fiction novels. They are not alone. The temptation seems unavoidable for a great many commentators…
To be frank, gatherings like the Future Trends Forum are absolutely to be encouraged (and I very much look forward to reading the resultant report). We should be having this dialogue at scale, in a whole range of different environments, and with an unlimited number of stakeholders.
But although, statistically speaking, our predictions are unlikely to hit the mark, I would venture that using realistic scenarios as our “jumping-off point” would dramatically bolster our chances.