“GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing.”
Regina Rini, Philosopher
In recent years, the AI circus really has come to town and we’ve been treated to a veritable parade of technical aberrations seeking to dazzle us with their human-like intelligence. Many of these sideshows have been “embodied” AI, where the physical form usually functions as a cunning disguise for a clunky, pre-programmed bot. Like the world’s first “AI anchor”, launched by a Chinese TV network and — how could we ever forget — Sophia, Saudi Arabia’s first robotic citizen.
But last month there was a furore around something altogether more serious. A system The Verge called "an invention that could end up defining the decade to come." Its name is GPT-3, and it could certainly make our future a lot more complicated.
So, what is all the fuss about? And how might this supposed tectonic shift in technological development change the lives of the rest of us?
Writing for Aeon last week, Martin Parker, a professor of organization studies at the University of Bristol in the UK, relayed the origins of the word “management”, explaining:
“It is derived from the Italian mano, meaning hand, and its expansion into maneggiare, the activity of handling and training a horse carried out in a maneggio – a riding school. From this form of manual control, the word has expanded into a general activity of training and handling people. It is a word that originates with ideas of control, of a docile or wilful creature that must be subordinated to the instructions of the master.”
Though we might prefer to believe that its meaning has evolved since then to convey something more respectful and collaborative, it is still the case that workplace leaders and managers have mastery over their staff. Promotions, opportunities, hirings and firings — all life-altering events — are subject to their authority.
It is a mighty responsibility, and abuse of managerial power can have devastating consequences.
There is strong evidence that subject-matter experts frequently fall short in their informed judgments, particularly when it comes to forecasting.
In fact, in 2005 the University of Pennsylvania’s Professor Philip E. Tetlock devised a test for seasoned and respected commentators which found that, as their level of expertise rose, their confidence also rose – but not their accuracy. Repeatedly, Tetlock’s experts mistakenly attached high probabilities to low-frequency events, relying on intuitive causal reasoning rather than probabilistic reasoning. Their assertions were often no more reliable than those of, to quote the experimenter, “a dart-throwing chimp.”
I was reminded of Tetlock’s ensuing book and other similar experiments at the Future Trends Forum in Madrid last month, an event that (valiantly) attempts to convene a room full of thought leaders and task them with predicting our future. Specifically, in this case, our AI future.