Davos 2024 convened a panel of experts to terrify and reassure us in equal measure

You’d have to be living under a rock to entirely swerve the avalanche of AI predictions for 2024. They typically fall into three camps – extreme AI optimism, extreme AI pessimism, and a sort of vanilla-flavored corporate edition whereby the author safely predicts things that are already happening.
Much more interesting was a meeting at Davos 2024 yesterday, where a panel of undisputed AI silverbacks gathered to discuss the trajectory of AI’s sexiest zeitgeist – large language models (LLMs).
Child rearing
Yann LeCun, Chief AI Scientist at Meta and a passionate seeker of artificial general intelligence (AGI), gave an eminently repeatable layman’s explanation of where we are with LLMs now vs. human intelligence, using the example of a child acquiring visual knowledge.
He told the intimate Davos crowd that a human four-year-old takes in about 20MB of data per second through the optic nerve. Using a rough estimate of that child having 16k waking hours in their first four years, and there being 3,600 seconds in each hour, LeCun says this indicates that the average four-year-old has taken in roughly 50x more data than the text used to train the very biggest LLMs we currently have.
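For anyone who wants to check the back-of-envelope math, here is a minimal sketch in Python. The optic-nerve bandwidth and waking-hours figures are the ones LeCun quoted; the ~2×10^13-byte size assumed for the largest LLM training sets is our own placeholder, not a number given on stage, chosen because it is roughly the scale cited elsewhere for today’s biggest text corpora.

```python
# Back-of-envelope version of LeCun's estimate (assumed figures flagged below)
bytes_per_second = 20e6      # ~20 MB/s through the optic nerve (as quoted)
waking_hours = 16_000        # waking hours in a child's first four years (as quoted)
seconds_per_hour = 3_600

child_visual_data = bytes_per_second * waking_hours * seconds_per_hour
print(f"Four-year-old's visual intake: {child_visual_data:.2e} bytes (~1 petabyte)")

# Assumed size of the text corpus behind the very largest LLMs -- not a panel figure
llm_training_bytes = 2e13    # ~20 TB of text

print(f"Ratio: {child_visual_data / llm_training_bytes:.0f}x")  # ~58x, the same order as the quoted 50x
```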
It certainly gives much-needed perspective.
But what’s his point? Well, it’s that despite the impressive progress AI whizzes have made in training models with web-based text data, if they want to create a system that is truly intelligent, they will have to develop new architectures that allow machines to learn from visual/sensory information. Perhaps via video. LeCun reminded us that 16k hours of video is equivalent to about 30 minutes of uploads to YouTube…
Video won’t cut it
So, theoretically doable. But currently, not so much. LeCun also said he’s spent the last nine years trying to solve exactly this.
And while Daphne Koller was the first of the group to propose that we’re “only just scratching the surface” when it comes to data fuel for LLMs, she, for one, was unconvinced that baby AI could be matured to the point of true understanding through training on data from virtual experiences and video alone.
Koller contends that to go beyond mere associations and get to something that feels like the causal reasoning humans use, systems will need to interact with the real world in an embodied way – for example, gathering input from technologies that are “out in the wild”, like augmented reality and autonomous vehicles.
She added that such systems would also need to be given the space to experiment with the world to learn, grow, and go beyond what a human can teach them. The world is complicated, and replicating it “in silico” for the purposes of training would be insufficient. Transformer co-author and Cohere CEO Aidan Gomez agreed that real-world experience is a critical part of the path to AGI.
AGI: Remind us why?
But for those of us who didn’t grow up with our heads in sci-fi books or glued to the movie adaptations, it hasn’t necessarily been obvious why we need to push on to achieve or surpass human-level intelligence, or why that would be the objective of projects like ChatGPT.
Undoubtedly, there’s a degree of scientific learning that falls out of this kind of ambition (as with space exploration…), but the idea that a machine could ingest the equivalent visual data of a four-year-old child in 30 minutes is perhaps more unnerving than it is exciting.
When challenged on the need to pursue AGI, Yann LeCun scoffs, but for someone who has dedicated his life to the cause, his chosen analogy seems flawed: “Should we build airplanes that go faster than birds?”, he practically snorts.
This doesn’t feel equivalent (and the other panelists weren’t too sure either).
Borrowing an efficient function from another part of nature and replicating it for human advantage feels significantly different from trying to build something that outpaces its creator in all aspects of human cognition. It certainly feels less risky.
Understanding application
But while LeCun remains committed to his admirable but questionable ideal, the likes of Kai-Fu Lee, Daphne Koller, Aidan Gomez, and Andrew Ng felt that the biological measure of intelligence is, perhaps, the wrong one for artificial intelligence.
Maybe we don’t really need to raise the AI baby at all?
Koller offered that there are plenty of worthy uses for LLMs that don’t actually require reasoning, and suggested that we instead think about the societal challenges we struggle to address as humans and build super-smart computers to tackle them. Sounds sensible.
Lee and Gomez, both of whom admitted that they’re more keen to establish the commercial value of LLMs, expressed that there is a lot that these models can do to create value without having to emulate humans. To quote Lee: “The world is already turned upside down as it is.”
What next?
It felt fitting that, even at the very highest levels of AI credibility, the conversation ultimately came back to a relatively humdrum one about use cases.
It’s been said in various places that 2024 is the year of AI application (there’s a prediction for you!). And while it’s to be expected that some researchers will remain dedicated to earth-shattering breakthroughs, ultimately the crowds at Davos – and at every other AI gathering this year – will be much hungrier for examples of how LLMs can change the world in the near term, beyond drafting emails and writing marketing materials.