In a recent interview with TIME, ahead of receiving the TIME100 Impact Award in Dubai, Meta’s chief AI scientist, Yann LeCun, pushed back on near-term expectations of achieving human-level artificial intelligence.
He discussed the limitations of current AI technologies, the future of open-source AI, and dismissed the notion that AI could pose an existential threat to humanity.
Yann LeCun discusses the barriers to achieving AGI, Meta’s open-source approach, and AI risk: https://t.co/SOwBoqbVtb
— TIME (@TIME) February 13, 2024
The reality of artificial general intelligence
LeCun expressed skepticism that artificial general intelligence (AGI) can be reached simply by training LLMs on ever more data and computing power. He stressed that LLMs are inherently limited, however large they scale: they tend to hallucinate, lack real-world understanding, and cannot reason or plan beyond their training, so they are not a direct path to AGI.
“We see today that those systems hallucinate; they don’t really understand the real world.”
LeCun said he prefers a different term, arguing in no uncertain terms that human intelligence is not as general as the label AGI suggests. Instead, he proposes working toward machines with human-level intelligence: systems that understand the physical world, can reason, and can plan purposeful actions to achieve a goal, all things current AI lacks. To illustrate the gap, he compared an AI’s learning process to a baby’s.
“A baby learns how the world works in the first few months of life. We don’t know how to do this [with AI].”
While a large language model can be trained on all the text publicly available on the web, it never gets the embodied, sensory-rich experience a child acquires in its first years. That difference in how the two learn goes a long way toward explaining why AI lacks common sense and reaches conclusions a human would find obviously wrong. By LeCun’s reckoning, even cat-level intelligence remains distant and hard to reach.
The future of open-source AI
LeCun also highlighted the importance of open-source AI development, an approach Meta has embraced by releasing its advanced language models, including Llama 2. He argued that as AI systems become more embedded in online interaction, these technologies must be accessible to, and adaptable by, people around the world. This approach, he believes, will ensure that AI assistants can serve the diverse languages, cultures, and values found across the globe.
“So the future has to be open source, if nothing else, for reasons of cultural diversity, democracy, and diversity.”
In the same vein, when asked about the dangers of open-sourcing models, LeCun dismissed the concerns. He stated that current AI systems do not meaningfully lower the bar for malicious uses such as developing bioweapons, because they cannot generate genuinely new ideas. He also labeled the notion that AI could pose a fundamental threat to human existence as “preposterous.”
He rejected the idea that intelligence equates to a desire for dominance, pointing out that the drive for power is not correlated with intelligence. He also believes that as AI technology advances, it will be possible to design systems with reliable safety guardrails that keep them useful and safe. LeCun anticipates that AI built by ethical researchers will always stay a step ahead of any potentially harmful AI, protecting society.
“What needs to be done is for society in general, the good guys, to stay ahead by progressing. And then it’s my good AI against your bad AI.”
Looking ahead: expectations from Llama 3
In closing, LeCun teased the development of Llama 3, hinting at improved performance and new features such as video multimodality. Although still in training, the upcoming model reflects Meta’s ongoing effort to advance AI technology while addressing the ethical and societal implications of its widespread use.