‘Godfathers of AI’ Yoshua Bengio and Yann LeCun weigh in on potential of human-level AI, emerging risks and future frontiers at NUS lectures
With ChatGPT and other artificial intelligence (AI) tools now able to write college essays and turn photographs into anime art, how close are we to the rise of a superhuman AI that poses an existential threat to humans?
Renowned AI pioneers, Professor Yoshua Bengio and Professor Yann LeCun, presented contrasting perspectives on the future of the field in separate lectures that marked the launch of the NUS120 Distinguished Speaker Series last week. The series is part of a line-up of events held to celebrate the University’s 120th anniversary this year.
Prof Bengio, the founder and scientific advisor of Mila – Quebec AI Institute, warned of the potentially catastrophic consequences of AI, calling for “guardrails” to prevent the technology from turning against humans. Meanwhile, Prof LeCun, who is Vice President and Chief AI Scientist at Meta, homed in on the significant limitations of generative AI and expressed scepticism that it could lead to machines with human-level intelligence.
The two experts and recipients of the Turing Award, regarded as computing’s highest honour, were visiting Singapore for the 13th International Conference on Learning Representations, held in parallel with Singapore AI Research Week. Their lectures at NUS on 25 and 27 April 2025 attracted more than 1,000 registered participants and have since garnered over 11,000 views on YouTube.
Rise of the machines
In recent years, Large Language Models (LLMs) — AI systems such as ChatGPT that can generate language by processing huge amounts of data — have shown exponential progress in their capacity for abstract reasoning.
Prof Bengio, who is also a Full Professor of Computer Science at Université de Montréal, said rapid advancements in AI could pave the way for vastly powerful systems that might develop their own goals and become our competitors.
“They could use their intelligence in many ways, to at the minimum disrupt our societies, and at worst get rid of us,” he said. “It’s something that sounds like science fiction but, unfortunately, is very plausible.”
Part of this stems from the fact that a lot of AI is trained on data from human behaviour.
“Every living thing is trying to preserve itself…but it might not be a great idea to create machines that want to preserve themselves against our wishes,” Prof Bengio pointed out. “Unfortunately, that is what we are starting to observe.”
Researchers have observed AI engaging in deceptive behaviour to ensure its own “survival”, be it by cheating in chess, or lying to ensure a certain line of code survives a system update.
This is especially risky when the AI is “agentic” — autonomous and capable of making decisions without human intervention — and programmed to seek reward maximisation.
AI systems also have advantages over humans in several ways, Prof Bengio added. Besides being "immortal" and easy to replicate, they can communicate with one another quickly and absorb vastly more information.
Role of Large Language Models
Prof LeCun, who also serves as the Jacob T. Schwartz Professor at New York University, was more circumspect about the attainability of human-level AI — known in the industry as Artificial General Intelligence (AGI), or, as Meta calls it, Advanced Machine Intelligence (AMI).
Machines will eventually be smarter than humans, he said, but that could take anywhere from 10 to 100 years. A four-year-old, Prof LeCun noted, would have seen 50 times more data than the biggest LLMs. Even cats and dogs, he argued, were more advanced, and the odds of achieving AGI with LLMs were near zero.
“LLMs are very useful, there’s no question about that. But if you are really interested in reaching human-level AI, you should not work on LLMs,” he said.
These models face other severe limitations, particularly when it comes to "System 2" thinking, in which an AI is able to reason and draw conclusions from information it has not encountered before. Generative AI, of which LLMs are a part, also does not work well for images and videos.
One step towards addressing this gap is the Joint Embedding Predictive Architecture (JEPA), a non-generative AI model by Meta that learns by predicting parts of a video or image that have been masked, Prof LeCun noted. This method discards unpredictable information, rather than trying to predict every pixel.
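As a rough, hypothetical illustration of the masked-prediction idea described above (not Meta's actual JEPA implementation, and with a toy random linear map standing in for a learned encoder), the key point is that the prediction loss is computed between *embeddings* of masked and visible regions, not between raw pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 8 patches, each a 4-dimensional feature vector.
patches = rng.normal(size=(8, 4))

# Stand-in for JEPA's learned encoder: a random linear map to a
# lower-dimensional embedding space (hypothetical, for illustration).
W_enc = rng.normal(size=(4, 3))
embed = patches @ W_enc          # (8, 3) patch embeddings

# Mask half of the patches; the model only "sees" the context patches.
mask = np.array([0, 1, 0, 1, 1, 0, 1, 0], dtype=bool)
context = embed[~mask]           # embeddings of visible patches

# A crude "predictor": guess each masked patch's embedding as the
# mean of the context embeddings (a real predictor would be learned).
pred = np.tile(context.mean(axis=0), (mask.sum(), 1))

# JEPA-style loss: distance in embedding space, not pixel space.
# Pixel-level detail the encoder discards never enters the loss.
loss = float(np.mean((pred - embed[mask]) ** 2))
print(loss)
```

Because the comparison happens after encoding, unpredictable low-level detail (exact pixel noise) can be discarded by the encoder rather than painstakingly regenerated, which is the contrast Prof LeCun draws with pixel-generating models.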
Guardrails against bad actors
While AGI is a “global public good” with enormous potential, several precautions are needed, stressed Prof Bengio.
He called for a safer type of AI system modelled after the Platonic idea of a scientist — where the AI is completely honest, free from self-interest, and focuses on generating theories rather than acting autonomously in the real world. This type of AI system could be used to control agentic AI, and serve as a “guardrail” against it, he noted.
Prof Bengio also highlighted the importance of AI regulation. AI organisations need independent oversight, otherwise they may start cutting corners and overlooking safety and security in a race to get to AGI first.
“When the interests of a company as a profit-making organisation and society diverge, you (have) a problem,” said Prof Bengio, who was responding to a question from moderator Professor Simon Chesterman, NUS’ Vice Provost (Educational Innovation) and Dean of NUS College.
“Economic domination can easily turn into political domination — military domination — because these systems can be used to develop new technology,” Prof Bengio argued, adding that advanced AI could also be exploited by terrorists in cyber and other attacks.
While Prof LeCun agreed that AI control should not be in the hands of just a few corporate entities, he said that building objective-driven AI with human-like intelligence was necessary to achieve what he saw as the future of AI — a world where everyone is walking around with intelligent virtual assistants. The solution, he argued, lay in open-source AI.
Singapore as a nexus
NUS President Professor Tan Eng Chye, who delivered remarks at both events, was confident AI would continue to shape human civilisation.
“When applied thoughtfully, AI can help resolve some of our biggest challenges,” he said, citing its applications in managing healthcare needs of rapidly ageing populations and optimising urban systems for efficiency. “The question remains: how will AI continue to innovate?”
On the market implications of AI, Prof Bengio warned that the rise of super-AI companies could trigger an economic upheaval if these firms use their superintelligent AI technologies to dominate markets by offering superior services at lower costs. “They could potentially wipe out the economy of the planet, except for their profit,” he cautioned.
Prof LeCun offered a different take, based on what he believed to be the consensus among economists: “We are not going to run out of jobs because we are not going to run out of problems to solve.”
He stressed the importance of "deep technical knowledge", and advised students to pursue subjects with a "longer shelf life". Faced with the choice between a course on mobile app programming and one on quantum mechanics, he recommended the latter, because the underlying methods and thinking skills it imparts are likely to remain relevant even as technologies evolve.
AI development also presented opportunities for Singapore, suggested Prof LeCun, responding to a question from the moderator, NUS AI Institute Director and Provost's Chair Professor of Computer Science, Professor Mohan Kankanhalli.
Currently, the data used to train AI systems does not represent the full diversity of cultures, languages and value systems in the world, and AI-relevant data should not be filtered only by AI companies in China or on the West Coast of the United States, Prof LeCun added.
He laid out a future where organisations in the US, China, and Europe would provide foundation models — which are expensive to build — as open-source infrastructure that could be trained collaboratively with players in other countries.
This is where Singapore, and its universities such as NUS, could play a key role: working with others in Southeast Asia to gather data to train worldwide foundation models, and providing computing infrastructure, expertise and data centres.
“It could be a nexus for Asia,” Prof LeCun added.