Meta AI Chief Says Artificial Intelligence is Not a Nuclear Bomb

AsianFin – Yann LeCun, one of the founding fathers of artificial intelligence (AI), asserted on Thursday that AI is not a nuclear bomb.

LeCun, a 2018 Turing Award winner and Meta's chief AI scientist, is well respected in the industry. Geoffrey Hinton, another pioneer in AI who previously worked at Google, had directly criticized LeCun, saying: "the harm of making AI large models open source is equivalent to making the atomic bomb open source."

LeCun told an AsianFin editor that Hinton was wrong and AI is not a "nuclear bomb."

"It (AI) is not a bomb; it is not meant to kill people. AI is meant to make people smarter. So I don't understand this analogy at all. Moreover, AI is not that dangerous. I think people believe these systems are much smarter than they actually are. The real questions about AI are whether its future capabilities will be sufficient, and whether you think it will disrupt some things. We can reach systems with human intelligence, and we can consider how to make them safer," LeCun told AsianFin.

Asked whether he would still insist on open sourcing if AI's future capabilities became strong enough to impact humanity, LeCun clearly said yes.

LeCun emphasized that open sourcing (AI technology) is important and very meaningful.

Also on Thursday, LeCun said in a post on X that large language models (LLMs) cannot achieve human intelligence. He advised students interested in building the next generation of AI systems not to pursue a major focused on large models.

"I'm working on the next generation of AI systems myself, not on LLMs. So technically, I'm telling you 'compete with me', or rather, 'work on the same thing as me, because that's the way to go, and the more the merrier,'" said LeCun.

LeCun, born in 1960, is a French computer scientist. He obtained an engineering degree from the École Supérieure d'Ingénieurs en Électronique et Électrotechnique in Paris and a Ph.D. in computer science from the University of Paris-Sorbonne in 1987. During his Ph.D. studies, he proposed the prototype of the backpropagation algorithm for neural networks. He did postdoctoral work at the University of Toronto under the guidance of Hinton.

LeCun has made significant contributions to machine learning, computer vision, mobile robotics, and computational neuroscience. His most famous work is the use of convolutional neural networks in optical character recognition and computer vision, earning him the title of the father of convolutional networks. He co-created the DjVu image compression technology and co-developed the Lush language.

With the rapid development of generative AI, Meta, the company where Yann LeCun works, has invested billions of dollars in developing multimodal models like Llama, aiming to catch up with other competitors such as Microsoft, OpenAI, and Google.

Currently, LeCun leads a team of about 500 people at Meta's Fundamental AI Research (FAIR) lab. They are dedicated to creating the next generation of AI technology, which can develop common sense and learn how the world works in a manner similar to humans. This approach is known as "world models."

LeCun and Hinton hold conflicting views on the future of AI. Recently, Hinton stated publicly that although ChatGPT will make AI research more efficient and change how it is done, in the long run AI is developing too quickly and could surpass humans; humans therefore need to manage the risks the technology brings. Moreover, contrary to the common claim that large models lack reasoning capabilities, Hinton thinks these models must actually perform a certain degree of reasoning, and that this ability will grow stronger as model scale increases. In his view, this is a direction worth pursuing with full effort.

LeCun, however, has disputed Hinton's view.

LeCun believes that large models are not the right direction for AI development: generative AI products like ChatGPT will never achieve human-like reasoning and planning abilities. Instead, he believes that building "super intelligence" in machines is the true path to artificial general intelligence (AGI).

But he also admits that this technological vision may take a decade to realize.

LeCun argues against trying to reach human-level intelligence by scaling up large models, because these models can only answer prompts accurately when given the right training data, making them "inherently unsafe."

"Don't study large models; these technologies are in the hands of big companies, and there's not much you can do. You should research the next generation of AI systems to overcome the limitations of large models," LeCun said in a conversation.

LeCun said that the evolution of large models is superficial and limited: these models learn only when human engineers intervene and train them on new information, rather than drawing conclusions naturally as humans do.

LeCun first published a paper on his "world modeling" vision in 2022. Since then, Meta has released two research models based on this approach. He said the lab is testing different ideas to achieve human-level intelligence because "there is a lot of uncertainty and exploration involved, so we can't determine which one will succeed or be ultimately adopted."

"We are at the point where we think we are on the cusp of maybe the next generation AI systems," LeCun pointed out.

LeCun believes the technology will power AI agents that users can interact with through wearable technology, including augmented reality or “smart” glasses, and electromyography (EMG) “bracelets”.

To make AI truly useful, it needs to possess human-level intelligence, LeCun noted.

(Interviewer: Zhao Hejuan; Author: Lin Zhijia; Editor: Hu Runfeng)
