Bengio's next move revealed: world models plus mathematical proofs to guarantee that AI systems behave as intended
Yoshua Bengio, one of the three giants of deep learning, has revealed his next move, and it concerns AI safety:
He has joined a project called Safeguarded AI as its Scientific Director.
According to reports, Safeguarded AI aims to:
Build AI systems responsible for understanding and mitigating the risks posed by other AI agents, by combining scientific world models with mathematical proofs.
Its main focus is on quantitative safety guarantees.
The project is backed by the UK's Advanced Research and Invention Agency (ARIA), which reportedly plans to invest a total of £59 million (about RMB 537 million) over time.
Bengio said:
If you are planning to deploy a technology, then given the potentially serious consequences of abnormal AI behavior or misuse, you need to make a strong case, ideally backed by solid mathematical guarantees, that your AI system will behave as intended.
The Safeguarded AI project is divided into three technical areas (TAs), each with its own goals and budget.
Officials said that after joining, Bengio will focus especially on TA3 and TA2, and will provide scientific strategic advice across the entire program.
ARIA also plans to invest £18 million (about RMB 164 million) to establish a non-profit organization to lead the R&D for TA2.
The Safeguarded AI project director is David "davidad" Dalrymple, a former senior software engineer at Twitter, who joined ARIA last September.
To mark Bengio's arrival, Dalrymple also posted a photo of the two of them on X (formerly Twitter).
Earlier, David "davidad" Dalrymple, Yoshua Bengio, and others co-authored a paper on building AI systems responsible for understanding and mitigating the risks posed by other AI agents.
It proposes a framework called "Guaranteed Safe AI," which quantifies the safety assurance of an AI system through the interaction of three core components: a world model, a safety specification, and a verifier.
The paper also assigns L0-L5 safety levels to the strategies used to create the world model.
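To make the interaction of these three components concrete, here is a minimal, hypothetical Python sketch. Every name in it (world_model, safety_spec, policy, verifier) is illustrative and not taken from the paper or ARIA's program, and it uses a crude Monte Carlo estimate where the real framework targets far stronger guarantees.

```python
import random

# Illustrative sketch of the Guaranteed Safe AI loop: a world model,
# a safety specification, and a verifier interact to produce a
# quantitative safety estimate for a candidate policy.
# All names and dynamics here are hypothetical.

def world_model(state, action):
    """Stochastic model of how the environment responds to an action."""
    noise = random.gauss(0, 0.1)
    return state + action + noise

def safety_spec(state):
    """Safety specification: the state must stay within fixed bounds."""
    return abs(state) <= 1.0

def policy(state):
    """The AI system under scrutiny: here, a trivial proportional controller."""
    return -0.5 * state

def verifier(n_rollouts=10_000, horizon=50):
    """Monte Carlo 'verifier': estimates the probability that the policy,
    rolled out through the world model, ever violates the safety spec.
    (A real verifier would aim for formal bounds, not sampled estimates.)"""
    violations = 0
    for _ in range(n_rollouts):
        state = random.uniform(-0.5, 0.5)
        for _ in range(horizon):
            state = world_model(state, policy(state))
            if not safety_spec(state):
                violations += 1
                break
    return violations / n_rollouts

if __name__ == "__main__":
    p_unsafe = verifier()
    print(f"Estimated probability of unsafe behavior: {p_unsafe:.4f}")
```

In the Guaranteed Safe AI framework itself, the verifier is meant to deliver auditable, quantitative guarantees rather than sampled estimates; the sketch only shows how a world model, a safety specification, and a verifier fit together around a candidate AI system.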
"AI risk" has always been one of the focus topics of attention of industry leaders.
Hinton left Google to freely discuss the risks of AI.
Previously, there were large-scale scenes of AI giants such as Ng Enda, Hinton, LeCun, and Hassabis "spraying" online.
Andrew Ng once said:
My biggest concern about AI is that its risks are being over-hyped, leading to open source and innovation being stifled by heavy-handed regulation.
Some people spread fear of AI wiping out humanity simply to make money.
DeepMind CEO Hassabis said:
This is not scaremongering. If we do not start discussing the risks of AGI now, the consequences could be serious.
I don't think we would want to begin taking precautions only on the eve of danger.
Bengio also published an open letter, "Managing AI Risks in an Era of Rapid Progress," together with Hinton, Andrew Yao (Yao Qizhi), Ya-Qin Zhang, and other AI experts.
It argues that humanity must take seriously the possibility of AGI surpassing human capabilities in many key areas within this decade or the next, and recommends that regulators gain a comprehensive understanding of AI development, especially the large models trained on multibillion-dollar supercomputers.
Just a month ago, Bengio also wrote an article titled "Reasoning through arguments against taking AI safety seriously," in which he shared his latest thinking; interested readers may want to take a look.