A group of prominent international experts met in Beijing last week, where they identified "red lines" in AI development, including the creation of bioweapons and the launching of cyberattacks.
In a statement days after the meeting, the scholars warned that a common approach to AI safety is needed to prevent “catastrophic or even existential risks to humanity in our lifetimes.”
“At the height of the Cold War, international scientific and government cooperation helped prevent nuclear catastrophe. Humanity must once again work together to prevent the catastrophe that could arise from unprecedented technology,” the statement said.
Experts at the International Dialogue on AI Safety in Beijing have identified “red lines” in AI development. Photo: FT
Signatories include Geoffrey Hinton and Yoshua Bengio, who are often described as "godfathers" of AI; Stuart Russell, a professor of computer science at the University of California, Berkeley; and Andrew Yao, one of China's most prominent computer scientists.
The statement follows last week's International Dialogue on AI Safety in Beijing, a meeting attended by Chinese government officials in a sign of tacit official approval of the forum and its outcomes.
US President Joe Biden and Chinese President Xi Jinping met in November last year and discussed AI safety, agreeing to establish a dialogue on the issue. The world's leading AI companies have also met privately with Chinese AI experts in recent months.
In November 2023, 28 countries, including China, along with leading AI companies, agreed at UK Prime Minister Rishi Sunak's AI Safety Summit to a broad commitment to work together to address existential risks stemming from advanced AI.
In Beijing last week, experts discussed the threats associated with the development of artificial general intelligence (AGI), AI systems that equal or surpass human capabilities.
“The core focus of the discussion was the red lines that no powerful AI system should cross and that governments around the world should impose in the development and deployment of AI,” Bengio said.
These red lines would ensure that “no AI system can replicate or improve itself without explicit human approval and support” or “take actions that unduly increase its power and influence.”
The scientists added that no system should "significantly enhance the ability of actors to design weapons of mass destruction, violate the Biological or Chemical Weapons Conventions," or be able to "automatically conduct cyberattacks that result in serious financial loss or equivalent harm."
Hoang Hai (according to FT)