Under its latest policy, Meta has defined two types of AI systems that it may decline to release in the future. The "Frontier AI Framework" document identifies these as "high risk" and "severe risk" systems, suggesting the company is taking a more cautious stance toward AI technology.
Previously, CEO Mark Zuckerberg had pledged to one day make artificial general intelligence (AGI) widely available.
According to Meta's definitions, both "high risk" and "severe risk" systems could aid in cyber, biological, or chemical attacks; the difference is that "severe risk" systems could lead to catastrophic outcomes that cannot be mitigated.
Meta offers several example scenarios, such as the automated compromise of a well-protected corporate environment or the proliferation of a high-impact biological weapon.
These are the “most pressing” scenarios that the firm believes could arise with the release of a powerful AI system.
Classifying a system is based not on any single empirical test but on input from internal and external experts.
If a system is determined to be "high risk," Meta will restrict access to it internally and will not release it until mitigation measures reduce the risk to a moderate level. If a system is deemed "severe risk," Meta will apply unspecified protections to prevent the system from being exfiltrated and will halt development until it can be made less dangerous.
According to TechCrunch, the Frontier AI Framework appears to be intended to assuage criticism aimed at Meta's open approach to system development.
Meta often releases its AI technology openly, unlike OpenAI's closed approach. For the company, openness has both advantages and disadvantages: its Llama AI models have been downloaded hundreds of millions of times, but Llama has also reportedly been used by at least one adversary of the US to develop a defense chatbot.
By publishing the Frontier AI Framework, Meta may also be drawing a contrast with DeepSeek, the most prominent Chinese AI startup today. DeepSeek likewise pursues an open-source AI strategy, but its models have few safeguards and can easily be steered to produce malicious output.
According to Meta, by weighing both benefits and risks when deciding how to develop and deploy advanced AI, it is possible to deliver the technology to society in a way that preserves its benefits while keeping risk at an acceptable level.
(According to TechCrunch)
Source: https://vietnamnet.vn/meta-co-the-dung-phat-trien-cac-he-thong-ai-qua-rui-ro-2368745.html