The model can reason, solve complex mathematical problems, and answer scientific research questions, an ability widely considered an important step toward artificial general intelligence (AGI): machines with human-like cognitive abilities.
OpenAI said it was particularly “cautious” about how it would bring the o1 model to the public given its advanced capabilities. Photo: Getty Images
According to the Financial Times, OpenAI rated these new models as posing a “medium” risk for issues related to chemical, biological, radiological, and nuclear (CBRN) weapons, the highest rating the company has ever given its models. This means the technology has “significantly improved” the ability of experts to create biological weapons.
Experts warn that AI software capable of detailed reasoning increases the risk of misuse, especially in the hands of bad actors.
Professor Yoshua Bengio, a world-leading AI scientist at the University of Montreal, has emphasized that this medium risk level increases the urgency of AI regulation such as SB 1047, a bill currently under debate in California. The bill would require AI developers to take measures to reduce the risk of their models being misused to develop biological weapons.
According to The Verge, the security and safety of AI models have become a major concern as technology companies such as Google, Meta, and Anthropic race to build and improve advanced AI systems.
These systems could deliver significant benefits, helping people complete tasks and assisting across many areas, but they also pose safety and social responsibility challenges.
Cao Phong (according to FT, Reuters, The Verge)
Source: https://www.congluan.vn/openai-thua-nhan-mo-hinh-ai-moi-co-the-duoc-su-dung-de-tao-ra-vu-khi-biological-hoc-post312337.html