The two companies will partner with the US AI Safety Institute, a unit of the National Institute of Standards and Technology (NIST), a US federal agency. The agreement is seen as an important step in the oversight of AI technology, which has been a major public concern since the launch of OpenAI's ChatGPT.
“This is just the beginning, but an important milestone in the effort to responsibly manage the future of AI,” said Elizabeth Kelly, director of the US AI Safety Institute.
Under the agreement, the US AI Safety Institute will provide feedback to both companies on potential safety improvements to their models, both before and after they are released to the public. The institute will also work closely with the UK AI Safety Institute during this process.
“The partnership with the US AI Safety Institute leverages their extensive expertise to rigorously test our models before widespread deployment. This enhances our ability to identify and mitigate risks, promoting responsible AI development,” said Jack Clark, co-founder and head of policy at Anthropic.
The move is part of an effort to implement the White House executive order on AI issued in 2023, which aims to create a legal framework governing the deployment of AI models in the US.
But while the federal government is pursuing a voluntary approach, lawmakers in California, America's tech hub, passed a state-level AI safety bill on August 28. If signed by the governor, the bill would impose stricter regulations on the AI industry.
Sam Altman, CEO of OpenAI, has expressed support for regulating AI at the national level rather than the state level, arguing that this would help avoid the risk of hindering research and innovation in the field of AI.
The move by leading tech companies and the US government shows a trend toward balancing innovation and safety in the rapidly evolving field of AI.
Source: https://nhandan.vn/openai-va-anthropic-chia-se-moi-nhat-ai-voi-chinh-phu-my-post827601.html