SGGP
After Europe began drafting what is set to become the world's first law on artificial intelligence (AI), four leading US technology giants launched a group called the Frontier Model Forum.
The core goal of the Frontier Model Forum is to promote responsible AI. Photo: THINKWITHNICHE
The parties come together
The group, which includes Anthropic, Google, Microsoft, and OpenAI, the maker of ChatGPT, said the core goals of the Frontier Model Forum are to reduce the risks posed by AI platforms and to develop industry standards. The forum's main objectives are to: promote AI safety research to support development, mitigate risks, and standardize safety evaluations; identify best practices for model development and deployment; help the public understand the nature, capabilities, limitations, and impacts of the technology; and collaborate with policymakers, academics, civil society, and companies to share knowledge about risks and safety.
Tech giants have traditionally resisted increased regulation, but this time the industry has largely changed its stance on the risks posed by AI. The companies behind the Frontier Model Forum initiative have pledged to share best practices with one another and to make information publicly available to governments and civil society.
Specifically, the Frontier Model Forum will focus on large-scale but still nascent machine learning platforms that take AI to new levels of sophistication, which is also a potential risk factor. The forum will also support the development of applications that address major societal challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and countering cyber threats.
Destructive potential
The US companies announced the Frontier Model Forum just days after US senators warned, at a July 25 hearing before a subcommittee of the US Senate Judiciary Committee, of the risk of AI being used for biological attacks. Recently, seven major technology companies, including OpenAI, Alphabet, and Meta, made voluntary commitments to the White House to develop systems for labeling AI-generated content, from text and images to audio and video, in order to give users transparency and make the technology safer.
While AI is not yet capable of creating biological weapons, it is a “medium-term” risk that could pose a serious threat to US national security within the next two to three years, noted Dario Amodei, CEO of the AI company Anthropic. That is because large-scale biological attacks would no longer be limited to actors with specialized knowledge of the field.
The world is facing an explosion of content created by generative AI, and from the US to Europe to Asia, regulation is gradually tightening. According to CNN, starting in September the US Senate will hold a series of nine additional workshops for members to learn how AI could affect jobs, national security, and intellectual property. Ahead of the US, if an agreement is reached, the European Union will pass the world's first law on AI governance next year, expected to take effect in 2026.
The benefits that AI brings to society are undeniable, but there are growing warnings about the potential risks of this new technology. In June, UN Secretary-General Antonio Guterres supported a proposal from several AI executives to establish an international AI watchdog, similar to the International Atomic Energy Agency (IAEA).
Returning to the Frontier Model Forum, Microsoft President Brad Smith said the initiative is an important step in bringing technology companies together to promote the responsible use of AI for the benefit of all humanity. Similarly, OpenAI Vice President of Global Affairs Anna Makanju said advanced AI technology has the potential to bring profound benefits to society, and that this potential requires oversight and governance.