In a newly released update to its AI principles, Google's parent company Alphabet has outlined how it plans to use AI in the future. Most notably, it has removed a promise not to use AI to build weapons, surveillance tools, or "technologies that could cause widespread harm."
Google's Red Lines on AI
The phrase “Google’s AI red line” first emerged in 2018, when employees protested Project Maven, the company’s AI collaboration with the US Department of Defense. At the time, more than 4,000 Google employees signed a petition demanding the project be ended and that the company never “build technology for war.”
Google subsequently declined to renew its contract to build AI tools for the Pentagon. The company also drew a red line, pledging not to pursue certain AI applications: weapons; technologies that gather or use information for surveillance in violation of internationally accepted norms; and technologies that cause, or are likely to cause, widespread harm or that contravene widely accepted principles of international law and human rights.
The decision to draw a red line on AI with weapons has kept Google from participating in military deals signed by other tech giants, including Amazon and Microsoft.
However, amid sweeping changes in the AI race, Google has decided to withdraw that promise. The move has stirred controversy not only within Google but also signals a significant shift by Silicon Valley technology companies toward the defense industry.
Google is divided internally
According to Business Insider, the updated AI principles have sparked a strong backlash among Google employees, who have voiced their frustration on internal message boards. A meme showing CEO Sundar Pichai querying Google's search engine with "how to become a weapons contractor?" has drawn particular attention.
Another employee created a meme asking, “Are we the bad guys for lifting the ban on AI for weapons and surveillance?” Still, with more than 180,000 employees, the company likely also has voices that support working more closely with the US government and its military and defense customers.
Google's reasoning
A Google spokesperson did not immediately respond to a request for comment on the withdrawn “AI promise.” However, AI chief Demis Hassabis said the guidelines were evolving with a changing world and that AI would “protect national security.”
In a company blog post, Hassabis and James Manyika, Google's senior vice president of technology and society, said that as global competition for leadership in AI intensifies, Google believes AI should be guided by freedom, equality, and respect for human rights.
“We believe that companies, governments, and organizations share values and should work together to create AI that can protect people, drive global growth, and support national security,” they added.
The two executives said that billions of people now use AI in their daily lives. Artificial intelligence has become a general-purpose technology, a platform that countless organizations and individuals use to build applications, moving from a niche research topic in the lab to a technology as ubiquitous as mobile phones and the internet. Google's 2018 "AI oath," they argued, therefore needed to be updated accordingly.
Alphabet said it plans to spend $75 billion next year, largely to build AI capabilities and infrastructure.
Source: https://thanhnien.vn/google-rut-lai-loi-hua-khong-dung-ai-cho-quan-su-185250206161804981.htm