SGGP
In an effort to tighten AI oversight, key European Union (EU) lawmakers have agreed on several amendments to a draft artificial intelligence (AI) regulation aimed at curbing generative AI tools such as ChatGPT.
Panasonic Connect Corporation has helped employees in Japan use AI systems to improve productivity. Photo: Nikkei Asia
Expected to be the first comprehensive law
The vote on the draft AI regulation on May 11 marked a new step toward formal legislation governing AI tools like ChatGPT. The European Parliament's consumer protection and civil liberties committees approved the draft text, which affirms the need to regulate the use of AI in the EU while promoting innovation in the field and respecting fundamental rights: AI must serve people, society and the environment.
After two years of discussion, the AI Act is expected to become the EU's first comprehensive law regulating this technology. The draft adds provisions banning the use of facial recognition technology in public places (a point predicted to cause friction among EU countries) and of algorithmic tools that predict criminal behavior, and it sets requirements for generative AI applications such as OpenAI's ChatGPT and for biometric checks. Such applications must notify users that their output is created by machines, not humans.
The document also calls for additional criteria to identify high-risk areas for AI applications, thereby limiting the scope of tool design. AI tools will be classified according to the level of risk each can pose, and governments and companies using them will be subject to obligations that vary with that level of risk.
The draft text will be submitted to the full European Parliament for approval next month before being sent to EU member states for review and finalization. While the list proposed by the European Commission (EC) already includes use cases for AI in critical infrastructure, education, human resources, public order and immigration, MEPs also want to add thresholds delineating threats to security, health and fundamental rights.
Japan will take the lead
Many countries are also seeking solutions that both keep their domestic industries from falling behind and address citizens' privacy concerns.
In Asia, the Japanese government’s first-ever Artificial Intelligence Strategy Council was convened to establish a framework to guide the development of AI. Speaking to the council, Prime Minister Fumio Kishida said: “AI has the potential to change our economic society in a positive way, but it also has risks. It is important to address both issues appropriately.”
The use of AI technology will help enhance industrial competitiveness and solve problems across society, but AI must be used responsibly and risks to users must be minimized. So far, however, discussions have focused mainly on technical aspects. Japanese experts urge that future discussions take a broader perspective, with participation from fields such as business and law. Nikkei Asia notes that one challenge Japan faces is how to raise the level of domestic AI development while regulating the general use of AI, with security, privacy and copyright as the key issues.
AI is starting to disrupt everyday life as fake images and videos and machine-generated text proliferate, raising concerns ranging from national security to misinformation. Digital and technology ministers from the Group of Seven (G7) have agreed to compile guidelines on the development and use of general AI by the end of this year. With Japan holding the G7 presidency in 2023, Prime Minister Kishida has said Japan will take the lead in formulating international rules to make the most of the promise of generative AI and to deal with its risks.
Like Japan, the White House announced last week that it would invest $140 million to establish seven AI research centers and publish guidelines on the use of this advanced technology, with the goal of creating rules that minimize risks without hindering AI-based innovation. Speaking to the Council of Advisors on Science and Technology, US President Joe Biden emphasized that AI can help tackle some very difficult challenges, such as disease and climate change, but that potential risks to society, the economy and national security must also be addressed, and that technology companies have a responsibility to ensure their products are safe before they reach the market.