ChatGPT sued
Recently, two American authors sued OpenAI in San Francisco federal court, claiming that the company used their works to “train” the popular artificial intelligence system ChatGPT.
Massachusetts writers Paul Tremblay and Mona Awad say ChatGPT mined data copied from thousands of books without permission, infringing their copyrights. Their lawsuit argues that ChatGPT produced "highly accurate summaries" of their works without authorization, which they say constitutes copyright infringement.
The Guardian quoted Andres Guadamuz, who studies intellectual property law at the University of Sussex, as saying this is the first copyright lawsuit over ChatGPT. Guadamuz added that the case will test the unclear "legal boundaries" surrounding the use of today's generative AI applications.
OpenAI sued for copyright infringement in AI training.
In journalism, artificial intelligence has raised a series of questions about its opportunities and challenges, as well as concern over its impact on the profession in general and on journalists' jobs in particular.
ChatGPT can generate highly complex text from simple user prompts, producing anything from essays and job applications to poems and fictional stories. It is a large language model, trained on billions of words drawn from the Internet; from those patterns, it predicts which words are likely to follow a given sequence.
However, the accuracy of its answers has been questioned. Scholars in Australia have found examples of the system fabricating references to websites and citing fake quotes. The use of artificial intelligence in journalism has likewise proved controversial.
Technology news site CNET used AI to generate articles that were then checked for errors by human editors before publication. The site acknowledged the program's limitations after tech news site Futurism revealed that more than half of the AI-generated articles had to be corrected. In one case, CNET was forced to issue corrections to an article riddled with basic errors.
But the potential for AI to create misinformation isn’t the only concern. There are also a host of legal and ethical issues to consider, including intellectual property (IP) ownership, content moderation, and the potential disruption of newsrooms’ current financial models.
Who owns the intellectual property and content publishing rights?
According to Mr. Le Quoc Minh - Member of the Party Central Committee, Editor-in-Chief of Nhan Dan Newspaper, Deputy Head of the Central Propaganda Department, Chairman of the Vietnam Journalists Association - if newsrooms start integrating AI to produce content, an important question arises: who owns the intellectual property and the rights to publish the content? Is it the news organization directing the AI platform, or the platform itself?
Le Quoc Minh noted that, unlike US law, UK law allows copyright protection for computer-generated works, although only individuals or organizations, never the AI itself, can own the intellectual property. In practice, this means that if a user has contributed little beyond basic prompts and the automated system has driven the creative process, the creator of the platform may be considered the "author" and owner of the intellectual product.
Editor-in-Chief Gideon Lichfield said they will not publish content written or edited by AI, and will not use AI-generated images or videos.
If more input is required, such as uploading documents to the system, and the AI serves only as a supporting tool, then the intellectual property in the output may belong to the user. In practice, journalists using AI need to check the platforms' terms of service carefully to assess their intellectual property provisions. Some platforms "grant" intellectual property rights to users, while others retain those rights and grant them under a "license" (possibly with restrictions on editorial use).
“Regardless of who owns the intellectual property, newsrooms must be prepared to take responsibility for any AI-generated content they publish – including the possibility that the content is deemed defamatory or misleading,” Minh said.
The editor-in-chief of Nhan Dan Newspaper added that so far, many AI tools do not “publish” answers to anyone other than the users themselves, and anyone using these technologies is responsible for the content they post. The biggest risk for newsrooms publishing AI-generated works is accidental infringement of third-party intellectual property rights. Journalists cannot know which images or text are used to train AI, or which are pulled in to create content on demand.
“Newsrooms must accept the fact that seemingly original AI-generated content may be heavily influenced by — or directly copied from — third-party sources without permission,” Minh stressed.
Minh also noted that the terms of service of AI platforms do not guarantee that their output will not infringe copyright, so newsrooms would have no legal recourse if sued by authors. For example, photo agency Getty Images has initiated legal proceedings against Stability AI - the company behind the image-generation tool Stable Diffusion - for "illegally copying and processing millions of copyright-protected photos owned or represented by Getty Images."
“Even if Stability AI avoids a copyright ruling, it could still be found to have violated Getty Images' terms of service, which prohibit 'any data mining, robotics, or similar data collection methods.' Media outlets found to be using AI to exploit Getty Images content without permission could also be sued,” Minh said.
In a positive development, technology news site Wired recently became the first news outlet to publish official regulations on AI, outlining how they plan to use the technology.
The regulations, published by Editor-in-Chief Gideon Lichfield in early March, make a series of commitments about what the newsroom will not do: it will not publish content written or edited by AI and will not use AI-generated images or videos, employing AI only to brainstorm article ideas, suggest compelling headlines, or draft content for social media. This can be seen as a positive and necessary step at a time when AI is raising considerable legal and ethical controversy in journalism.
Hoa Giang