ChatGPT is being sued.
Recently, two American authors sued OpenAI in a San Francisco federal court, alleging that the company used their work to "train" the popular artificial intelligence system ChatGPT.
Authors Paul Tremblay and Mona Awad of Massachusetts allege that ChatGPT exploited data copied from thousands of books without permission, violating authors' copyrights. Their lawsuit argues that ChatGPT created "highly accurate summaries" of their work without author permission, constituting copyright infringement.
The Guardian quoted Andres Guadamuz, a researcher in intellectual property law at the University of Sussex, as saying that this is the first lawsuit concerning intellectual property rights against ChatGPT. Guadamuz believes that this lawsuit will expose the unclear "legal boundaries" in the use of innovative AI applications today.
In journalism, there have been numerous questions about both the opportunities and the challenges, as well as the concerns about the impact of artificial intelligence on journalism in general and on journalists' jobs in particular.
ChatGPT can generate highly complex text from simple user prompts, producing anything from essays and job applications to poems and even fictional stories. ChatGPT is a large language model, trained on billions of words of everyday text drawn from the internet. From this data, it predicts which words and sentences are likely to follow a given sequence.
However, the accuracy of its answers is questionable. Scholars in Australia have found examples of the system fabricating web references and false quotations. The use of artificial intelligence in journalism is also highly controversial.
The technology news website CNET uses AI to generate articles, which are then proofread by editors before publication. After the technology news site Futurism revealed that more than half of the articles generated with AI tools required editing for errors, CNET acknowledged the program's limitations. On one occasion, the site was forced to issue corrections to an article containing numerous basic mistakes.
But the potential for AI to generate misinformation isn't the only concern. There are many other legal and ethical issues to consider, including intellectual property (IP) rights, content moderation, and even the potential disruption to existing news organizations' financial models.
Who owns intellectual property and content distribution rights?
According to Mr. Le Quoc Minh - Member of the Central Committee of the Communist Party of Vietnam, Editor-in-Chief of Nhan Dan Newspaper, Deputy Head of the Central Propaganda Department, and President of the Vietnam Journalists Association - if newsrooms begin integrating AI to produce content, a crucial question arises: Who owns the intellectual property and the rights to publish the content? Does it belong to the news organization or to the AI platform?
Mr. Le Quoc Minh noted that, unlike US law, British law allows rights in computer-generated works to be protected, although only individuals or organizations can "own" intellectual property, never the AI itself. In practice, this means that if the user contributes little beyond basic commands and the automated decision-making has driven the creative process, then the creator of the platform can be considered the "author" and owner of the intellectual product.
However, if the user must supply a large amount of input data by uploading documents to the system, and the AI is merely a supporting tool, then intellectual property rights to the output may belong to the user. In practice, journalists using AI need to check the platforms' terms of service carefully to assess their intellectual property provisions. Some platforms "grant" intellectual property rights to users, while others retain those rights and grant them under a "license" (possibly with restrictions on use by news organizations).
"Regardless of who owns the intellectual property rights, news organizations must be prepared to take responsibility for all AI-generated content they publish – including the possibility that the content may be considered defamatory or misleading," Mr. Minh said.
The editor-in-chief of Nhan Dan Newspaper added that, to date, many AI tools do not "publish" answers to anyone other than the user themselves; anyone using these technologies is responsible for the content they publish. The biggest risk for newsrooms publishing AI-generated works is the accidental infringement of third-party intellectual property rights. Journalists cannot know which images or text were used to train the AI, or which were used to create content on demand.
"Newspapers must accept the reality that 'seemingly original' content created by AI can be heavily influenced by - or directly copied from - unauthorized third-party sources," Mr. Le Quoc Minh emphasized.
Mr. Minh also noted that the terms of service of AI platforms do not guarantee that the results will not infringe copyright, and thus news organizations will have no legal basis if sued by authors. For example, the image hosting company Getty Images has begun legal proceedings against Stability AI - the parent company of the image creation tool Stable Diffusion - on the grounds of "unauthorized copying and processing of millions of copyrighted images owned or represented by Getty Images."
"Even if Stability AI avoids the copyright lawsuit, it will still be considered to have violated Getty Images' terms of service, which explicitly prohibit 'any data mining, robotics, or similar data collection methods.' News organizations deemed to have used AI to interfere with Getty Images' content without permission could also be sued," Minh said.
In a positive development, the technology news site Wired recently became the first news outlet to publish an official policy on AI, outlining how it intends to use the technology.
The policy, posted by Editor-in-Chief Gideon Lichfield in early March, outlines a series of commitments regarding what the newsroom will not do. For example, it will not publish content written or edited by AI, nor will it use AI-generated images or videos. Instead, the newsroom will only use AI to generate story ideas, suggest catchy headlines, or create effective social media content. This can be considered a positive and necessary step, given the current controversy surrounding the legal and ethical aspects of AI in journalism.
Hoa Giang