The news media industry, in particular, is grappling with deep and complex questions about what Generative Artificial Intelligence (GenAI) means for journalism. While there is little risk that AI will replace journalists entirely, concerns remain around information accuracy, plagiarism, and data privacy.
To get a sense of where things stand in the industry, the World Association of News Publishers (WAN-IFRA) surveyed journalists, editors, and other news professionals worldwide in late April and early May about how their organizations are using GenAI tools.
Half of newsrooms are already working with GenAI
What's notable is that nearly half (49%) of survey respondents said their newsrooms are using AI tools.
Overall, attitudes towards GenAI in the industry are overwhelmingly positive: 70% of survey respondents said they expect GenAI tools to be useful to journalists and newsrooms. Only 2% said they don’t see any value in the short term, while another 10% are unsure. A further 18% said the technology needs to develop further to be truly useful.
Content summarization is the most popular use
While there has been a somewhat alarmist reaction to ChatGPT, with some questioning whether the technology could replace journalists, relatively few newsrooms are using GenAI tools for reporting. Instead, the tools are mostly used to aggregate and summarize information. Other common tasks include simplified research/search, text editing, and workflow improvements.
However, the use of AI may become more widespread as more newsrooms look to adopt new technologies and integrate them into their operations. Respondents said they expect AI to deliver more personalization, translation, and workflow improvements in the future.
Very few newsrooms are trained to use GenAI
Approaches to governing the use of GenAI tools in newsrooms vary widely. Currently, publishers largely take a hands-off approach: nearly half of survey respondents (49%) said their journalists have the freedom to use the technology as they see fit, while 29% said they do not use GenAI at all.
Only one-fifth of respondents (20%) said they had editorial guidance on when and how to use GenAI tools, while 3% said use of the technology was not permitted at their workplace.
Misinformation and plagiarism
There have also been cases where news outlets published AI-generated content only to discover that the information was inaccurate; 85% of survey respondents highlighted this as a specific issue they have encountered with GenAI.
Other concerns include plagiarism and copyright infringement, followed by privacy and data protection. What is needed now is the development of AI policies by regulators and news organizations, along with staff training and open communication about the responsible use of GenAI tools.
Hoang Ton (according to WAN-IFRA)