We are living in the era of the Fourth Industrial Revolution, in which artificial intelligence (AI) is gradually becoming an indispensable part of many areas of social life. The media, as the bridge of information between the public and events, cannot stand apart from that trend.
Ms. Nguyen Thi Hai Van, Director of the Center for Journalism Training, Vietnam Journalists Association, at the workshop on AI technology. (Source: Vietnam Journalists Association)
To make the most of the advantages AI brings, communicators need to equip themselves with the knowledge to use AI effectively, while ensuring reliability and ethics in the transmission of information.
From the “heat” of AI
It is clear that AI (artificial intelligence) is one of the hottest keywords today. In September 2024, a Google search for the keyword “AI” returned 15.9 billion results in 0.3 seconds; the keyword “AI tools” returned more than 3.4 billion results in 0.4 seconds. These huge numbers reflect the global reach of, and interest in, AI and AI-based tools.
At present, more and more AI tools are appearing for various fields, including the media industry. Besides the widely known ChatGPT, many AI applications are being developed in specialized directions to serve specialized tasks. It is not difficult to list examples: Bing AI, Claude, and Zapier Central for chatbot tasks; Jasper, Copy.ai, and Anyword for content creation; Descript, Wondershare, and Runway for video production and editing; DALL-E 3, Midjourney, and Stable Diffusion for image generation; Murf and AIVA for audio content; and so on. Recently, the giant Amazon also introduced its own AI tools, Video Generator and Image Generator, with the aim of “inspiring creativity and bringing more value”.
Although AI tools vary widely in scale and level of specialization, they essentially share two core components: the algorithms on which they are built, and the data used to “train” them.
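To make those two components concrete, the sketch below is a toy, hypothetical illustration only: a tiny text classifier in Python whose behavior is shaped entirely by its algorithm and its training data. It does not represent how any of the commercial tools named above is actually built.

```python
# A minimal, hypothetical sketch of the two shared ingredients the article
# names: an algorithm and training data. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Ingredient 1: training data -- here, a toy set of labeled headlines.
texts = [
    "Parliament passes new press law",     # news
    "You won a free cruise, click now",    # spam
    "Central bank raises interest rates",  # news
    "Miracle pill melts fat overnight",    # spam
]
labels = [1, 0, 1, 0]  # 1 = news, 0 = spam

# Ingredient 2: an algorithm -- a bag-of-words logistic regression classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())

# "Training" combines the two: the algorithm fits its parameters to the data.
model.fit(texts, labels)

# The trained tool's judgments are shaped entirely by what it was trained on,
# which is why biased or manipulated data is an ethical concern.
print(model.predict(["Parliament announces new budget"]))  # expected: [1]
```

The point of the sketch is the dependency it makes visible: change the examples or the labels, and the “same” tool gives different answers. That is exactly why the sourcing and integrity of training data recur in the ethical questions below.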
Ethical Control of AI Use in Media
The benefits that AI tools bring are undeniable, and with the rapid pace of technology updates there will be ever more specialized AI tools in every corner of the media industry, handling tasks from the simple to the complex. Along with this overwhelming development, many questions arise about ethical control in the development and use of AI tools in the media. What happens if a tool's algorithms and data are manipulated in ways that harm the community? Who guarantees the intellectual property rights of the input data on which a tool is trained? Who assesses the level of harm such tools may cause?
Is there inequality between those who have access to AI tools for a given task and those who do not? Questions have even been raised about the potential for uncontrolled harm from AI tools, especially in sensitive areas that can affect many people at large scale, such as the media and social networks.
Recognizing these concerns, many organizations, associations, governments, and even the companies and corporations developing AI tools have issued recommendations, explanations, and in some cases codes of conduct on ethical control in AI technology. UNESCO's Recommendation on the Ethics of Artificial Intelligence, adopted by 193 Member States in 2021, states that “the rapid rise of artificial intelligence (AI) has created many opportunities globally, from assisting in healthcare diagnosis to enabling human connection through social media and creating labor efficiency through automated tasks.

However, these rapid changes also raise profound ethical concerns. Such risks associated with AI have begun to compound existing inequalities, leading to further harm to already disadvantaged groups…”. The Recommendation also requests UNESCO to develop tools to support Member States, including the Readiness Assessment Methodology, “a tool for governments to build a comprehensive picture of their readiness to deploy AI ethically and responsibly for all their citizens”.
In its global approach, UNESCO has launched the Global AI Ethics and Governance Observatory, which it says “provides information on countries’ readiness to adopt AI ethically and responsibly. It also hosts the AI Ethics and Governance Lab, which brings together contributions, impactful research, toolkits, and positive practices on a wide range of AI ethics issues…”
In addition to global organizations such as UNESCO, many professional associations are developing their own codes of conduct. For example, the IABC (International Association of Business Communicators), an association with thousands of members worldwide, has developed a set of principles guiding the ethical use of AI by communications professionals, intended to show IABC members how the IABC Code of Ethics applies to AI. These guidelines may be updated and supplemented over time as AI technology develops. Among them are several specific points a communications professional should follow, such as:
“AI resources used must be human-driven, to create positive and transparent experiences that foster respect and build trust in the media profession. Practitioners must stay informed about the professional opportunities and risks that AI tools present, and must communicate information accurately, objectively, and fairly. AI tools can be subject to errors, inconsistencies, and other technical issues, so human judgment is needed to independently verify that AI-generated content is accurate, transparent, and free of plagiarism.
They must protect the personal and/or confidential information of others and not use it without permission; evaluate AI outputs with human engagement and an understanding of the community they aim to serve; and eliminate bias to the best of their ability while remaining sensitive to the cultural values and beliefs of others.
They must independently fact-check and verify their own outputs with the necessary professional rigor, ensuring that third-party documentation, information, or references are accurate, properly attributed and verified, and properly licensed or authorized for use; must not attempt to conceal or disguise the use of AI in their professional output; and must acknowledge the open nature of AI and the issues it raises for confidentiality, including the entry of false, misleading, or deceptive information…”
The companies and corporations that own, develop, and sell AI tools understand them better than anyone: they know the underlying algorithms on which the tools operate and the data on which they are trained. These companies therefore also need to publish the ethical principles guiding their AI development, and some have indeed taken up the issue.
Google, for instance, has committed not to develop AI in areas where there is a significant risk of harm, proceeding only where it believes the benefits significantly outweigh the risks and appropriate safeguards are in place. The areas it rules out include weapons or other technologies whose primary purpose or deployment is to cause or directly facilitate injury to people; technologies that collect or use information for surveillance in violation of internationally accepted norms; and technologies whose purpose contravenes generally accepted principles of international law and human rights.
On the security and safety front, Google pledges: “We will continue to develop and implement robust security and safety measures to avoid unintended outcomes that create risks of harm. We will design our AI systems to be appropriately cautious and seek to develop them in accordance with best practices in AI safety research. We will incorporate our privacy principles into the development and use of our AI technologies. We will provide opportunities for notice and consent, encourage privacy-protective architectures, and provide appropriate transparency and controls over data use.”
Similarly, Microsoft has published a statement of its AI Principles and Approach, emphasizing: “We are committed to ensuring that AI systems are developed responsibly and in a way that ensures people's trust...”. Other large technology companies investing heavily in AI tools, such as Amazon and OpenAI, have likewise made commitments of their own.
History offers many examples of the duality of technology, with its positive and negative sides. After all, however “high-tech” the platform, AI still rests on algorithms and data developed and collected by humans; at the same time, most AI products are part of the business plans of the companies that own them.
There are therefore always potential risks, both on the technical side and from the teams that develop and manage the products. The real issue is the scale of impact AI tools can have on large numbers of people, even on the socio-economic life of a community. The timely attention now being paid to ethical control in the use of AI is a welcome sign, with participation ranging from large international organizations such as the United Nations, to governments, to industry associations, and, most importantly, the technology developers themselves.
However, just as AI tools are continuously released in new versions, each more sophisticated and complex than the last, the codes, principles, and guidelines must also be updated and supplemented in a timely manner. More than that, they need to be proactive in preventing, limiting, and controlling risk, keeping both developers and users within a framework in which compliance with ethical controls on AI technology in general, and by media workers in particular, can be achieved to the greatest effect.