Nick Clegg, president of global affairs at Meta, said the company will rely on a set of invisible markers built into the files themselves. Meta will apply labels to such content posted to its Facebook, Instagram, and Threads services, signaling to users that images which may appear to be real photographs are actually digital creations generated by artificial intelligence. The company already labels content created with its own AI tools, according to Reuters.
Once the new system is up and running, Meta will do the same for images generated on services from OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Alphabet. The announcement offers an early look at an emerging standard that tech companies are developing to mitigate the harms of generative AI, which can produce fake but realistic-looking content from simple text prompts.
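The article does not spell out how the markers work, but the common industry approach is to embed a provenance tag in the image's metadata, for example an XMP packet carrying the IPTC `DigitalSourceType` value for algorithmically generated media. The sketch below illustrates that idea only; it is an assumption for demonstration, not Meta's actual detection pipeline, and a production system would parse the metadata properly rather than scan raw bytes.

```python
# Illustrative sketch: look for an AI-provenance marker embedded in a file's
# XMP metadata packet. The marker value follows the IPTC DigitalSourceType
# convention for AI-generated media; the byte-scanning approach is a
# simplification for demonstration purposes.

# IPTC DigitalSourceType token commonly used to flag AI-generated images
AI_MARKER = b"trainedAlgorithmicMedia"

def has_ai_marker(path: str) -> bool:
    """Return True if the file contains an XMP packet with the AI marker."""
    with open(path, "rb") as f:
        data = f.read()
    # Locate the XMP packet boundaries in the raw bytes
    start = data.find(b"<x:xmpmeta")
    if start == -1:
        return False  # no XMP metadata present at all
    end = data.find(b"</x:xmpmeta>", start)
    packet = data[start:end] if end != -1 else data[start:]
    return AI_MARKER in packet
```

A real implementation would also have to survive re-encoding and metadata stripping, which is why the industry effort pairs metadata tags with markers embedded in the pixels themselves.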
AI-generated image labeling will help curb misinformation and scams
The approach builds on a pattern the companies established over the past decade of coordinating the removal of banned content, such as depictions of mass violence and child exploitation, from their platforms.
Clegg believes companies can reliably label AI-generated images at this point, while noting that more sophisticated audio and video content labeling tools are still being developed.
In the immediate future, Meta will begin requiring users to label altered audio and video content and will impose penalties if they fail to do so. However, Clegg said there is currently no viable mechanism for labeling text generated by AI tools like ChatGPT.
Meta's independent oversight board has criticized the company's policy on misleadingly edited videos, arguing that such content should be labeled rather than removed. Clegg said Meta's new move could help the company classify that content more appropriately.