Meta is collaborating with industry partners to establish clear, shared methods for labeling AI-generated content across its platforms, including Facebook, Instagram, and Threads. Users can expect labels on AI-generated images whenever Meta detects industry-standard indicators, giving them transparency about where content comes from. Meta has already been labeling photorealistic images produced by its own assistant, Meta AI, since its launch, using the tag "Imagined with AI."
Recognizing the growing need for clarity amid the blurring lines between human and synthetic content, Meta is extending this labeling approach to content created with other companies' AI tools.
To achieve this, Meta is working on common technical standards with industry partners to identify AI-generated content. The "Imagined with AI" label relies on visible markers, invisible watermarks, and embedded metadata within image files.
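One of the signals mentioned above is metadata embedded in the image file itself. The IPTC photo-metadata standard, for example, defines a `DigitalSourceType` value of `trainedAlgorithmicMedia` to mark synthetic imagery, and XMP metadata is stored as a plain UTF-8 XML packet inside JPEG and PNG files. The sketch below is only an illustration of the general idea, not Meta's actual detector: a naive byte scan for that marker, where a production system would parse the XMP packet properly and also check cryptographic provenance signals.

```python
# Illustrative sketch: detect the IPTC "trainedAlgorithmicMedia" marker
# inside an image's embedded XMP metadata. XMP is a plain UTF-8 XML
# packet inside JPEG/PNG files, so a raw byte scan is enough for a demo.
# Real detectors parse the metadata properly; this is not Meta's method.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType value for synthetic media

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw file bytes contain the IPTC synthetic-media marker."""
    return AI_MARKER in image_bytes

# Simulated XMP packet from a generator that labels its output:
fake_xmp = (
    b"<x:xmpmeta xmlns:x='adobe:ns:meta/'>"
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType>"
    b"</x:xmpmeta>"
)

print(looks_ai_generated(fake_xmp))        # True
print(looks_ai_generated(b"\x89PNG...."))  # False
```

A marker like this is trivial to strip, which is why it is paired with invisible watermarks and, increasingly, cryptographically signed provenance data.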
Additionally, Meta is collaborating with other companies, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, to create tools that can identify invisible markers at scale.
While the focus has primarily been on images, Meta also acknowledges that AI-generated audio and video are harder to detect reliably. It will therefore give users a disclosure tool to label AI-generated video or audio they post, with penalties possible for non-compliance.
“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI,” said Nick Clegg, Meta's President of Global Affairs.
To stay ahead in an increasingly adversarial space, Meta is exploring various options, including developing classifiers to detect AI-generated content without visible markers and implementing watermarking technologies like Stable Signature.
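To make the idea of an "invisible watermark" concrete, the toy sketch below hides a bit string in the least-significant bits of pixel values. This is not how Stable Signature works (there, the watermark is trained into the image generator's decoder itself and is designed to survive edits); a simple LSB scheme only illustrates the core concept of a mark that is imperceptible to viewers but machine-recoverable.

```python
# Toy illustration of an invisible watermark: hide a bit string in the
# least-significant bits of pixel intensities. Changing the lowest bit
# shifts a pixel value by at most 1, which is visually imperceptible.
# This is NOT Stable Signature; it only demonstrates the general idea.

def embed(pixels: list[int], bits: str) -> list[int]:
    """Overwrite the lowest bit of the first len(bits) pixels with the payload."""
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)
    return out

def extract(pixels: list[int], n: int) -> str:
    """Read back the lowest bit of the first n pixels."""
    return "".join(str(p & 1) for p in pixels[:n])

pixels = [200, 131, 54, 77, 90, 12, 255, 0]
marked = embed(pixels, "1011")
print(extract(marked, 4))  # "1011"
print(max(abs(a - b) for a, b in zip(pixels, marked)))  # at most 1
```

A naive scheme like this is destroyed by resizing or recompression, which is exactly why Meta points to more robust, learned watermarks and classifier-based detection for adversarial settings.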
AI serves as both a sword and shield for Meta, as demonstrated by its use in enforcing community standards, detecting hate speech, and potentially expediting the removal of harmful content. The company is optimistic about the role of generative AI tools, such as Large Language Models, in enforcing policies more efficiently, especially during critical periods like elections.