In a world where AI-generated images are increasingly hard to distinguish from photographs taken by humans, Meta has updated its ‘Imagine with Meta AI’ tool.
According to the company’s blog, the update introduces invisible watermarks on all images the tool generates, a step toward transparency and traceability in AI-generated content.
The invisible watermarking feature is Meta AI’s response to growing concerns over the potential misuse of AI technologies in creating deceptive or misleading content. Unlike conventional watermarks, these are invisible to the human eye but can be detected with a corresponding model, and the approach is designed to be resilient to common image manipulations, including cropping, changes to color, brightness, or contrast, and screenshots.
Meta also launched a standalone product called 'Imagine with Meta AI'.
It's a free image generator — with invisible watermarks to increase transparency on AI images coming soon.
Try here: https://t.co/4S37p2Hy2j
— Rowan Cheung (@rowancheung) December 7, 2023
Meta’s decision to incorporate this feature into its AI-generated images aims to set a new standard for the industry, ensuring that the origins of AI-created content are traceable and transparent. The watermark is applied with a deep-learning model, reflecting Meta’s push to use technology to strengthen digital safety and integrity.
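Meta has not published the internals of that deep-learning model, but the basic mechanics of invisible watermarking can be illustrated with a classical spread-spectrum scheme: add a pseudorandom pattern, derived from a secret key, that is too faint to see, then detect it later by correlating the image against that same pattern. The Python sketch below is a minimal illustration under that assumption; the function names, key, strength, and threshold are all hypothetical, and this is not Meta’s actual method.

```python
# Minimal spread-spectrum watermarking sketch (illustrative only, not
# Meta's deep-learning method). Embed: add a faint secret-keyed pattern.
# Detect: correlate the image against the same pattern.
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a low-amplitude pseudorandom +/-1 pattern derived from `key`."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0.0, 255.0)

def detect_watermark(image: np.ndarray, key: int, threshold: float = 0.015) -> bool:
    """High normalized correlation with the secret pattern => watermarked."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    score = np.mean((image - image.mean()) * pattern) / image.std()
    return bool(score > threshold)

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(256, 256))   # stand-in for a generated image
marked = embed_watermark(img, key=42)
print(np.abs(marked - img).max())                # at most 2 intensity levels: invisible
print(detect_watermark(marked, key=42))          # True: watermark detected
print(detect_watermark(img, key=42))             # False: no watermark present
```

A toy scheme like this would not survive cropping or screenshots, since those break the pixel-level alignment the correlation depends on; closing exactly that robustness gap is what Meta’s learned watermarking model is for.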
The evolving landscape of Meta AI
Meta AI, known for generating photorealistic images and providing detailed responses to a wide range of requests, extends well beyond watermarking. The ‘reimagine’ feature, introduced on Facebook Messenger and Instagram, lets users send and receive AI-generated images, adding a creative twist to the social media experience.
Meta AI also enhances the user experience across Facebook and Instagram, from AI-generated post comment suggestions and search results to creative applications in Shops, making it an increasingly integral part of Meta’s ecosystem.
A stand against AI misuse
The move to include invisible watermarks in AI-generated images is part of a broader effort by Meta to address the ethical challenges posed by AI technology. Recently, AI-powered scam campaigns have surged, with scammers using readily available tools to create fake videos, audio, and images of public figures. This has resulted in significant misinformation and even temporary market upheavals, as witnessed in the incident involving a fake image of an explosion near the Pentagon.
Meta’s move is a proactive measure to mitigate such risks, ensuring that AI-generated content can be easily identified and differentiated from human-generated material. This approach promotes the responsible use of AI and helps safeguard public trust in digital content.
Meta’s ongoing efforts in AI safety
In addition to invisible watermarking, Meta is investing in red teaming as part of its broader push into AI safety. Red teaming involves pressure-testing generative AI research and features to identify potential risks in their outputs. The introduction of the Multi-round Automatic Red-Teaming (MART) framework is a step in this direction, aimed at continuously improving the safety of AI applications.
New AI research paper from Meta — MART, or Multi-round Automatic Red-Teaming is a framework for improving LLM safety that trains an adversarial and target LLM through automatic iterative adversarial red-teaming.
Details in the paper ➡️ https://t.co/lGuJrPDsCD
— AI at Meta (@AIatMeta) November 15, 2023
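The paper has the full training details; as a rough sketch of the loop it describes, the Python outline below shows one automatic red-teaming round. Everything here is a hypothetical stand-in rather than Meta’s implementation: the component names are invented, and the demo replaces real model fine-tuning with a simple block-list so the round-over-round hardening is visible.

```python
# Hypothetical MART-style loop: an adversary proposes attack prompts, a
# target model answers, a safety classifier scores the answers, and both
# sides are updated on the prompts that got through.
from typing import Callable, List

def mart_round(
    generate_attacks: Callable[[int], List[str]],   # adversarial LLM: propose prompts
    respond: Callable[[str], str],                  # target LLM: answer a prompt
    is_unsafe: Callable[[str], bool],               # safety classifier: did it succeed?
    update_adversary: Callable[[List[str]], None],  # reinforce successful attacks
    update_target: Callable[[List[str]], None],     # harden the target against them
    n_prompts: int = 100,
) -> int:
    """Run one red-teaming round and return how many attacks succeeded."""
    prompts = generate_attacks(n_prompts)
    successful = [p for p in prompts if is_unsafe(respond(p))]
    update_adversary(successful)   # the adversary learns what worked...
    update_target(successful)      # ...while the target learns to refuse it
    return len(successful)

# Toy demo with stand-in components (no real models): a block-list plays
# the role of fine-tuning, so round two's attacks all fail.
blocked: set = set()
for r in range(3):
    n = mart_round(
        generate_attacks=lambda k: [f"attack-{i}" for i in range(k)],
        respond=lambda p: "refusal" if p in blocked else "unsafe output",
        is_unsafe=lambda resp: resp == "unsafe output",
        update_adversary=lambda ps: None,            # omitted in this toy
        update_target=lambda ps: blocked.update(ps),
    )
    print(f"round {r}: {n} successful attacks")      # 100, then 0, 0
```

In the real framework both models are LLMs trained iteratively, so the adversary keeps inventing new attacks as the target hardens; the toy above freezes the adversary purely to keep the example short.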
Meta’s development adds a layer of transparency and traceability to AI-generated content, marking a shift in how such content is managed and authenticated. Undetectable to the naked eye yet identifiable through dedicated detection models, the invisible watermark helps distinguish AI-created images from those produced by humans, and it reflects Meta’s effort to address the ethical concerns that come with AI’s potential for producing misleading content.