April 19, 2024
Meta’s New Shield: Invisible Watermarks to Secure AI-Generated Images

In a move to enhance security and combat the potential misuse of artificial intelligence (AI) technology, Meta, formerly known as Facebook, has announced the incorporation of invisible watermarking in all images created using its AI. The measure is part of Meta’s effort to prevent bad actors from exploiting AI-generated content to deceive the public.

The company detailed the upcoming addition of invisible watermarks in a report on December 6, specifically focusing on updates for Meta AI, the company’s virtual assistant. Meta AI, like other AI chatbots, generates images and content based on user prompts. The new watermark is designed to be difficult to remove or manipulate, increasing the transparency and traceability of AI-generated images.

“In the coming weeks, we’ll add invisible watermarking to the image with Meta AI experience for increased transparency and traceability,” Meta announced.

Meta claims that, unlike traditional watermarks, the invisible watermarks applied to images from its “Imagine with Meta AI” feature are resilient to common image manipulations such as cropping, colour changes, and screenshots. The company plans to use a deep-learning model to apply these invisible watermarks to AI-generated images, making them imperceptible to the human eye but detectable with a corresponding model.
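Meta has not published the details of its watermarking model, but the general idea of an invisible watermark can be illustrated with a toy spread-spectrum sketch: a keyed, low-amplitude noise pattern is added to the pixels (too faint for the eye to notice), and the same key is later used to test for the pattern’s presence. The Python example below is purely illustrative and is not Meta’s method; the function names, fixed amplitude, and correlation threshold are assumptions for the sketch, and a production system would use a learned encoder/decoder designed to survive cropping and other edits.

# Toy spread-spectrum invisible watermark (illustrative sketch, NOT Meta's method).
# A keyed pseudorandom pattern stands in for the encoder; correlation with the
# same keyed pattern stands in for the detector model.
import numpy as np

AMPLITUDE = 2.0  # small perturbation so the mark stays invisible to the eye

def embed(image: np.ndarray, key: int) -> np.ndarray:
    """Add a low-amplitude keyed noise pattern to a grayscale image (H, W)."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    marked = image.astype(np.float64) + AMPLITUDE * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(image: np.ndarray, key: int, threshold: float = 0.5) -> bool:
    """Correlate the image with the keyed pattern; a high score suggests the mark is present."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    residual = image.astype(np.float64) - image.mean()
    score = float((residual * pattern).mean())
    return score > threshold

# Usage: mark a synthetic image, then test both the marked and original copies.
original = np.full((256, 256), 128, dtype=np.uint8)
marked = embed(original, key=42)
print(detect(marked, key=42))    # True  - residual correlates with the keyed pattern
print(detect(original, key=42))  # False - unmarked image shows no correlation

Only a party holding the key (here, the detection model’s owner) can reliably check for the mark, which is why detection stays possible even when the watermark is invisible to viewers.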

Initially, the watermarking service will be integrated into images created through the Meta AI experience, but Meta plans to extend the feature to other services within the Meta ecosystem that leverage AI-generated images.

The latest update to Meta AI also introduces the “reimagine” feature for Facebook Messenger and Instagram, allowing users to send and receive AI-generated images. Both messaging services will adopt the invisible watermark feature to bolster the security of shared content.

The move comes amid concerns about the misuse of AI-generated content for deceptive purposes. Various AI tools, including DALL-E and Midjourney, have long applied visible watermarks to generated content, but these can be easily removed by cropping. Meta claims that its invisible watermarks are resistant to such manipulations, providing a more robust solution.

The mainstreaming of generative AI tools has raised alarms about the potential for scams and misinformation. Scammers leverage these tools to create fake videos, audio, and images of public figures, leading to incidents where misleading content circulates widely.

In May, an AI-generated image depicting an explosion near the Pentagon briefly caused a dip in the stock market. Such incidents underscore the need for measures like invisible watermarking to ensure the authenticity and traceability of AI-generated content.

Notably, the human rights advocacy group Amnesty International fell victim to an AI-generated image depicting police brutality, emphasizing the urgency for safeguards against the misuse of AI-generated content. Meta’s introduction of invisible watermarks reflects a proactive approach to addressing the challenges associated with the proliferation of AI-generated content in the digital landscape.


