Can Digital Watermarks Stop Generative AI from Stealing Our Creativity?

The latest executive order from the Biden White House aims to establish a comprehensive framework for the development of generative artificial intelligence, including content authentication. The order directs the use of digital watermarks to flag when digital assets produced by the federal government are computer-generated. Together with similar copy protection technologies, the move could help content creators authenticate their online works more securely in an era when AI-generated misinformation is rampant.

A quick history of watermarking

The origins of analog watermarking techniques trace back to Italy in 1282, where papermakers inserted thin wires into paper molds, creating subtly thinner areas in the sheet visible when held up to light. Initially used for authenticating the origin and production methods of a company’s products, analog watermarks evolved to convey concealed, encoded messages. By the 18th century, governments adopted this technology to prevent currency counterfeiting. Concurrently, color watermark techniques emerged, involving the sandwiching of dyed materials between paper layers.

Although the term “digital watermarking” was coined in 1992, its foundational technology was patented by the Muzak Corporation in 1954. Muzak’s system, in operation until the 1980s, identified music the company owned by using a “notch filter” to block the audio signal at 1 kHz in specific bursts, akin to Morse code, storing identification information in the pattern.
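
To make the mechanism concrete, here is a rough Python sketch of that style of audio marking: a narrow notch filter carves 1 kHz out of the signal in timed bursts, and a detector reads the on/off pattern as an identification code. The sample rate, burst length, and framing below are illustrative guesses, not the 1954 patent’s actual specification.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

# Rough sketch of Muzak-style audio marking: notch out 1 kHz in timed
# bursts so a detector can read the on/off pattern as an ID code.
FS = 44_100                              # samples per second
b, a = iirnotch(w0=1_000, Q=30, fs=FS)   # narrow notch centered at 1 kHz

def embed_id(audio: np.ndarray, bits: list[int], burst_sec: float = 0.2) -> np.ndarray:
    """Notch the signal during '1' bursts; leave it untouched for '0'."""
    out = audio.copy()
    n = int(burst_sec * FS)
    for i, bit in enumerate(bits):
        if bit:
            seg = slice(i * n, (i + 1) * n)
            out[seg] = lfilter(b, a, audio[seg])
    return out

audio = np.random.randn(FS * 2).astype(np.float32)  # stand-in for a song
marked = embed_id(audio, bits=[1, 0, 1, 1, 0])      # a 5-bit identifier
```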

For decades, advertisement monitoring and audience measurement firms such as the Nielsen Company have employed watermarking techniques to tag the audio tracks of television shows, monitoring American households’ viewing habits. These steganographic methods have been integrated into the modern Blu-ray standard through systems like Cinavia, as well as into government applications such as authenticating driver’s licenses, national currencies, and other sensitive documents. The Digimarc Corporation, for instance, developed a watermark for packaging that prints a product’s barcode almost invisibly all over the box, enabling any digital scanner within line of sight to read it. The technology has found applications ranging from brand anti-counterfeiting to improving the efficiency of material recycling.

The here and now

Contemporary digital watermarking follows similar principles, discreetly embedding additional information into images, video, or audio using specialized encoding software. These machine-readable watermarks are typically imperceptible to humans. Unlike protections such as product keys or software dongles, watermarks don’t actively prevent unauthorized alteration or duplication of content. Instead, they serve as a record, indicating the content’s origin or identifying the copyright holder.
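
As a minimal illustration of how information can be hidden imperceptibly, the sketch below tucks an identification bit string into the least-significant bits of an image’s pixels. Commercial watermarks use far more robust techniques that survive compression, scaling, and cropping; this is only the simplest demonstration of the principle.

```python
import numpy as np

def embed(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide `bits` in the least-significant bit of the first len(bits) bytes."""
    flat = pixels.flatten()                # flatten() returns a copy
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit   # overwrite only the lowest bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n: int) -> list[int]:
    """Read the first n hidden bits back out."""
    return [int(v & 1) for v in pixels.flatten()[:n]]

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(image, [1, 0, 1, 1, 0, 0, 1, 0])        # e.g. one ID byte
assert extract(marked, 8) == [1, 0, 1, 1, 0, 0, 1, 0]
# The change is at most 1/255 per channel -- invisible to a viewer.
```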

However, the system has its imperfections. Dr. Ben Zhao, Neubauer Professor of Computer Science at the University of Chicago, emphasized that no effective measures currently exist, cryptographic or regulatory, to prevent copyrighted works from being used to train generative AI models. He pointed to instances where companies simply ignored opt-out lists, leaving users with no practical way to opt out.

Zhao says that while the White House’s executive order is “ambitious and covers tremendous ground,” the plans outlined so far lack substantive “technical details on how it would actually achieve the goals it set.”

He points out that many companies, not subject to regulatory or legal pressure, may not voluntarily watermark their generative AI output. In an adversarial environment where stakeholders are motivated to avoid or bypass regulations and oversight, voluntary measures prove ineffective.

Given the profit-driven nature of commercial companies, Zhao argues, they will sidestep regulation whenever doing so serves their interests. He also raises the possibility of a future presidential administration dismantling Biden’s executive order and its associated federal infrastructure, since an executive order lacks the constitutional standing of congressional legislation. And the current state of Congress, marked by deep polarization and dysfunction, makes meaningful AI legislation highly unlikely in the near future. Anu Bradford, a law professor at Columbia University, notes that enforcement of watermarking schemes has so far relied primarily on informal commitments from industry players.

How Content Credentials work

In response to the slow pace of government initiatives, industry-driven alternatives have become imperative. Notable entities, including Microsoft, the New York Times, CBC/Radio-Canada, and the BBC, initiated Project Origin in 2019 to safeguard content integrity across various platforms. Simultaneously, Adobe and its partners launched the Content Authenticity Initiative (CAI), focusing on the creator’s viewpoint. Eventually, CAI and Project Origin joined forces to establish the Coalition for Content Provenance and Authenticity (C2PA). Out of this collaborative effort emerged Content Credentials (CR), introduced by Adobe at its Max event in 2021.

CR attaches additional information to images upon export or download, packaged as a cryptographically signed manifest. The manifest records details drawn from the image or video header, such as the creator, capture location, timestamp, device information, and whether a generative AI system like DALL-E or Stable Diffusion was used, and websites can check the content against the provenance claims it makes. Combined with watermarking technology, CR offers a distinctive authentication method that resists easy removal. EXIF and other metadata (technical details added automatically by the capturing device or software) are routinely stripped when files are shared on social media platforms, but cryptographic file signing keeps CR’s claims verifiable, an approach with some resemblance to blockchain technology.
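
The real C2PA manifest is a signed CBOR/JUMBF structure backed by X.509 certificate chains, but a simplified sketch conveys the idea: serialize the provenance claims, sign them, and let anyone holding the public key confirm nothing has changed. The field names below are hypothetical stand-ins, not the actual C2PA schema.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical provenance manifest; the real C2PA format is a signed
# CBOR/JUMBF structure with certificate chains, not plain JSON.
manifest = {
    "creator": "Jane Doe",
    "captured_at": "2023-11-02T14:31:00Z",
    "device": "Leica M11-P",
    "generative_ai": None,   # or e.g. "DALL-E", "Stable Diffusion"
}
payload = json.dumps(manifest, sort_keys=True).encode()

signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(payload)

# Anyone holding the matching public key can confirm the claims haven't
# been altered since signing; a tampered payload raises InvalidSignature.
signing_key.public_key().verify(signature, payload)
```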

Metadata typically struggles to survive the various workflows content passes through as it circulates online, because many systems lack built-in support for reading or preserving that information, Digimarc Chief Product Officer Ken Sickles explained to Engadget.

Digimarc Chief Technology Officer Tony Rodriguez illustrated the concept with an envelope analogy. The valuable content being transmitted sits inside the envelope, with the watermark embedded directly in its pixels, audio, or other media elements; metadata and any additional information are relegated to the outside of the “envelope,” where they are easily lost or stripped away.
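
The fragility Rodriguez and Sickles describe is easy to demonstrate: a plain re-encode, which many upload pipelines perform, silently discards the EXIF block unless it is explicitly carried over. A quick sketch (the filenames are placeholders):

```python
from PIL import Image

# Open a photo carrying EXIF metadata (device, timestamp, and so on).
original = Image.open("photo.jpg")
print(dict(original.getexif()))          # EXIF tag IDs mapped to values

# Re-save without passing exif=..., as a naive upload pipeline might.
original.save("reuploaded.jpg")
print(dict(Image.open("reuploaded.jpg").getexif()))   # typically empty

# A watermark living in the pixels themselves survives this round trip,
# which is Rodriguez's point about what goes inside the "envelope."
```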

Even if someone manages to remove the watermark (which, as it turns out, is not especially difficult: simply screenshot the image and crop out the icon), the credentials can be reattached using Verify. The tool employs machine vision algorithms to compare uploaded images against its repository and, on finding a match, reapplies the credentials. Wherever the image turns up online, users can click the CR icon to reveal the full manifest, independently assess the information, and make an informed decision about whether to trust the content.
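
Adobe hasn’t published the matching algorithm Verify uses, but perceptual hashing is a common way to recognize an image that has been screenshotted, resized, or re-encoded. A sketch of the general idea using a difference hash:

```python
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Difference hash: encode whether each pixel outshines its right neighbor."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def distance(a: int, b: int) -> int:
    """Hamming distance; a handful of differing bits still counts as a match."""
    return bin(a ^ b).count("1")

# A screenshot of the same picture should land within a few bits of the
# original, letting stored credentials be re-associated with the copy.
print(distance(dhash("original.jpg"), dhash("screenshot.png")))
```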

Sickles foresees these authentication systems functioning in complementary layers, akin to a home security setup that combines locks, deadbolts, cameras, and motion sensors to enhance coverage. “That’s the beauty of Content Credentials and watermarks together,” Sickles remarked. He highlighted that the combination becomes a significantly more robust foundation for authenticity and establishing the origin of an image compared to their individual capabilities.

Digimarc is proactively providing its watermark detection tool to generative AI developers and is in the process of integrating the Content Credentials standard into its pre-existing Validate online copy protection platform.

The standard is already making its way into commercial products: the Leica M11-P automatically attaches a CR credential to images at capture. The New York Times has explored its use in journalism, Reuters employed it for its ambitious 76 Days feature, and Microsoft has incorporated it into Bing Image Creator and the Bing AI chatbot. Sony is reportedly working on integrating the standard into its Alpha 9 III digital cameras, with firmware updates enabling CR on the Alpha 1 and Alpha 7S III slated for 2024. CR is also available across Adobe’s extensive suite of photo and video editing tools, including Illustrator, Adobe Express, Stock, and Behance. Adobe’s generative AI, Firefly, automatically includes non-personally identifiable information in a CR for certain features like generative fill (acknowledging that the feature was used without identifying the user), with other aspects being opt-in.

Nevertheless, the C2PA standard and front-end Content Credentials are still in early development and currently challenging to locate on social media. “I think it really comes down to the widespread adoption of these technologies and where it’s adopted, both from a perspective of attaching the content credentials and inserting the watermark to link them,” Sickles remarked.

Nightshade: The CR alternative that’s deadly to databases

Certain security researchers, growing impatient with the slow development of laws and industry standards, have taken the initiative to tackle copy protection challenges themselves. The SAND Lab at the University of Chicago, for instance, has devised two robust copy protection systems tailored specifically for countering generative AIs.

One notable creation from Zhao and his team is Glaze, a system that disrupts a generative AI’s attempts at mimicry by exploiting adversarial examples. Glaze alters the pixels of an artwork in a way that is imperceptible to the human eye but looks dramatically different to a machine vision system. Generative AI systems trained on these “glazed” images lose the ability to precisely replicate the intended artistic style: cubism might come out cartoonish, and abstract expression might morph into anime. The technique holds particular promise for well-known artists whose distinctive styles are frequently imitated, helping safeguard their commercially valuable artistic identities.
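
Glaze’s exact method isn’t reproduced here, but the general shape of a style cloak is a standard adversarial-example loop: nudge the image so a feature extractor sees something else, while a strict pixel budget keeps the change invisible to people. In this hypothetical sketch, the ResNet backbone, loss, and budget are all illustrative stand-ins, not Glaze’s published algorithm:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Generic adversarial-perturbation loop, not Glaze's published method.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
features = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop classifier

def cloak(image: torch.Tensor, decoy: torch.Tensor,
          epsilon: float = 4 / 255, steps: int = 50, lr: float = 1 / 255) -> torch.Tensor:
    """Shift `image`'s feature representation toward `decoy`'s while an
    L-infinity budget of `epsilon` keeps the pixel change near-invisible."""
    target = features(decoy).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(features(image + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # signed-gradient step toward decoy
            delta.clamp_(-epsilon, epsilon)   # enforce the perceptual budget
            delta.grad.zero_()
    return (image + delta).detach()

# Usage: cloaked = cloak(artwork, decoy_style_image), both (1, 3, 224, 224) tensors.
```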

While Glaze is designed to proactively deflect illicit data scrapers, SAND Lab’s latest tool, Nightshade, takes a more punitive approach. Like Glaze, Nightshade subtly alters the pixels of an image, but instead of merely confusing the models trained on it, a poisoned image corrupts the entire training dataset it is ingested into. Developers must manually remove each damaging image to resolve the problem; otherwise, the system will keep retraining on the compromised data and perpetuating the same errors.

Nightshade serves as a “last resort” for content creators and cannot be employed as an attack vector. Zhao likens it to putting hot sauce in your lunch to deter whoever keeps stealing it from the office fridge. He expresses little sympathy for the owners of models that Nightshade damages: companies that intentionally bypass opt-out lists and do-not-scrape directives know exactly what they’re doing, since downloading and training on someone’s content requires deliberate effort.
