Author Topic: Glaze: Protection for images from being incorporated into training sets  (Read 4680 times)

Jubal

So today I came across Glaze, the first widely and publicly accessible anti-generative-model trap tool I've seen:

https://glaze.cs.uchicago.edu/

The principle is that it makes very subtle tweaks to an image, ones that are hard for humans to detect, so that AIs read it as stylistically very different from how we see it: it uses the gap between model perception and human perception to keep the base image intact while thoroughly confusing the model. Glaze mainly targets style, making it harder to fulfil "X in the style of Y" requests in systems like Stable Diffusion and Midjourney.
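Glaze's actual implementation is more sophisticated than this, but the core trick is roughly an adversarial perturbation in a style encoder's feature space. A minimal sketch of the idea in PyTorch, where the encoder, decoy style vector, and pixel budget are all stand-ins rather than Glaze's real components:

Code
import torch
import torch.nn.functional as F

def style_cloak(image, encoder, decoy_style, eps=0.03, steps=200, lr=0.01):
    """Nudge `image` so that `encoder` reads it as close to `decoy_style`,
    while keeping every pixel within +/-eps of the original.

    image:       (1, 3, H, W) tensor in [0, 1]
    encoder:     any differentiable callable mapping images to style features
    decoy_style: feature vector of a very different style
    """
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        cloaked = (image + delta).clamp(0.0, 1.0)
        # Pull the model's view of the image toward the decoy style...
        loss = F.mse_loss(encoder(cloaked), decoy_style)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # ...while keeping the change too small for humans to notice.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (image + delta).detach().clamp(0.0, 1.0)

The real tool is much more careful about what counts as "imperceptible" than a plain per-pixel clamp, but the tension is the same: move a long way in the model's feature space while moving as little as possible in the space humans actually see.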

I think we're going to see a ton more of this stuff for style protection, but also similar techniques for "gaming" LLM and image model outputs (once people work out how to maximise product placement from these models, someone will make an advertising fortune). And there'll be outright "malware" inputs that try to break the training set or get the AI to reproduce damaging code or whatever. There are some big vulnerabilities and possibilities in this area that people haven't really been considering yet, because the tech is so new that everyone is only just reacting to it.

A quote from Prof. Ben Zhao, who worked on Glaze, via TechCrunch:
Quote
What we do is we try to understand how the AI model perceives its own version of what artistic style is. And then we basically work in that dimension — to distort what the model sees as a particular style. So it’s not so much that there’s a hidden message or blocking of anything… It is, basically, learning how to speak the language of the machine learning model, and using its own language — distorting what it sees of the art images in such a way that it actually has a minimal impact on how humans see. And it turns out because these two worlds are so different, we can actually achieve both significant distortion in the machine learning perspective, with minimal distortion in the visual perspective that we have as humans.

This comes from a fundamental gap between how AI perceives the world and how we perceive the world. This fundamental gap has been known for ages. It is not something that is new. It is not something that can be easily removed or avoided. It’s the reason that we have a task called ‘adversarial examples’ against machine learning. And people have been trying to fix that — defend against these things — for close to 10 years now, with very limited success,” he adds. “This gap between how we see the world and how AI model sees the world, using mathematical representation, seems to be fundamental and unavoidable… What we’re actually doing — in pure technical terms — is an attack, not a defence. But we’re using it as a defence.
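For anyone who hasn't run into "adversarial examples" before: the classic demonstration is the fast gradient sign method (Goodfellow et al.), a single gradient step that often flips a classifier's answer while barely changing the image. A toy sketch, assuming any differentiable PyTorch classifier that returns logits:

Code
import torch
import torch.nn.functional as F

def fgsm(model, image, true_label, eps=0.007):
    """One-step adversarial example: nudge each pixel a tiny fixed amount
    in whichever direction most increases the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = (image + eps * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()

The perturbation is invisible to us but decisive for the model, which is exactly the gap Zhao is describing; Glaze just points that same machinery at style rather than at classification labels.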
The duke, the wanderer, the philosopher, the mariner, the warrior, the strategist, the storyteller, the wizard, the wayfarer...

Jubal

Re: Glaze: Protection for images from being incorporated into training sets
« Reply #1 on: October 24, 2023, 12:02:14 PM »
An update on these systems: some impressive work on more advanced data poisoning methods that can totally scramble what's generated and manipulate prompt outcomes:
https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/

Quote
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at computer security conference Usenix.   
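From the article and the paper's description, the mechanism is close to Glaze's cloaking, just aimed at concepts rather than styles: you publish images that look like one thing to people but whose features look like something else to the model, paired with captions for the first thing. A rough sketch of building a single poisoned sample, with a generic image encoder standing in for the model's actual one (this is not the real Nightshade code):

Code
import torch
import torch.nn.functional as F

def make_poison(dog_image, cat_anchor, encoder, eps=0.05, steps=300, lr=0.01):
    """Return an image that still looks like `dog_image` to a person but
    whose features match `cat_anchor` as closely as possible. Scraped into
    a training set with its original "dog" caption, samples like this pull
    the model's notion of "dog" toward cats."""
    target = encoder(cat_anchor).detach()
    delta = torch.zeros_like(dog_image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poisoned = (dog_image + delta).clamp(0.0, 1.0)
        loss = F.mse_loss(encoder(poisoned), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change hard to spot

    return (dog_image + delta).detach().clamp(0.0, 1.0)

Do that across enough "dog" images and the scrapers do the rest of the work for you.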


The worrying but interesting thing with this comes when you consider what bad actors could do: carpet open image databases with poisoning prompt-tweakers that will force as many outputs as possible to include the brand name Pepsi, or the Ukrainian or Russian flag colours, or anything else.
The duke, the wanderer, the philosopher, the mariner, the warrior, the strategist, the storyteller, the wizard, the wayfarer...

Pentagathus

Re: Glaze: Protection for images from being incorporated into training sets
« Reply #2 on: October 25, 2023, 05:44:22 PM »
Hell yeah dudes, rage against the machines!

Othko97

Re: Glaze: Protection for images from being incorporated into training sets
« Reply #3 on: October 25, 2023, 09:25:35 PM »
Quote
The worrying but interesting thing with this comes when you consider what bad actors could do: carpet open image databases with poisoning prompt-tweakers that will force as many outputs as possible to include the brand name Pepsi, or the Ukrainian or Russian flag colours, or anything else.

This could certainly form a vector for harassment or unsavoury activity -- it's not hard to imagine someone training the model to associate a person they don't like with something unpleasant. It would be a shame for open image databases to become poisoned like that, although my understanding of the attack (from a skim of the paper) is that it requires outputs from the model to fool it in further training, so actually pulling this off at scale for a targeted redefinition may be tricky.

Frankly though, I feel that preventing this is absolutely in the hands of the companies training and producing the models. The technology is completely reliant on sourcing good input data, and by mass scraping without regard for the rights of the original creators they have brought this upon themselves. The whole business reminds me strongly of Shoshana Zuboff's The Age of Surveillance Capitalism, wherein she argues that big tech has a pattern of infringing on people's privacy and rights faster than regulation can keep up, then spinning the loss of freedoms experienced by the masses as merely the price paid for the "brilliant" new technology.
I am Othko, He who fell from the highest of places, Lord of That Bit Between High Places and Low Places Through Which One Falls In Transit Between them!