The rumblings I'm hearing are that this a) barely works with last-gen training processes, b) does not work at all with more modern training processes (GPT-4V, LLaVA, even BLIP2 labelling [1]), and c) would not be especially challenging to mitigate against even should it become more effective and popular. The authors' previous work, Glaze, also does not seem to be very effective despite dramatic proclamations to the contrary, so I think this might be a case of overhyping an academically interesting but real-world-impractical result.
[1]: Courtesy of /u/b3sn0w on Reddit: https://imgur.com/cI7RLAq https://imgur.com/eqe3Dyn https://imgur.com/1BMASL4
The screenshots you sent in [1] are inference, not training. You need to get a Nightshaded image into the training set of an image generator in order for this to have any effect. When you give an image to GPT-4V, Stable Diffusion img2img, or anything else, you're not training the AI - the model is completely frozen and does not change at all[0].
I don't know if anyone else is still scraping new images into the generators. I've heard somewhere that OpenAI stopped scraping around 2021 because they're worried about training on the output of their own models[1]. Adobe Firefly claims to have been trained on Adobe Stock images, but we don't know if Adobe has any particular cutoffs of their own[2].
If you want an image that screws up inference - i.e. one that GPT-4V or Stable Diffusion will choke on - you want an adversarial image. I don't know if you can craft adversarial examples against a model whose weights you don't have, though I've heard that optimizing an attack against multiple independent models at once generalizes well enough to really screw shit up[3] (rough sketch of that below, after the footnotes).
[0] All the learning capability of text generators comes from the fact that they have a context window, but that only provides a short-term memory of 2048 tokens. They have no other memory capability.
[1] The scenario of what happens when you do this is fancifully called Habsburg AI. The model learns from its own biases, reinforcing them into stronger biases, while forgetting everything else.
[2] It'd be particularly ironic if the only thing Nightshade harms is the one AI generator that tried to be even slightly ethical.
[3] At the extremes, these adversarial images fool humans. Though the study that did this intentionally showed the images only for a brief moment, the idea being that a short exposure makes human vision behave like a feed-forward neural network with no recurrent computation pathways. If you look at them longer, it's obvious that it's a picture of one thing edited to look like another.
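To make the "completely frozen" point from footnote [0] concrete, here's a minimal sketch using a stock torchvision classifier as a stand-in (any pretrained model would do, it's not any particular image generator): showing it an image at inference leaves every weight untouched.

```python
# Minimal sketch: inference does not touch the weights. The model here is an
# arbitrary stand-in, not any particular image generator.
import torch
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT)
model.eval()                                  # inference mode

before = {k: v.clone() for k, v in model.state_dict().items()}

with torch.no_grad():                         # no gradients, so no way to learn
    _ = model(torch.randn(1, 3, 224, 224))    # "showing" the model an image

after = model.state_dict()
print(all(torch.equal(before[k], after[k]) for k in before))   # True: nothing changed
```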
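And a rough sketch of the "optimize against multiple independent models" idea from footnote [3]: a single targeted FGSM-style step whose loss is summed over an ensemble, so the perturbation isn't tuned to any one network. The models, target class, and epsilon are arbitrary illustrative choices, not a recipe from the study.

```python
# Rough sketch of an ensemble adversarial example (targeted, FGSM-style).
# Attacking several independent models at once tends to transfer better to
# models you don't have weights for.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, vgg16, ResNet50_Weights, VGG16_Weights

ensemble = [
    resnet50(weights=ResNet50_Weights.DEFAULT).eval(),
    vgg16(weights=VGG16_Weights.DEFAULT).eval(),
]

image = torch.rand(1, 3, 224, 224)      # stand-in for a (normalized) input image
target_class = torch.tensor([207])      # the wrong label we want every model to predict
epsilon = 4 / 255                       # perturbation budget

x = image.clone().requires_grad_(True)
loss = sum(F.cross_entropy(m(x), target_class) for m in ensemble)
loss.backward()

# Targeted step: move *down* the gradient of the target-class loss.
adversarial = (x - epsilon * x.grad.sign()).clamp(0, 1).detach()
```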
Hey, you know what might not be AI-generated post-2021? Almost everything run through Nightshade. So once it's defeated, which is pretty likely, artists will have effectively tagged their own work for inclusion.
I mean that's more or less status quo isn't it? Big business does what it wants, common people can get fucked if they don't like it. Same as it ever was.
That's exactly right. It is just the variety of new ways in which common people get fucked that is dispiriting, with seemingly nothing capable of moving in the opposite direction.
Modern generative image models are trained on curated data, not raw internet data. Sometimes the captions are regenerated to fit the image better. Only high quality images with high quality descriptions.
I wouldn't call what Stable Diffusion et al are trained on "high quality". You need only look through the likes of LAION to see the kind of captions and images they get trained on.
It's not random but it's not particularly curated either. Most of the time, any curation is done afterwards.
Correct me if I'm wrong, but I understand image generators as relying on auto-labeled images to understand what means what, and the point of this attack is to make the auto-labelers mislabel the image - but as the top-level comment said, it's seemingly not tricking newer auto-labelers.
Not all are auto-labelled; some are hand-labelled, and some are initially labelled with something like CLIP/BLIP/booru tags and then corrected a bit by hand. The newest thing, though, is using LLMs with image support like GPT-4 to label the images, which does a much better job most of the time.
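For a concrete sense of what that auto-labelling step looks like, here's a sketch using a public BLIP captioning checkpoint via Hugging Face transformers. Whether any given dataset was labelled with this exact model is an assumption, and the file name is hypothetical; the pattern is the point.

```python
# Sketch of auto-captioning an image with BLIP (transformers).
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("some_artwork.png").convert("RGB")   # hypothetical input file
inputs = processor(images=image, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(ids[0], skip_special_tokens=True))
```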
Your understanding of the attack was the same as mine: it injects just the right kind of pixel perturbation to throw off the auto-labellers and misdirect what they detect, causing the tags to get shuffled around (a rough sketch of the idea is below).
Also, on Reddit today, some Stable Diffusion users are already starting to train on Nightshaded images so they can use the result as a negative model; that might or might not work, we'll have to see.
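The sketch mentioned above: the general shape of the attack, as I understand it, is to optimize a small perturbation so that a CLIP-style encoder embeds the image near a different concept. This mirrors the idea only, not Nightshade's actual algorithm; the checkpoint, target text, step count, and loss weights are illustrative guesses.

```python
# Very rough sketch of the poisoning idea: nudge an image's pixels so a
# CLIP-style encoder embeds it near a *different* concept ("cat" here),
# while keeping the pixel changes small.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
model.requires_grad_(False)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = torch.rand(1, 3, 224, 224)                      # stand-in for a dog picture
text = processor(text=["a photo of a cat"], return_tensors="pt", padding=True)
with torch.no_grad():
    target = model.get_text_features(**text)
    target = target / target.norm(dim=-1, keepdim=True)

delta = torch.zeros_like(image, requires_grad=True)     # the perturbation being optimized
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(100):
    feat = model.get_image_features(pixel_values=(image + delta).clamp(0, 1))
    feat = feat / feat.norm(dim=-1, keepdim=True)
    loss = -(feat * target).sum() + 0.1 * delta.abs().mean()   # pull toward "cat", keep delta small
    opt.zero_grad()
    loss.backward()
    opt.step()

poisoned = (image + delta).clamp(0, 1).detach()          # looks like a dog, embeds like a cat
```

Whether a perturbation like this survives recaptioning by a stronger model such as GPT-4V is exactly the question the top comment raises.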
Even if no new images are being scraped to train the foundation text-to-image models, you can be certain that there is a small horde of folk still scraping to create datasets for training fine-tuned models, LoRAs, Textual Inversions, and all the new hotness training methods still being created each day.
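For context on what that fine-tuning looks like in practice, here's a rough sketch of attaching a LoRA to a Stable Diffusion UNet with peft: the base weights stay frozen and only small adapter matrices on the attention projections get trained on the scraped images. The checkpoint and module names are common choices, not a claim about anyone's specific pipeline.

```python
# Sketch: attach a LoRA to a Stable Diffusion UNet; only the adapters train.
from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.requires_grad_(False)                   # base model stays frozen

lora = LoraConfig(
    r=8,
    lora_alpha=8,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # SD attention projections
)
unet = get_peft_model(unet, lora)
unet.print_trainable_parameters()            # only the LoRA weights are trainable
# ...then run the usual denoising-loss training loop over the scraped dataset.
```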
If it doesn't work during inference, I really doubt it will have any of the intended effect during training; there is simply too much signal. The added adversarial noise works on the small, frozen proxy model they used (the CLIP image encoder, I think), but it doesn't work on a larger model trained on a different dataset. If there is any effect during training, it will probably just be the model learning that it can't take those shortcuts (the artifacts working on the proxy model showcase gaps in its visual knowledge).
Generative text-to-image models have an encoder part (explicit or not) that extracts the semantics from the noised image. If the auto-labelers can correctly label the samples, then an encoder trained on both clean and adversarial images will learn not to take the same shortcuts the proxy model took, making the model more robust. I can't see an argument where this should be a negative thing for the model.
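A quick way to poke at the "does it transfer past the proxy model?" question is to embed the clean and the shaded version of an image with two different encoders and see how far apart they land. A sketch, with hypothetical file names and two public CLIP checkpoints standing in for "proxy" and "larger" models:

```python
# Sketch: compare clean vs. shaded embeddings under different image encoders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def similarity(model_name, clean_path, shaded_path):
    model = CLIPModel.from_pretrained(model_name).eval()
    processor = CLIPProcessor.from_pretrained(model_name)
    images = [Image.open(p).convert("RGB") for p in (clean_path, shaded_path)]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return (feats[0] @ feats[1]).item()          # cosine similarity, 1.0 = identical

for name in ["openai/clip-vit-base-patch32", "openai/clip-vit-large-patch14"]:
    print(name, similarity(name, "clean.png", "shaded.png"))

# If the larger / differently-trained encoder still scores ~0.99, the
# perturbation probably isn't shifting what that model "sees".
```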
The context windows of LLMs are now significantly larger than 2048 tokens, and there are clever ways to autopopulate the context window to remind the model of things.
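As a toy illustration of "autopopulating" the context window, here's the crude keyword-overlap version; real systems usually use embedding search, but the shape of the trick is the same and everything here is made up for the example.

```python
# Toy sketch: pick the stored notes that best match the question and prepend
# them to the prompt before sending it to the model.
def build_prompt(question: str, notes: list[str], k: int = 2) -> str:
    q_words = set(question.lower().split())
    scored = sorted(notes, key=lambda n: len(q_words & set(n.lower().split())), reverse=True)
    context = "\n".join(scored[:k])
    return f"Relevant notes:\n{context}\n\nQuestion: {question}"

notes = [
    "Nightshade perturbs images to poison text-to-image training.",
    "Glaze is the same group's earlier style-cloaking tool.",
    "The 2048-token limit is long out of date for current models.",
]
print(build_prompt("What does Nightshade do to training data?", notes))
```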
The animation when you change images makes it harder to see the difference. I opened the three images each in its own tab, and the differences are more apparent when you switch between them instantly.
If you have to have both versions and instantly toggle between them to notice the difference, then it sounds like it's doing its job well.
Kid me found 13 FPS in games to be a smooth and cursive experience.
Current me thinks 60 FPS is laggy.
Standards differ. I saw Glazed images in the wild and was wondering why they had so many JPEG artifacts, until I saw one of those anti-AI + Glaze posts on the artist's profile.
That is a great mystery; to me it's as clear as if someone had pasted a cartoon dog onto the image. It's extremely blatant and impossible for my normal human pattern recognition to ignore.
I'm looking at them on my iPhone 14 Pro and I am having a hard time seeing any meaningful difference that changes the way the artwork registers with me.
I can't really imagine a case where, had I only seen the AI-edited one, I would have had any different reaction to the piece of art than if I had only seen the original.
But now that I double-check, I was comparing with the images zoomed to 200%. On desktop the artifacts are also noticeable at 100%, but not nearly as bad as in my previous comment.
I didn't see it immediately either, but there's a ton of added noise. The most noticeable bit for me was near the standing person's bent elbow, but there's a lot more that becomes obvious when flipping back and forth between browser tabs instead of swiping on Twitter.
I was on desktop and it looks like pretty heavy jpeg compression. Doesn't completely destroy the image, but it's pretty noticeable when blown up large enough.
Maybe it's more about "protecting" images that artists want to publicly share to advertise work, but it's not appropriate for final digital media, etc.
Seems obvious that the people stealing would be adjusting their process to negate these kinds of countermeasures all the time. I don't see this as an arms race the artists are going to win. Not like the LLM folks can consider actually paying their way...the business plan pretty much has "...by stealing everything we can get our hands on..." in the executive summary.
I think the point is that they're akin to a watermark.
Even before the current AI boom, plenty of artists have wanted to showcase their work/prove that it exists without necessarily making the highest quality original file public.
For example in accounts on image sites that are exposed to suspected scrapers but not to others. Scrapers will still see the real data, but they'll also run into stuff designed to mix up the training process.