The poisoned images aren't intended to be viewed; they're intended to be scraped and to pass a basic human screen. Denoising wouldn't help, because you'd have to denoise the entire dataset. The whole point is that these images are virtually indistinguishable from typical training set examples, yet a small number of them can push prompt frequencies around at will.

> the whole point is that these images are virtually indistinguishable from typical training set examples

I'll repeat that point for clarity. After going over the paper again, denoising shouldn't affect this attack. What makes it work is that plausible poisoned images aren't detected by human or AI discriminators (yet).
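For intuition, here's a toy sketch of that kind of attack (my own construction, not the paper's actual algorithm): you search for a tiny, norm-bounded pixel perturbation that leaves the image looking unchanged to a human but moves its *features* toward a different target concept. The random linear map below is a stand-in for a real image encoder; all names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 64))      # toy stand-in for a feature extractor
x = rng.random(64)                    # "clean" image, flattened pixels
f_target = rng.standard_normal(8)     # features of the concept to poison toward

eps = 0.03                            # perceptibility budget (L-infinity)
lr = 0.01
delta = np.zeros_like(x)

# Projected gradient descent: minimize ||W(x + delta) - f_target||^2
# while keeping every pixel change within +/- eps.
for _ in range(500):
    grad = 2 * W.T @ (W @ (x + delta) - f_target)
    delta -= lr * grad
    delta = np.clip(delta, -eps, eps)  # keep the change imperceptible

before = np.linalg.norm(W @ x - f_target)
after = np.linalg.norm(W @ (x + delta) - f_target)
print(f"feature distance to target: {before:.2f} -> {after:.2f}")
print(f"max pixel change: {np.abs(delta).max():.3f}")
```

The point the toy makes: the pixel change is capped at 0.03 per pixel (invisible on a basic screen), yet the feature-space distance to the target concept drops noticeably, which is the property a discriminator would have to catch.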