A team of researchers from the University of Chicago has developed a new AI tool called Nightshade to prevent online artwork from being used to train generative AI models without consent.
Nightshade addresses this copyright problem by transforming images into "poison" samples, making unauthorized model training costlier. Unlike the team's earlier tool Glaze, which defends against style mimicry, Nightshade is an offensive tool that distorts feature representations inside generative AI image models.
The team previously developed Glaze, a tool that protects artists from having their styles absorbed by AI models. Glaze cloaks images, preventing models from accurately learning an artist's distinctive features and hindering the creation of artificial copies.
The free tool imposes a small incremental cost on every image scraped and used for training without authorization, deterring model trainers who ignore copyright notices and opt-out directives. It optimizes changes to the image that are imperceptible to the human eye but significantly alter the AI model's perception. For example, an image of a cow may appear unchanged to humans but be interpreted by the model as a purse in the grass. Nightshade's effects persist despite standard image alterations and do not depend on steganography.
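To give a rough sense of how this kind of feature-space poisoning works, the sketch below nudges an image so that its embedding drifts toward a different concept while the pixel change stays small. This is not the Nightshade implementation: it uses a public CLIP image encoder as a stand-in feature extractor, a plain additive perturbation budget instead of Nightshade's perceptual constraint, and a hypothetical `poison` function; model names and parameters are assumptions for illustration only.

```python
# Illustrative sketch of feature-space poisoning (NOT the official Nightshade code).
# Assumption: a CLIP image encoder stands in for the target model's feature extractor.
import torch
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def poison(source_image, target_image, steps=200, lr=0.01, eps=8 / 255):
    """Nudge `source_image` (e.g. a cow) so its embedding moves toward
    `target_image`'s concept (e.g. a purse), keeping the change within `eps`."""
    x = processor(images=source_image, return_tensors="pt")["pixel_values"].to(device)
    t = processor(images=target_image, return_tensors="pt")["pixel_values"].to(device)
    with torch.no_grad():
        target_feat = model.get_image_features(pixel_values=t)

    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = model.get_image_features(pixel_values=x + delta)
        # Pull the poisoned image's features toward the target concept.
        loss = 1 - torch.nn.functional.cosine_similarity(feat, target_feat).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small (budget applied in the processor's
        # normalized space here, purely for simplicity of the sketch).
        delta.data.clamp_(-eps, eps)
    return (x + delta).detach()
```

A model trained on many such poisoned samples would see "cow" images whose features look like purses, which is the intuition behind the cow-in-the-grass example above; the real tool's optimization and feature targets differ from this toy version.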
Unlike Glaze, Nightshade is a group-level defense, disrupting models that scrape images without consent. While Glaze protects individual artists against style mimicry attacks, Nightshade safeguards all artists collectively. The two tools can be used together for comprehensive protection. However, the changes Nightshade makes are more noticeable in art with flat colors and smooth backgrounds.