The new tool is now available for download and lets creators disrupt the ability of AI models to generate images that mimic their work.
Last weekend, the fifth version of Midjourney launched, producing impressive results. Beyond the celebrations, however, numerous legal issues surround the AI image generators that have proliferated over the past year like mushrooms after the rain. The main concern is copyright infringement resulting from their use.
Not surprisingly, Midjourney, Stability AI (the company behind Stable Diffusion), and the DeviantArt website have been named in a class-action lawsuit claiming that their tools are used to infringe the intellectual property of several artists.
Now, American researchers have presented a solution to this problem, at least from the creators’ side.
The blur that will confuse the models

Meet Glaze, a new tool developed by researchers at the University of Chicago and designed for artists who want to stop image generators from creating images that imitate their styles.
Generators can pick up a style in two ways: by scraping an artist's images and paintings from the web and learning the style from them, or through dedicated training on a database full of art, as DeviantArt's managers did when training their DreamUp model.
It should be noted that, unlike these three, this could not happen with DALL-E 2, because OpenAI built guardrails into the model that prevent copyright infringement and the creation of images of real people.
Glaze aims to protect artists' styles from being copied by generative models. The researchers explain that artists pass their images through the tool, which learns their style and identifies the stylistically important features of each creation.
After this initial step, Glaze produces a layer of "smoothing" on top of the artwork: slight changes that are invisible to the naked eye, so that anyone who tries to fine-tune one of the generators on those artists' work will fail to capture it.
The researchers' method changes the style of the painting as these generators perceive it, so when the generators try to mimic the original artist's style, they actually produce images in a completely different style from the one intended.
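The general idea of a bounded, imperceptible perturbation can be sketched roughly as follows. This is an illustrative sketch in the spirit of adversarial-perturbation methods, not Glaze's actual algorithm; the names `cloak_image` and `style_gradient` are hypothetical, and the real tool computes its perturbation by optimizing against a generator's feature space rather than taking a single gradient-sign step.

```python
import numpy as np

def cloak_image(image, style_gradient, epsilon=4.0):
    """Apply a small, bounded perturbation to an image array.

    Illustrative sketch only (assumed, simplified): nudge each pixel
    in a direction that shifts the style a model perceives, while
    capping the per-pixel change at `epsilon` (in 0-255 units) so the
    edit stays invisible to the naked eye.
    """
    # One FGSM-style step: move each pixel by +/- epsilon along the
    # sign of the (here, stand-in) style gradient.
    perturbation = epsilon * np.sign(style_gradient)
    cloaked = np.clip(image + perturbation, 0.0, 255.0)
    return cloaked

# Toy usage: a random "image" and a random stand-in gradient.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
grad = rng.normal(size=img.shape)
out = cloak_image(img, grad)
print(np.max(np.abs(out - img)))  # never exceeds epsilon
```

The key property is the perceptibility budget: the output differs from the input at every pixel by at most `epsilon`, yet a model reading stylistic features from those pixels can be pushed toward a different style.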
Shawn Shan, one of the researchers from the Department of Computer Science at the University of Chicago, explains that protecting artists does not require changing all the information in an image: "All we need to do is change their stylistic cues." Shan and his colleagues emphasize that the tool works better for some artistic styles than for others, and that it is not intended as a permanent solution but as a step toward developing more effective tools in the field.
Of course, if the models have already been trained on an artist's works, the new tool may not help much. Glaze is currently available for download in beta at this link.