

One AI image draws on 21 human works: a new reverse search engine digs up the sources behind AI art



Feed in an AI-generated picture, and it immediately shows which human works the image borrowed from.

The kind of tool that traces a single picture back to 20-plus source images in one query.

This reverse search engine is out to dig up the origins of AI painting.

Its purpose is simple: to seek justice for human artists.

The website's name says it all: Stable Attribution, a play on Stable Diffusion.

Click on any traced picture and a dialog box pops up showing its source and author.

According to the website, with source information submitted by users, the contribution of human artists to AI painting can be properly acknowledged, and compensation could eventually be distributed proportionally, turning the conflict between human artists and AI painting into cooperation.

On Hacker News, it racked up more than 700 points soon after launch.

Commenters agreed:

This is better than a one-size-fits-all ban on AI-generated content.

One-click traceability for AI drawings

The site provides some examples of its own; you can click them directly to try the reverse search.

For example, a seemingly real portrait of a woman may have been synthesized by the AI from multiple reference faces.

You can also try it on a picture generated by Stable Diffusion yourself.

We used Stable Diffusion to generate a set of snow-scene images and tested the first one.

Stable Attribution's result looks like this: the left column shows the images it supposedly referenced.

But there are failure cases too. With another snow scene, the site reported no results this time.

As for how it works, the team says Stable Attribution's current algorithm decodes the AI-generated image and then searches the Stable Diffusion model's training dataset for the most similar examples.

Since Stable Diffusion was trained on the public LAION-5B dataset, searching it is not especially difficult.
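Stable Attribution has not published its exact algorithm, but similarity search over a training set is usually done with precomputed image embeddings. A minimal sketch, assuming you already have embedding vectors (e.g. CLIP-style) for the query image and the dataset, might look like this; the function name, threshold, and toy data are illustrative, not the site's actual implementation:

```python
import numpy as np

def top_k_similar(query_emb, dataset_embs, k=20, threshold=0.5):
    """Return (index, cosine similarity) pairs for the k dataset images
    most similar to the query embedding, keeping only scores >= threshold."""
    # Normalize vectors so plain dot products become cosine similarities.
    q = query_emb / np.linalg.norm(query_emb)
    d = dataset_embs / np.linalg.norm(dataset_embs, axis=1, keepdims=True)
    sims = d @ q                         # similarity to every dataset image
    order = np.argsort(sims)[::-1][:k]   # indices of the k best matches
    return [(int(i), float(sims[i])) for i in order if sims[i] >= threshold]

# Toy example: a 3-image "dataset" of 2-D embeddings; the query matches image 0.
dataset = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
matches = top_k_similar(np.array([1.0, 0.0]), dataset, k=2)
print(matches)  # image 0 is a perfect match (similarity 1.0)
```

At LAION-5B scale a brute-force dot product is impractical; real systems use an approximate nearest-neighbor index over the embeddings, but the ranking idea is the same.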

The site also states that it claims no copyright over the traced pictures and will not use them to train models.

The company behind it, Chroma, was founded in 2022. Its main business is open-source work on embedding databases, embedding search, and embedding analytics, which can support the training, fine-tuning, and use of models.

It has raised funding from investors in the machine-learning field whose portfolios include Hugging Face, Replit, and others.

The core team consists of two people, both of whom previously worked at major technology companies.

Before founding the company, Anton Troynikov was a research engineer at Facebook Reality Labs (working on AR/VR); earlier, he worked in computer vision and software.

Jeff Huber was previously a co-founder of Standard Cyborg, a company whose star product is 3D-printed prosthetics.

The company is still actively recruiting.

Netizens: Isn't this overstating things?

The idea is appealing, but many netizens asked: is the principle behind this website actually sound?

The results look satisfying, but it seems to oversimplify the problem and misuse the word "attribution."

The site assumes that an AI-generated picture cannot itself exist in the original training dataset, so any pictures in the dataset that are sufficiently similar to it can be treated as its "reference images."

Someone pointed out: take a hand-drawn work from 1850, which is not AI-generated and does not exist in the dataset. Would the website still return some supposedly similar "references" for it?

To check, we fed it a meme image and tried it out, but emmm... the results were nonsense.

Others noted that the images the site returned seemed to lack some of the detailed elements present in the AI-generated input.

The team acknowledged that the current algorithm still has many problems to optimize, such as excessive noise in the training process and errors and redundancy in the training data.

But they say these problems are solvable, and they are looking for ways to improve.

Some netizens prefer to take the long view:

Even if it's just similarity search rather than true attribution, I think it's better than nothing.

After all, on the other front, the legal battle between stock-photo sites and Stable Diffusion is still ongoing.

Recently, Getty Images (often described as the American counterpart of Visual China) escalated its legal proceedings, alleging that Stable Diffusion used 12 million of its pictures without authorization.

Legal experts say Getty's lawsuit may have a better chance of success than the claims brought by the artists.

The artists' case rests on AI painting's threat to their profession, while the infringement Getty alleges is nearly self-evident: Stable Diffusion has even generated pictures bearing Getty watermarks.

But given that such a lawsuit is unprecedented, the outcome is hard to predict.

Some photo sites have instead chosen to join what they can't beat, building an AI image generator with OpenAI and compensating those who contribute to the dataset.

Stable Attribution's stance is that AI, like every other technology, should serve human beings, not alienate them.

What do you think?

One More Thing

To be honest, the site's opening animation isn't finished yet; if you're curious, go check it out~


This article comes from the WeChat public account Qubit (ID: QbitAI). Author: Mingmin.
