@Hansenaz This is a fascinating study.
The AI response, as is often the case, is comically wrong. But then again, it scrapes data posted by humans who are often comically wrong, and simply doesn't know any better.
As for the actual photograph searches, I think it has a very difficult time distinguishing between images that are not "pixel-for-pixel" matches. Cropping, angle, shadows, exposure, color, etc. all make one image of a subject look different from another image of the same subject ... to a computer.
I think the visual matching has gotten much better than it used to be, but more often than not, I believe the images returned as potential matches are based more on the text associated with an image than on its pixels. Images on HAZ that might return a result include those with labels identifying the style ("gila petroglyph", etc.), a caption or photoset name, or even the name of the user who posted them. Google recognizes that "hansenaz" is associated with numerous web pages whose photographs regularly appear alongside words like "petroglyph" or "rock art", so other pages that mention hansenaz and contain similar visual patterns are more likely to be offered as potential matches. In other words, the image search relies heavily on the text that appears together with the images.
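Purely as an illustration (and definitely not Google's actual algorithm), here's a toy Python sketch of that idea: candidate pages ranked by a blend of a text-overlap score and a visual-similarity score, with the text side weighted heavily. Every page name, weight, and score in it is invented.

```python
# Toy illustration only -- not Google's actual algorithm. The idea: candidate
# pages get ranked by a blend of text overlap and crude visual similarity,
# with the text side weighted heavily. All data and weights are invented.

def text_overlap(query_terms, page_terms):
    """Fraction of query terms that also appear in the page's text/labels."""
    query_terms = set(query_terms)
    return len(query_terms & set(page_terms)) / len(query_terms)

def rank(pages, query_terms, text_weight=0.7, visual_weight=0.3):
    """Return pages sorted by blended score, best match first."""
    scored = []
    for page in pages:
        score = (text_weight * text_overlap(query_terms, page["terms"])
                 + visual_weight * page["visual_similarity"])
        scored.append((round(score, 2), page["name"]))
    return sorted(scored, reverse=True)

pages = [
    {"name": "petroglyph photoset page", "terms": ["hansenaz", "petroglyph", "gila"],
     "visual_similarity": 0.4},
    {"name": "unrelated desert photo", "terms": ["desert", "sunset"],
     "visual_similarity": 0.6},
]
print(rank(pages, ["hansenaz", "petroglyph"]))
# The page with matching text wins even though its pixels are the weaker match.
```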
Copyright owners have provided reference copies of basically all recorded media (music, movies, TV shows) so that automated systems can detect when somebody uses their content without permission. This is why people routinely alter uploads by reversing the image, speeding up or slowing down the audio, or layering a second soundtrack over the existing one: it changes the digital signature so it no longer registers as an exact match. Each photograph we take is unique from the beginning, even when the subject is the same. The digital signature is ... different.
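If anyone is curious what a "digital signature" can look like in its simplest form, here's a minimal sketch of an average hash. It's only meant to show how an alteration as trivial as mirroring scrambles the fingerprint; the 8x8 "image" is an invented gradient, and real content-matching systems are far more robust than this.

```python
# A minimal sketch of an "average hash", one simple kind of image fingerprint:
# 1 bit per pixel, set when that pixel is brighter than the image's average.
# The 8x8 "image" below is an invented left-to-right gradient; real
# content-matching systems use far more robust fingerprints than this.

def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count how many bits differ between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Fake 8x8 grayscale image: brightness increases left to right.
image = [[col * 36 for col in range(8)] for _ in range(8)]
mirrored = [list(reversed(row)) for row in image]  # flip left-to-right

print("bits changed by mirroring:", hamming(average_hash(image), average_hash(mirrored)))
# Same picture to a human, but the fingerprint no longer matches.
```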
A tool you see here from time to time is the augmented-reality peak finder app. The topography of the entire US has been digitized, so when you look through your phone from a specific location, the app can easily draw the landscape around you and label those geographic features. And yet, despite that data being available, I've yet to find a reliable tool that can take a random photo of a random landscape and identify the location and the landmarks. That technology is likely to become reliable before a glyph identifier does, because the topographical data is already fully digitized, compiled, and available, whereas the countless glyphs are not cataloged in any comparable way.
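For what it's worth, the geometry those peak-finder apps rely on is fairly simple once the elevation data exists. A rough sketch, with placeholder coordinates standing in for a real summit database:

```python
# Rough sketch of the geometry a peak-finder app leans on: from the phone's GPS
# fix and a digitized summit's coordinates/elevation, compute the compass
# bearing and vertical angle to the summit, then place its label wherever the
# compass/gyro says that direction falls on screen. The coordinates below are
# placeholders, not a real summit database.
import math

EARTH_RADIUS_M = 6371000.0

def bearing_and_elevation(obs_lat, obs_lon, obs_elev_m, pk_lat, pk_lon, pk_elev_m):
    lat1, lon1 = math.radians(obs_lat), math.radians(obs_lon)
    lat2, lon2 = math.radians(pk_lat), math.radians(pk_lon)
    dlon = lon2 - lon1

    # Initial great-circle bearing from observer to peak, in degrees from north.
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    bearing_deg = math.degrees(math.atan2(x, y)) % 360

    # Haversine ground distance in meters.
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    distance_m = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Vertical angle above horizontal (ignoring refraction and curvature drop).
    elev_angle_deg = math.degrees(math.atan2(pk_elev_m - obs_elev_m, distance_m))
    return bearing_deg, distance_m, elev_angle_deg

# Placeholder observer position and a made-up summit.
print(bearing_and_elevation(33.40, -111.90, 400.0, 33.45, -111.80, 1200.0))
```

The hard part isn't this math; it's having every feature already digitized and queryable, which is exactly what the glyphs lack.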
I enjoyed your research. It will be interesting to see how the technology improves in the next few years. I suspect it will get much, much better, and possibly very quickly.
* Next time, make sure to add a banana for scale.
