Watermarking has been used to identify images for quite a long time. Visible watermarking is used to “brand” an image so that if it gets used without the owner's permission, and the owner finds out, they have grounds to complain. The visible part, a logo or text superimposed on the image, is meant to be a psychological deterrent against improper use: the user clearly sees the image is owned by somebody, and people seeing that image used in a specific context can immediately tell its ownership (the logo/text also, in a way, “ruins” the image).
Using Photoshop you can get rid of a visible watermark, but that takes time and is usually not worth the effort. Hence visible watermarking is effective in decreasing the probability of an image being used improperly. Also, if you know what the logo/text looks like and where it is positioned in the image, it is not difficult to create an application that can roam the web to detect improper use.
In July 2017, Google researchers presented a paper at the Computer Vision and Pattern Recognition Conference in Honolulu, Hawaii, showing that an AI-based algorithm can easily search for logos and watermarking text and remove them automatically, thus making visible watermarking useless.
Here is where IMATAG comes in. It is a new company that has created a technology to protect images and photos with an invisible watermark. You cannot see the watermark with your eyes, nor can an artificial intelligence algorithm designed to understand an image and its components. The technology is resistant to transformations like cropping, upscaling, and downscaling (things you do with Photoshop-like programs to adapt an image to your needs). It also supports easy detection by bots roaming the web in search of that image. Of course it does not provide a visual deterrent, since a person cannot see the watermark. One possibility would be to state in the photo that it is protected by an invisible watermark, but that warning would only work on an honest user, since the notice itself can be removed easily, as Google has shown.
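IMATAG's actual algorithm is proprietary, so as a rough illustration only, here is a toy least-significant-bit (LSB) scheme showing the general idea of hiding a mark in pixel values without visible change. Note the important caveat in the comments: a naive LSB mark is exactly the kind that does NOT survive cropping or rescaling; robust schemes like IMATAG's spread the signal differently.

```python
# Toy illustration of invisible watermarking via least-significant-bit (LSB)
# embedding. This is NOT IMATAG's (proprietary) method: a naive LSB mark is
# destroyed by rescaling or re-encoding, whereas robust watermarks spread the
# signal across the image in transformation-resistant ways. Stdlib-only sketch.

def embed(pixels, message_bits):
    """Hide message_bits in the LSBs of a list of 8-bit pixel values."""
    marked = list(pixels)
    for i, bit in enumerate(message_bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit
    return marked

def extract(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

if __name__ == "__main__":
    image = [200, 13, 57, 99, 180, 42, 7, 255]  # fake 8-pixel "image"
    mark = [1, 0, 1, 1]
    stego = embed(image, mark)
    print(extract(stego, 4))                    # -> [1, 0, 1, 1]
    # Imperceptible to the eye: each pixel changes by at most 1 out of 255.
    print(max(abs(a - b) for a, b in zip(image, stego)))  # -> 1
```

Each pixel changes by at most one intensity level out of 255, which is why such a mark is invisible to a human viewer even though a detector that knows where to look can recover it exactly.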
Basically, we are now able to create an artificial intelligence that, like our own intelligence, can cheat. It is not surprising, of course, but it is somewhat scary to see examples of artificial intelligence put to work in the wrong way. And one can easily imagine even worse examples. We have already heard of intelligent bombs (which often act in a stupid way), and there too it is debatable what “intelligent” means.
For sure we have to confront new ethical issues when facing a “mind” we created, and these will only become more difficult as these “minds” develop a will of their own.