Recognising fakes
ChatGPT communicates in a human way and can respond to individual questions. / Photo: Adobe Stock/Timon
No less impressive are artificial intelligences that create realistic images, graphics, works of art, and even entire books and films from just a few words, although the latter are not yet perfect. Pope Francis in a white down jacket or former US President Donald Trump being arrested by police officers are probably two of the best-known examples of such images. The works of artificial intelligence appear deceptively real and are sometimes difficult to distinguish from human works. This is particularly problematic with images: they stick in the memory even if it is later realised that they are fakes. Some companies have therefore built filters into their artificial intelligence systems to prevent the depiction of well-known personalities or the generation of pornographic content and fake news. Others have not. In addition, the source code of some programs is freely accessible and can be modified by anyone interested. Anyone with the necessary technical equipment can therefore, in theory, create their own model.
Some developers are working in parallel on detection software intended to identify artificially generated images and texts. Well-known examples that aim to recognise texts include GPTZero, AI Text Classifier and platforms such as Copyleaks.com and Scribbr.de. The problem, however, is that any algorithm capable of detecting the works of artificial intelligence can also be used to train the generators to evade it. Many scientists are therefore calling for mandatory labelling of images and texts generated with artificial intelligence, in the form of watermarks incorporated into the source code or inserted into the training data. Alternatively, human-made works could be labelled instead.
This makes it all the more important, for now, to scrutinise images and texts critically. There are also still some clues that can help to expose artificially generated material, at least in the case of images. The Massachusetts Institute of Technology (MIT), for example, recommends focussing on the face in images and videos, as deepfakes almost always involve changes to the face. Does the skin look too smooth or too wrinkled? Does the age of the face match the rest of the body? Do the shadows around the eyes and eyebrows look real? Are the reflections in glasses realistic? Does the facial hair appear natural, or is there perhaps too much or too little of it? How do the dimples look? In videos, the experts advise paying attention to blinking: deepfakes often give themselves away because people blink too much or too little. Lip movements or the appearance of the teeth can also be conspicuous.