Recognising fakes
ChatGPT communicates in a human way and can respond to individual questions. (Photo: Adobe Stock/Timon)
No less impressive are artificial intelligences that create realistic images, graphics, works of art, even entire books and films from just a few words, although the latter are not yet perfect. Pope Francis in a white down jacket or former US President Donald Trump being arrested by police officers are probably two of the best-known examples of such images. The works of artificial intelligence appear deceptively real and are sometimes difficult to distinguish from human works. This is particularly problematic with images: they stick in the memory even if it later turns out that they are fakes. Some companies have therefore built filters into their artificial intelligence systems to prevent the depiction of well-known personalities or the generation of pornographic content and fake news. Others have not. In addition, the source code of some programmes is freely accessible and may be modified by anyone interested. Anyone with the necessary technical equipment can therefore, in theory, create their own model.
Some developers are working in parallel on software designed to detect artificially generated images and texts. Well-known examples aimed at recognising texts include GPTZero, AI Text Classifier and platforms such as Copyleaks.com and Scribbr.de. The problem, however, is that any algorithm able to detect the works of artificial intelligence can in turn be used to train those very programmes to evade it. Many scientists are therefore calling for mandatory labelling of images and texts generated with artificial intelligence, in the form of watermarks that are incorporated into the source code or inserted into the training data. Alternatively, there is the option of labelling human-made works instead.
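The basic idea behind statistical text watermarks can be illustrated with a toy sketch. The version below is a deliberately simplified assumption, not any vendor's actual scheme: a hash of the previous word splits the vocabulary into a "green" and a "red" half, the generator prefers green words, and the detector simply measures what fraction of words are green. All names here are illustrative.

```python
import hashlib
import random

def is_green(prev_word: str, word: str) -> bool:
    # Toy rule: a hash seeded by the previous word pseudorandomly
    # assigns each candidate word to the "green" half of the vocabulary.
    h = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return h[0] % 2 == 0

def green_fraction(words):
    # Detector: count how many words fall in the green list defined
    # by their predecessor. Human text should score around 0.5,
    # watermarked text much higher.
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

def watermarked_text(vocab, length, seed=0):
    # Toy "generator": at each step pick only from the green words,
    # mimicking a language model biased toward the green list.
    rng = random.Random(seed)
    words = [rng.choice(vocab)]
    for _ in range(length - 1):
        greens = [w for w in vocab if is_green(words[-1], w)]
        words.append(rng.choice(greens or vocab))
    return words
```

A watermarked sequence scores near 1.0 on `green_fraction`, while word-by-word random text scores near 0.5, so the detector can separate the two statistically. This also hints at the arms race mentioned above: anyone who knows the green-list rule can rewrite a text to evade it.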
This makes it all the more important, for the moment, to scrutinise images and texts critically. At least in the case of images, there are still some clues that can help to expose artificially generated material. The Massachusetts Institute of Technology (MIT), for example, recommends focussing on the face in images and videos, as deepfakes almost always involve changes to the face. Does the skin look too smooth or too wrinkled? Does the age of the face match the rest of the body? Do the shadows around the eyes and eyebrows look real? Are the reflections in glasses realistic? Does the facial hair appear natural, or is there perhaps too much or too little of it? How do the dimples look? In videos, the experts advise, you should also pay attention to blinking: deepfakes often give themselves away because people blink too much or too little. Lip movements or the appearance of the teeth can likewise be conspicuous.
It is already known that people can train themselves to develop a feel for real and fake images. As part of the »Detect Fakes« project at Northwestern University's Kellogg School of Management, anyone can try this out and test how well they can distinguish between real and fake (https://detectfakes.kellogg.northwestern.edu/). The entries are collected anonymously and used for research purposes.
This small experiment makes clear where the weak points of artificially generated texts lie: they are not necessarily error-free. ChatGPT itself points this out after each response. In addition, users are not told which sources ChatGPT bases its answers on. The application is not a search engine and does not currently access one: ChatGPT is trained to generate texts that sound as human as possible, based on millions of training texts that can come from anywhere.
Nevertheless, it is already hard to imagine schools without ChatGPT. According to a survey conducted by the digital association BITKOM last spring, only 8 per cent of pupils have never heard or read about ChatGPT, and this figure is likely to have fallen even further in the meantime. Generative artificial intelligence such as ChatGPT is impressive, but the areas of application for artificial intelligence are far more diverse. Robots, for example, can learn how to grasp an object. Artificial intelligence is used in image recognition, with applications in medical diagnostics, facial recognition and autonomous driving. Such systems can optimise processes and recognise patterns and language. Many of them can already be found in our everyday lives and are used quite naturally, such as voice control systems or smart home components. For safe use, the training data is always crucial: if it contains errors, the system can malfunction.
Deutsch/German | Englisch/English |
---|---|
Alltag | everyday life |
Algorithmus | algorithm |
anonym | anonymous |
Bilderkennung | image recognition |
detektieren | detect |
echt | real |
erkennen | recognise |
Fehlfunktion | malfunction |
Filter | filter |
Gesichtserkennung | face recognition |
Künstliche Intelligenz | artificial intelligence |
Muster | patterns |
Plattform | platform |
Prozesse | processes |
Quellcode | source code |
Roboter | robots |
Sprachsteuerung | voice control |
Suchmaschine | search engine |
unecht | fake |
Wasserzeichen | watermark |