Telling apart what’s real and what’s fake is getting ever harder. Hence research by the University of Portsmouth’s Artificial Intelligence and Data Science (PAIDS) Research Centre, which has led to software that distinguishes fake images from genuine ones and identifies the source of an artificial image.
The solution, known as ‘DeepGuard’, combines three AI techniques: binary classification, ensemble learning, and multi-class classification. These methods enable the AI to learn from labelled data, yielding smarter and more reliable predictions. The researchers say it could serve as a tool for investigating and prosecuting criminal activity such as fraud, or be used by the media to verify that images in their stories are authentic, preventing misinformation or unintentional bias.
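The combination described above can be illustrated with a minimal sketch. This is not the authors’ implementation: the threshold rules, feature values, and source labels below are all hypothetical stand-ins, used only to show how binary real-vs-fake classifiers, a majority-vote ensemble, and a multi-class source-attribution stage could fit together.

```python
# Toy sketch of a DeepGuard-style pipeline (hypothetical, not the published model):
# 1) several binary real-vs-fake classifiers vote (ensemble learning),
# 2) images flagged as fake go to a multi-class stage that guesses the source.
from collections import Counter

# Each "classifier" here is just a threshold rule on one scalar feature.
def classifier_a(feature): return "fake" if feature > 0.5 else "real"
def classifier_b(feature): return "fake" if feature > 0.6 else "real"
def classifier_c(feature): return "fake" if feature > 0.4 else "real"

def ensemble_vote(feature):
    """Majority vote over the binary classifiers."""
    votes = [clf(feature) for clf in (classifier_a, classifier_b, classifier_c)]
    return Counter(votes).most_common(1)[0][0]

def attribute_source(feature):
    """Toy multi-class stage: map the feature to a hypothetical generator type."""
    if feature > 0.8:
        return "GAN"
    elif feature > 0.6:
        return "diffusion"
    return "face-swap"

def deepguard_like(feature):
    """Full pipeline: binary ensemble first, source attribution only for fakes."""
    label = ensemble_vote(feature)
    if label == "real":
        return ("real", None)
    return ("fake", attribute_source(feature))
```

In a real system each stage would be a trained model (e.g. a convolutional network) operating on image features rather than a hand-set threshold, but the control flow, detect first, then attribute, is the same.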
DeepGuard has been developed by researchers led by Dr Gueltoum Bendiab and Yasmine Namani from the Department of Electronics at the University of Frères Mentouri in Algeria, and involving Dr Stavros Shiaeles from the University’s PAIDS Research Centre and School of Computing.
Dr Shiaeles said: “With ever-evolving technological capabilities, it will be a constant challenge to spot fake images with the human eye. Manipulated images pose a significant threat to our privacy and security: they can be used to forge documents for blackmail, undermine elections, falsify electronic evidence and damage reputations, and can even be used by adults to incite harm to children. People are also profiteering disingenuously on social media platforms like TikTok, where images of models are turned into characters and animated in different scenarios in games or for entertainment.
“DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts.”
The research, published in the Journal of Information Security and Applications, will also support further academic research in this area, with datasets available to academics. During its development, the team reviewed and analysed methods for both image manipulation and detection, focusing specifically on fake images involving facial and bodily alterations. They considered 255 research articles published between 2016 and 2023 that examined various techniques for detecting manipulated images, such as changes in expression, pose, voice, or other facial or bodily features.