Recent advancements in deep learning have paved the way for a new category of image forensics algorithms that use contrastive machine learning. These algorithms capture general forensic traces from digital images that have undergone a variety of processing and post-processing histories by training a Siamese network to compare the source similarity of pairs of image patches. Studies have demonstrated that such networks can not only perform image source identification but can also be applied effectively to other forensic tasks such as manipulation detection and splicing detection and localization. Alongside these promising developments, however, researchers have uncovered vulnerabilities in deep-learning-based methods: adversarial examples and GAN-based attacks have revealed that deep neural networks can be deceived by introducing subtle perturbations. Prior research has shown that these attacks constitute a novel form of anti-forensic attack with the potential to fool deep-learning-based forensic algorithms. Therefore, in the first part of this dissertation, we introduce a new series of attacks aimed at deceiving existing Siamese-based image forensic algorithms. Through extensive experiments, we demonstrate that GAN-based attacks can effectively target Siamese-based deep neural networks performing camera model identification as well as image splicing detection and localization. These attacks reliably fool Siamese-based forensic algorithms while keeping the synthesized forensic traces imperceptible to the human eye.

Meanwhile, recent developments in image editing and generation have made edited images prevalent on the internet. Detecting and tracing forged images can help safeguard image authenticity and aid investigations into the source of a forgery.
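The Siamese patch comparison described above can be sketched minimally as follows. This is an illustrative toy, not the dissertation's model: the fixed random projection stands in for a learned shared feature extractor, and the patch size, dimensions, and noise level are arbitrary assumptions.

```python
import math
import random

random.seed(0)

D_IN, D_OUT = 64, 16
# Hypothetical shared embedding: a fixed random projection standing in for the
# learned feature extractor both branches of a Siamese network would share.
W = [[random.gauss(0, 1) for _ in range(D_OUT)] for _ in range(D_IN)]

def embed(patch):
    """Map a flattened 64-pixel patch to a 16-D feature vector via W."""
    return [sum(patch[i] * W[i][j] for i in range(D_IN)) for j in range(D_OUT)]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-12)

def source_similarity(patch_a, patch_b):
    """High similarity suggests the two patches share the same forensic source."""
    return cosine(embed(patch_a), embed(patch_b))

# Two patches with nearly identical statistics ("same camera") vs. an unrelated one.
base = [random.gauss(0, 1) for _ in range(D_IN)]
same = [x + 0.05 * random.gauss(0, 1) for x in base]
other = [random.gauss(0, 1) for _ in range(D_IN)]

print(source_similarity(base, same) > source_similarity(base, other))  # prints True
```

In the real systems the abstract refers to, the embedding is a trained CNN and the comparison is learned rather than plain cosine similarity; the sketch only illustrates the pairwise "same source or not" decision that both the forensic algorithms and the proposed GAN-based attacks revolve around.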
In the second part of this dissertation, we focus on combating modern image forgeries. First, we propose a method to trace the source of synthetic images; through extensive experiments, we demonstrate that it can identify the source architecture of a synthetic image in an open-set scenario. Second, we introduce a new concept, the Forensic Knowledge Graph, which aims to unify predictions across various forensic tasks and automatically generate conclusions about an image's processing history. We present an initial demo, evaluate its capabilities, and discuss future work based on its current status.
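One common way to frame the open-set decision mentioned above is to accept the best-matching known source architecture only when its confidence clears a rejection threshold, and otherwise label the image as coming from an unknown source. The sketch below illustrates that idea only; the architecture labels, scores, and threshold are hypothetical and not taken from the dissertation.

```python
import math

def identify_architecture(scores, labels, threshold=0.8):
    """Open-set decision rule (illustrative): softmax the raw scores over the
    known architectures, return the top label only if its probability clears
    the threshold, and reject to 'unknown' otherwise."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # shift for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    i = probs.index(max(probs))
    return labels[i] if probs[i] >= threshold else "unknown"

# Hypothetical set of known source architectures.
labels = ["GAN-A", "GAN-B", "Diffusion-C"]

print(identify_architecture([8.0, 1.0, 0.5], labels))  # confident match -> GAN-A
print(identify_architecture([2.0, 1.9, 1.8], labels))  # ambiguous -> unknown
```

The key property for the open-set scenario is the rejection branch: a closed-set classifier would be forced to name some known architecture even for images produced by a generator it has never seen.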
Details
Title
Image Forensics and Anti-Forensics for Generative AI
Creators
Shengbang Fang
Contributors
Matthew C. Stamm (Advisor)
Awarding Institution
Drexel University
Degree Awarded
Doctor of Philosophy (Ph.D.)
Publisher
Drexel University; Philadelphia, Pennsylvania
Number of pages
xii, 122 pages
Resource Type
Dissertation
Language
English
Academic Unit
College of Engineering (1970-2026); Electrical (and Computer) Engineering [Historical]; Drexel University