The goal of camera model identification is to determine the manufacturer and model of an image's source camera. Camera model identification is an important task in multimedia forensics because it helps verify the origin of an image and uncover possible image forgeries. Forensic camera model identification is generally performed by searching an image for model-specific traces left by a camera's internal image processing components. Many techniques, including recent data-driven deep learning algorithms, have been developed to perform camera model identification. Meanwhile, forensic researchers have discovered that existing camera model identification algorithms can be maliciously attacked by altering images without leaving visually distinguishable artifacts. These anti-forensic attacks raise concerns about the robustness of camera model identification techniques and underscore the need for effective defense strategies. In this thesis, we propose new algorithms to perform forensic camera model identification, as well as new anti-forensic attacks. We first introduce a highly accurate and robust camera model identification framework that fully exploits the demosaicing traces left by a camera's internal demosaicing process. In light of the complexity of demosaicing traces, we build an ensemble of statistical models to capture diverse demosaicing information in the form of content-dependent color value correlations. Diversity among these statistical models is critical for each model to capture a unique set of color correlations introduced by the demosaicing process. We obtain a diverse set of linear and non-linear demosaicing residuals and extract both intra-channel and inter-channel color correlations following a variety of geometric structures. The ensemble of collected color correlations forms a comprehensive representation of the sophisticated demosaicing process inside a camera.
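As a minimal sketch of the residual-and-correlation idea described above: the snippet below computes a linear demosaicing residual using a 4-neighbor bilinear predictor and one intra-channel neighbor correlation. The predictor and the (dy, dx) offset are illustrative stand-ins only; the thesis's actual framework uses a diverse set of linear and non-linear predictors and many geometric structures.

```python
import numpy as np

def bilinear_residual(channel):
    """Linear demosaicing residual: difference between a color channel
    and its prediction from the 4-neighbor average (a crude linear
    predictor; a stand-in for the diverse predictor set in the text)."""
    padded = np.pad(channel.astype(float), 1, mode="edge")
    pred = (padded[:-2, 1:-1] + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return channel - pred

def intra_channel_correlation(residual, dy, dx):
    """Pearson correlation between residual values and their neighbors
    at a non-negative offset (dy, dx) -- one simple geometric structure
    for extracting intra-channel color correlations."""
    h, w = residual.shape
    a = residual[:h - dy, :w - dx].ravel()
    b = residual[dy:, dx:].ravel()
    return float(np.corrcoef(a, b)[0, 1])
```

Fitting one statistical model per predictor/offset combination and combining their outputs is one way to realize the ensemble diversity the passage emphasizes, since each combination captures a different slice of the demosaicing correlation structure.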
This proposed framework not only achieves high camera model identification accuracy, but more importantly, it is robust to image post-processing operations and anti-forensic camera model attacks. Given the recent popularity of deep learning algorithms, forensic researchers have started to build deep neural networks, especially convolutional neural networks, to perform camera model identification. In this thesis, we investigate the robustness of deep-learning-based camera model identification algorithms by developing anti-forensic camera model attacks that expose the vulnerabilities of these algorithms. We propose a generative adversarial attack to perform targeted camera model falsification. Given full access to the camera model identification networks, this attack has been shown to falsify the camera models of images from arbitrary sources. Under black-box scenarios, where no information about the camera model identification networks is available, we train a substitute network that mimics the camera model identification networks and provides gradient information to craft adversarial images.

Ph.D., Electrical Engineering -- Drexel University, 201
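The substitute-network attack above rests on a standard gradient-based perturbation step. The following is a minimal numerical sketch, assuming a linear softmax classifier as a hypothetical stand-in for the substitute network (the actual substitute in the thesis is a CNN): a targeted fast-gradient-sign step perturbs the input toward the target camera-model class.

```python
import numpy as np

def fgsm_targeted(x, w, b, target, eps):
    """One targeted fast-gradient-sign step on a softmax linear
    classifier (stand-in for the substitute network). Moves x toward
    the target class by descending the cross-entropy gradient w.r.t.
    the input, with per-feature distortion bounded by eps."""
    logits = w @ x + b
    z = logits - logits.max()                 # stable softmax
    p = np.exp(z) / np.exp(z).sum()
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad_x = w.T @ (p - onehot)               # d CE(target) / d x
    return np.clip(x - eps * np.sign(grad_x), 0.0, 1.0)
```

Taking the sign of the gradient bounds the per-pixel change by eps, which is one way such attacks keep the adversarial image visually indistinguishable from the original while shifting the classifier's decision.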