17 research outputs found

    Source Anonymization of Digital Images: A Counter-Forensic Attack on PRNU based Source Identification Techniques

    Many photographers and human rights advocates need to hide their identity when sharing images on the internet, so source anonymization of digital images has become a critical issue in the present digital age. The current literature contains a number of digital forensic techniques for "source identification" of digital images, one of the most efficient being Photo-Response Non-Uniformity (PRNU) sensor noise pattern based source detection. Since the PRNU noise pattern is unique to every digital camera, such techniques prove to be a highly robust means of source identification. In this paper, we propose a counter-forensic technique to mislead PRNU sensor noise pattern based source identification by iteratively applying a median filter to suppress the PRNU noise in an image. Our experimental results show that the proposed method achieves a considerably higher degree of source anonymity, measured as the inverse of the Peak-to-Correlation Energy (PCE) ratio, than the state of the art.
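    The abstract's core idea, repeated median filtering as PRNU suppression, can be sketched in plain Python. The window size, iteration count, and grayscale 2D-list representation below are illustrative assumptions, not the paper's exact parameters:

```python
from statistics import median

def median_filter(img, k=3):
    """One pass of a k x k median filter over a 2D grayscale image
    (list of lists), clamping the window at the image borders."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = median(window)
    return out

def anonymize(img, iterations=3):
    """Iteratively median-filter the image to suppress PRNU-like
    sensor noise (an attacker-side sketch, not the paper's code)."""
    for _ in range(iterations):
        img = median_filter(img)
    return img
```

In practice the attacker would re-estimate the PCE after each pass and stop once it falls below a detection threshold; a single pass already removes isolated noise spikes.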

    DIPPAS: A Deep Image Prior PRNU Anonymization Scheme

    Source device identification is an important topic in image forensics, since it makes it possible to trace back the origin of an image. Its forensic counterpart is source device anonymization, that is, masking any trace on the image that could be useful for identifying the source device. A typical trace exploited for source device identification is the Photo Response Non-Uniformity (PRNU), a noise pattern left by the device on the acquired images. In this paper, we devise a methodology for suppressing such a trace from natural images without significant impact on image quality. Specifically, we turn PRNU anonymization into an optimization problem in a Deep Image Prior (DIP) framework. In a nutshell, a Convolutional Neural Network (CNN) acts as a generator and returns an image that is anonymized with respect to the source PRNU while still maintaining high visual quality. In contrast to widely adopted deep learning paradigms, our proposed CNN is not trained on a set of input-target pairs of images. Instead, it is optimized to reconstruct the PRNU-free image from the original image under analysis itself. This makes the approach particularly suitable in scenarios where large heterogeneous databases are analyzed, and prevents any problem due to lack of generalization. Through numerical examples on publicly available datasets, we prove our methodology to be effective compared to state-of-the-art techniques.
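    DIPPAS optimizes the weights of a CNN generator; as a hypothetical, heavily simplified illustration of the underlying objective (stay close to the observed image while decorrelating it from the PRNU fingerprint), here is plain gradient descent on a 1-D "image". The loss weight `lam`, step size, and iteration count are arbitrary choices, not values from the paper:

```python
def anonymize(y, k, lam=10.0, lr=0.05, steps=500):
    """Minimize ||x - y||^2 + lam * <x, k>^2 by gradient descent:
    keep the estimate x close to the observed image y while driving
    its correlation with the PRNU-like pattern k toward zero."""
    x = list(y)
    for _ in range(steps):
        c = sum(xi * ki for xi, ki in zip(x, k))  # correlation with k
        # gradient: 2*(x_i - y_i) from the fidelity term,
        #           2*lam*c*k_i from the decorrelation penalty
        x = [xi - lr * (2 * (xi - yi) + 2 * lam * c * ki)
             for xi, yi, ki in zip(x, y, k)]
    return x
```

In the paper this trade-off is realized by optimizing CNN parameters within the DIP framework rather than the pixels directly; the toy above only shows why a fidelity term and a PRNU-correlation penalty pull the solution toward an anonymized but visually similar image.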

    Conditional Adversarial Camera Model Anonymization

    The model of camera that was used to capture a particular photographic image (model attribution) is typically inferred from high-frequency model-specific artifacts present within the image. Model anonymization is the process of transforming these artifacts such that the apparent capture model is changed. We propose a conditional adversarial approach for learning such transformations. In contrast to previous works, we cast model anonymization as the process of transforming both high and low spatial frequency information. We augment the objective with the loss from a pre-trained dual-stream model attribution classifier, which constrains the generative network to transform the full range of artifacts. Quantitative comparisons demonstrate the efficacy of our framework in a restrictive non-interactive black-box setting. (Comment: ECCV 2020 - Advances in Image Manipulation workshop, AIM 2020)
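    The augmented objective described above (an adversarial term plus a pre-trained attribution classifier's loss, conditioned on the target model) can be written down as a small sketch. The function names, the λ weighting, and the raw-logit inputs are hypothetical conventions for illustration, not the paper's implementation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target):
    """Cross-entropy of the classifier's prediction against the
    target (fake) camera-model label."""
    return -math.log(softmax(logits)[target])

def anonymization_loss(adv_loss, classifier_logits, target_model, lam=1.0):
    """Generator objective: the usual adversarial loss, augmented with
    the pre-trained attribution classifier's cross-entropy toward the
    target model, weighted by a hypothetical coefficient lam."""
    return adv_loss + lam * cross_entropy(classifier_logits, target_model)
```

The classifier term is what forces the generator to alter low-frequency content as well: the adversarial discriminator alone would let low-frequency model cues survive.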