
    Do GANs leave artificial fingerprints?

    In the last few years, generative adversarial networks (GANs) have shown tremendous potential for a number of applications in computer vision and related fields. At the current pace of progress, they will soon be able to generate high-quality images and videos that are virtually indistinguishable from real ones. Unfortunately, realistic GAN-generated images pose serious security threats, beginning with a possible flood of fake multimedia, and multimedia forensic countermeasures are urgently needed. In this work, we show that each GAN leaves a specific fingerprint in the images it generates, just as real-world cameras mark acquired images with traces of their photo-response non-uniformity (PRNU) pattern. Source identification experiments with several popular GANs show that such fingerprints represent a precious asset for forensic analyses.
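    The fingerprint idea can be illustrated with a small sketch. The names and the box-blur "denoiser" below are illustrative stand-ins (PRNU-style analysis uses a proper denoising filter): each image's high-pass residual is extracted, residuals from one generator are averaged into a fingerprint, and a test residual is attributed by correlation.

```python
import numpy as np

def residual(img, k=3):
    """High-pass residual: image minus a box-blurred version.
    (A simple stand-in for the denoising filter used in PRNU-style analysis.)"""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + k, j:j + k].mean()
    return img - blurred

def estimate_fingerprint(images):
    """Average residuals over many images from the same generator:
    image content averages out, the generator's systematic trace remains."""
    return np.mean([residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized cross-correlation used to match a residual to a fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

    A new image is attributed to whichever fingerprint its residual correlates with most strongly.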

    Analysis of adversarial attacks against CNN-based image forgery detectors

    With the ubiquitous diffusion of social networks, images are becoming a dominant and powerful communication channel. Not surprisingly, they are also increasingly subject to manipulations aimed at distorting information and spreading fake news. In recent years, the scientific community has devoted major efforts to countering this menace, and many image forgery detectors have been proposed. Currently, due to the success of deep learning in many multimedia processing tasks, there is strong interest in CNN-based detectors, and early results are already very promising. Recent studies in computer vision, however, have shown CNNs to be highly vulnerable to adversarial attacks: small perturbations of the input data that drive the network towards erroneous classification. In this paper we analyze the vulnerability of CNN-based image forensics methods to adversarial attacks, considering several detectors and several types of attack, and testing performance on a wide range of common manipulations, both easy and hard to detect.
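    A minimal sketch of the attack mechanism, assuming a toy linear "detector" in place of a CNN and a one-step FGSM-style perturbation as a representative attack (the paper evaluates several real detectors and attacks):

```python
import numpy as np

def predict(w, b, x):
    """Toy linear 'detector': sigmoid score, > 0.5 means 'forged'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_attack(w, b, x, eps):
    """One FGSM-style step of size eps, taken downhill so the detector
    flips toward 'authentic'. The gradient of the score w.r.t. x is
    sigmoid'(w.x + b) * w, and sigmoid' > 0, so sign(grad) = sign(w)."""
    return x - eps * np.sign(w)
```

    Even with a tiny per-pixel budget eps, the signed step shifts the logit by eps times the L1 norm of the weights, which is usually more than enough to flip the decision.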

    A Full-Image Full-Resolution End-to-End-Trainable CNN Framework for Image Forgery Detection

    Due to limited computational and memory resources, current deep learning models accept only rather small images as input, calling for preliminary image resizing. This is not a problem for high-level vision tasks, where discriminative features are barely affected by resizing. In image forensics, by contrast, resizing tends to destroy precious high-frequency details, impacting heavily on performance. One can avoid resizing by means of patch-wise processing, at the cost of giving up whole-image analysis. In this work, we propose a CNN-based image forgery detection framework which makes decisions based on full-resolution information gathered from the whole image. Thanks to gradient checkpointing, the framework is trainable end-to-end with limited memory resources and weak (image-level) supervision, allowing for the joint optimization of all parameters. Experiments on widespread image forensics datasets prove the good performance of the proposed approach, which largely outperforms all baselines and all reference methods. Comment: 13 pages, 12 figures, journal.
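    Gradient checkpointing trades compute for memory: the forward pass stores only a few activations, and the backward pass recomputes the rest segment by segment. A scalar sketch, assuming a toy chain of tanh "layers" (frameworks such as PyTorch provide this mechanism for real networks):

```python
import math

def forward_full(x, coeffs):
    """Plain forward pass for the chain h <- tanh(a * h), storing
    every intermediate activation: O(n) memory."""
    acts = [x]
    for a in coeffs:
        acts.append(math.tanh(a * acts[-1]))
    return acts

def grad_full(x, coeffs):
    """Backward pass using the fully stored activations."""
    acts = forward_full(x, coeffs)
    g = 1.0
    for a, h in zip(reversed(coeffs), reversed(acts[:-1])):
        g *= a * (1.0 - math.tanh(a * h) ** 2)  # d tanh(a*h) / d h
    return g

def grad_checkpointed(x, coeffs, seg):
    """Same gradient, but only one activation per segment of length seg is
    kept; activations inside each segment are recomputed during backward."""
    checkpoints = [x]
    h = x
    for i, a in enumerate(coeffs):
        h = math.tanh(a * h)
        if (i + 1) % seg == 0 and i + 1 < len(coeffs):
            checkpoints.append(h)
    g = 1.0
    for s in range(len(checkpoints) - 1, -1, -1):
        segment = coeffs[s * seg:s * seg + seg]
        acts = forward_full(checkpoints[s], segment)  # recompute segment
        for a, hh in zip(reversed(segment), reversed(acts[:-1])):
            g *= a * (1.0 - math.tanh(a * hh) ** 2)
    return g
```

    The checkpointed gradient matches the plain one exactly; memory drops from O(n) stored activations to O(n/seg) checkpoints plus one segment's worth of recomputation at a time.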

    Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers

    Deep neural networks provide unprecedented performance in all image classification problems, taking advantage of the huge amounts of data available for training. Recent studies, however, have shown their vulnerability to adversarial attacks, spawning an intense research effort in this field. With the aim of building better systems, new countermeasures and stronger attacks are proposed by the day. On the attacker's side, there is growing interest in the realistic black-box scenario, in which the user has no access to the neural network parameters. The problem is to design efficient attacks which mislead the neural network without compromising image quality. In this work, we propose to perform the black-box attack along a low-distortion path, so as to improve both the attack efficiency and the perceptual quality of the adversarial image. Numerical experiments on real-world systems prove the effectiveness of the proposed approach, both in benchmark classification tasks and in key applications in biometrics and forensics. Comment: 8 pages, journal.
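    A minimal sketch of the black-box setting, not the paper's actual algorithm: the attacker can only query the model's score, proposes small random steps, keeps those that lower the score, and projects the accumulated perturbation back onto a small L2 ball so distortion stays bounded throughout.

```python
import numpy as np

def black_box_attack(query, x0, eps, steps, rng, step=0.1):
    """Score-based black-box attack sketch. `query` returns the target
    class score; the model's parameters are never accessed. Each accepted
    step reduces the score, and the total perturbation is kept within an
    L2 ball of radius eps (a crude 'low-distortion path')."""
    x = x0.copy()
    best = query(x)
    for _ in range(steps):
        cand = x + step * rng.standard_normal(x.shape)
        delta = cand - x0
        n = np.linalg.norm(delta)
        if n > eps:                      # project back onto the eps-ball
            cand = x0 + delta * (eps / n)
        s = query(cand)
        if s < best:                     # keep only score-reducing moves
            x, best = cand, s
    return x, best
```

    Because every intermediate query already satisfies the distortion bound, the search explores only low-distortion images rather than wandering far and projecting back at the end.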

    Are GAN generated images easy to detect? A critical analysis of the state-of-the-art

    The advent of deep learning has brought a significant improvement in the quality of generated media. However, with the increased level of photorealism, synthetic media are becoming barely distinguishable from real ones, raising serious concerns about the spread of fake or manipulated information over the Internet. In this context, it is important to develop automated tools to detect synthetic media reliably and promptly. In this work, we analyze the state-of-the-art methods for the detection of synthetic images, highlighting the key ingredients of the most successful approaches, and comparing their performance over existing generative architectures. We devote special attention to realistic and challenging scenarios, like media uploaded on social networks or generated by new and unseen architectures, analyzing the impact of suitable augmentation and training strategies on the detectors' generalization ability. Comment: 7 pages, 5 figures, conference.
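    An assumed example of such a training-time augmentation pipeline, mimicking social-network processing with random blur and down-then-up resizing (real pipelines typically also add JPEG compression, which is omitted here for brevity):

```python
import numpy as np

def augment(img, rng):
    """Randomly degrade a training image so the detector learns cues that
    survive resampling and smoothing, not fragile high-frequency artifacts."""
    out = img.astype(float)
    if rng.random() < 0.5:  # random 3x3 box blur
        k, pad = 3, 1
        p = np.pad(out, pad, mode="edge")
        out = np.mean([p[i:i + out.shape[0], j:j + out.shape[1]]
                       for i in range(k) for j in range(k)], axis=0)
    if rng.random() < 0.5:  # downscale by 2, then upscale back (nearest)
        small = out[::2, ::2]
        out = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
        out = out[:img.shape[0], :img.shape[1]]
    return out
```

    Applying such degradations during training is one of the strategies whose impact on generalization the paper analyzes.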

    Rating the incidence of iatrogenic vascular injuries in thoracic and lumbar spine surgery as regards the approach: A PRISMA-based literature review

    Purpose: To assess the rate, timing of diagnosis, and repair strategies of vascular injuries in thoracic and lumbar spine surgery, as well as their relationship to the surgical approach. Methods: The PubMed, Medline, and Embase databases were searched using keywords and MeSH terms to find articles reporting iatrogenic vascular injury during thoracic and lumbar spine surgery. English-language articles published in the last ten years were selected, and the search was refined by best match and relevance. Results: Fifty-six articles were eligible, for a cumulative volume of 261 lesions. Vascular injuries occurred in 82% of instrumented procedures and in 59% of anterior approaches. The common iliac vein (CIV) was the most frequently involved vessel, injured in 49% of anterior lumbar approaches. The common iliac artery, CIV, and aorta were affected in 40%, 28%, and 28% of posterior approaches, respectively. Segmental arteries were injured in 68% of lateral approaches. Direct vessel laceration occurred in 81% of cases and was recognized intraoperatively in 39% of cases. Conclusions: The incidence of iatrogenic vascular injuries during thoracic and lumbar spine surgery is low, but they carry an overall mortality rate of up to 65%: less than 1% for anterior approaches and more than 50% for posterior ones. Anterior instrumented procedures are at risk of direct avulsion of the CIV; posterior instrumented fusions are at risk of injuries to the iliac vessels and aorta; lateral routes are frequently associated with lesions of segmental vessels. Suture repair and endovascular techniques are useful in managing these severe complications.