34 research outputs found

    GANprintR: Improved Fakes and Evaluation of the State of the Art in Face Manipulation Detection

    Full text link
    © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses.

    The availability of large-scale facial databases, together with the remarkable progress of deep learning technologies, in particular Generative Adversarial Networks (GANs), has led to the generation of extremely realistic fake facial content, raising obvious concerns about the potential for misuse. Such concerns have fostered research on manipulation detection methods that, contrary to humans, have already achieved astonishing results in various scenarios. In this study, we focus on the synthesis of entire facial images, a specific type of facial manipulation. The main contributions of this study are four-fold: i) a novel strategy, based on autoencoders, to remove GAN 'fingerprints' from synthetic fake images, in order to spoof facial manipulation detection systems while preserving the visual quality of the resulting images; ii) an in-depth analysis of the recent literature on facial manipulation detection; iii) a complete experimental assessment of this type of facial manipulation, considering state-of-the-art fake detection systems (based on holistic deep networks, steganalysis, and local artifacts), highlighting how challenging this task is in unconstrained scenarios; and finally iv) the announcement of a novel public database, named iFakeFaceDB, resulting from the application of our proposed GAN-fingerprint Removal approach (GANprintR) to already very realistic synthetic fake images.
The results obtained in our empirical evaluation show that additional efforts are required to develop robust facial manipulation detection systems against unseen conditions and spoofing techniques, such as the one proposed in this study. This work has been supported by projects: PRIMA (H2020-MSCA-ITN-2019-860315), TRESPASS-ETN (H2020-MSCA-ITN2019-860813), BIBECA (RTI2018-101248-B-I00 MINECO/FEDER), BioGuard (Ayudas Fundación BBVA a Equipos de Investigación Científica 2017), Accenture, by NOVA LINCS (UIDB/04516/2020) with the financial support of FCT - Fundação para a Ciência e a Tecnologia, through national funds, and by FCT/MCTES through national funds and co-funded by the EU under project UIDB/EEA/50008/202
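The autoencoder-based fingerprint-removal idea described in this abstract can be caricatured in a few lines: an autoencoder fitted only to real faces behaves like a learned low-pass filter, so passing a synthetic image through its encode/decode step attenuates the high-frequency GAN "fingerprint" while keeping the visible content. The sketch below is an assumption-laden stand-in, not the authors' architecture: it uses a *linear* autoencoder (equivalent to PCA reconstruction) on random toy vectors, with an illustrative component count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the GANprintR idea: an autoencoder trained on "real"
# data acts as a learned low-pass filter. A linear autoencoder is
# equivalent to PCA, so PCA reconstruction serves as a minimal sketch.
# All data, shapes, and the component count k are illustrative only.

def fit_linear_autoencoder(X, k):
    """Return the mean and top-k principal directions of the 'real' set X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruct(x, mu, V):
    """Encode then decode: project onto the k-dim subspace and map back."""
    return mu + (x - mu) @ V.T @ V

# "real" images: 200 samples of 64-dim low-frequency content
real = rng.normal(size=(200, 64))
mu, V = fit_linear_autoencoder(real, k=16)

# a "fake" image = in-subspace content plus a small high-frequency fingerprint
fingerprint = 0.05 * rng.normal(size=64)
base = reconstruct(real[0], mu, V)
fake = base + fingerprint

# reconstructing the fake attenuates the fingerprint component
cleaned = reconstruct(fake, mu, V)
residual_before = np.linalg.norm(fake - base)
residual_after = np.linalg.norm(cleaned - base)
print(residual_after < residual_before)  # → True
```

Because the projection keeps only the 16-dimensional "real face" subspace, the out-of-subspace part of the fingerprint is discarded, which is the intuition behind spoofing detectors that key on such residuals.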

    How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals

    Full text link
    Fake portrait video generation techniques have been posing a new threat to society with photorealistic deep fakes for political propaganda, celebrity imitation, forged evidence, and other identity-related manipulations. Following these generation techniques, some detection approaches have also proven useful due to their high classification accuracy. Nevertheless, almost no effort has been spent to track down the source of deep fakes. We propose an approach not only to separate deep fakes from real videos, but also to discover the specific generative model behind a deep fake. Some purely deep learning based approaches try to classify deep fakes using CNNs, where they actually learn the residuals of the generator. We believe that these residuals contain more information and that we can reveal these manipulation artifacts by disentangling them with biological signals. Our key observation is that the spatiotemporal patterns in biological signals can be conceived as a representative projection of residuals. To justify this observation, we extract PPG cells from real and fake videos and feed these to a state-of-the-art classification network for detecting the generative model per video. Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy. Comment: To be published in the proceedings of the 2020 IEEE/IAPR International Joint Conference on Biometrics (IJCB)
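The "PPG cell" construction mentioned above can be sketched as follows: divide the face region of each frame into a grid, track the mean green-channel intensity of each grid cell over time, and stack those traces into a fixed-size 2-D map suitable for a CNN. This is a hedged illustration under assumptions (grid size, clip length, and the use of raw green-channel means); the paper's actual pipeline adds filtering and spectral features that are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch of a "PPG cell": per-grid-cell green-channel intensity
# traces over time, stacked into a (cells x frames) map. Grid size and
# clip length are assumptions for illustration, not the paper's values.

def ppg_cell(frames, grid=(4, 4)):
    """frames: (T, H, W, 3) uint8 clip -> (grid_h * grid_w, T) PPG map."""
    T, H, W, _ = frames.shape
    gh, gw = grid
    cells = []
    for i in range(gh):
        for j in range(gw):
            # green channel (index 1) of one spatial cell, averaged per frame
            patch = frames[:, i * H // gh:(i + 1) * H // gh,
                           j * W // gw:(j + 1) * W // gw, 1]
            cells.append(patch.reshape(T, -1).mean(axis=1))
    return np.stack(cells)

# random stand-in for a cropped 64-frame face clip
clip = rng.integers(0, 256, size=(64, 32, 32, 3), dtype=np.uint8)
pmap = ppg_cell(clip)
print(pmap.shape)  # → (16, 64): fixed-size input for a downstream classifier
```

The fixed (cells × frames) shape is what makes the representation consumable by an ordinary 2-D classification network, regardless of the original video resolution.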

    Impact and Detection of Facial Beautification in Face Recognition: An Overview

    Get PDF
    Facial beautification induced by plastic surgery, cosmetics or retouching has the ability to substantially alter the appearance of face images. Such types of beautification can negatively affect the accuracy of face recognition systems. In this work, a conceptual categorisation of beautification is presented, relevant scenarios with respect to face recognition are discussed, and related publications are revisited. Additionally, technical considerations and trade-offs of the surveyed methods are summarized, along with open issues and challenges in the field. This survey is intended to provide a comprehensive point of reference for biometric researchers and practitioners working in the field of face recognition who aim to tackle challenges caused by facial beautification.

    VideoForensicsHQ: Detecting High-quality Manipulated Face Videos

    Get PDF
    There are concerns that new approaches to the synthesis of high-quality face videos may be misused to manipulate videos with malicious intent. The research community has therefore developed methods for the detection of modified footage and assembled benchmark datasets for this task. In this paper, we examine how the performance of forgery detectors depends on the presence of artefacts that the human eye can see. We introduce a new benchmark dataset for face video forgery detection of unprecedented quality. It allows us to demonstrate that existing detection techniques have difficulties detecting fakes that reliably fool the human eye. We thus introduce a new family of detectors that examine combinations of spatial and temporal features and outperform existing approaches both in terms of detection accuracy and generalization. Comment: ICME 2021 camera-ready

    What Influences Influencers? Hiding Popularity Signals and Influencer Behavior

    Get PDF
    The burgeoning popularity of social media has shifted how users share and seek information through online platforms. Social media users are often motivated to show the "perfect side" of themselves on a platform, sharing manipulated appearances and positive aspects of their lives in order to garner more "likes" when comparing their popularity to others. Thus, social media users may often face inauthentic information, which may affect their behavior on the platform. In this study, we utilize a change in Instagram policy (hiding the number of likes on the platform) that started in September 2019 in East Asia. Specifically, we examine influencers' post-generating behavior and post characteristics (e.g., whether a post focuses on the product vs. the influencers themselves, and the degree of image manipulation). The results show that the number of endorsement postings increases, and influencers are more likely to generate influencer-focused postings, after the intervention. In addition, we find that such effects are accentuated when influencers have a larger follower base. Lastly, our findings suggest that the economic benefit (e.g., total weekly sales) that influencers gain increases after the intervention; however, such an effect is attenuated for influencers with a larger number of followers.