310 research outputs found

    Cycle-consistent generative adversarial neural networks based low quality fingerprint enhancement

    Distortions such as dryness, wetness, blurriness, physical damage and the presence of dots in fingerprints are a detriment to good analysis. Although fingerprint image enhancement is possible through physical measures, such as removing excess grease from the fingerprint or recapturing it after some time, these measures are usually time-consuming and not user-friendly. In some cases, enhancement may not be possible at all if the cause of the distortion is permanent. In this paper, we propose unpaired image-to-image translation using cycle-consistent adversarial networks to translate images from the distorted domain to the undistorted domain, namely dry to not-dry, wet to not-wet, dotted to not-dotted, damaged to not-damaged, and blurred to not-blurred. We use a database of low-quality fingerprint images containing 11,541 samples with dryness, wetness, blurriness, damage and dotted distortions. The database was prepared from real data collected at VISA application centres and was provided for this research by GEYCE Biometrics. To evaluate the proposed enhancement technique, we use a VGG16-based convolutional neural network to assess the percentage of enhanced fingerprint images that are correctly labelled as undistorted. The proposed quality enhancement technique achieved its maximum improvement on wet fingerprints, of which 94% were detected as undistorted after enhancement. © 2020, Springer Science+Business Media, LLC, part of Springer Nature
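
    The key idea that lets this approach train on unpaired distorted/clean fingerprints is the cycle-consistency loss: translating to the other domain and back should reproduce the input. The following is a minimal NumPy sketch of that objective only; the generators `G` and `F` are toy stand-ins (simple scalings), not the paper's networks.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators: G maps the distorted
# domain (e.g. dry) to the clean domain, F maps clean back to distorted.
# In practice these are deep networks; simple array ops are used here so
# the structure of the cycle-consistency objective is easy to see.
def G(x):
    return x * 1.1          # hypothetical distorted -> clean mapping

def F(y):
    return y / 1.1          # hypothetical clean -> distorted mapping

def cycle_consistency_loss(x, y):
    """L_cyc = E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1"""
    forward  = np.abs(F(G(x)) - x).mean()   # x -> clean -> back to x
    backward = np.abs(G(F(y)) - y).mean()   # y -> distorted -> back to y
    return forward + backward

x = np.random.rand(8, 64, 64)   # batch of "distorted" fingerprint images
y = np.random.rand(8, 64, 64)   # batch of "clean" fingerprint images
print(cycle_consistency_loss(x, y))  # near zero: F inverts G exactly here
```

    In a real CycleGAN, this term is added to the adversarial losses of the two discriminators; it is what removes the need for pixel-aligned pairs of distorted and clean prints.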

    Generative Fingerprint Augmentation against Membership Inference Attacks

    This thesis aspires to provide a privacy protection mechanism for neural networks concerning fingerprints. Biometric identifiers, especially fingerprints, have become crucial in the last several years, from banking operations to daily smartphone usage. Using generative adversarial networks (GANs), we train models specialized in compressing and decompressing (codecs) images in order to augment the data these models used during the learning process, providing additional privacy preservation over the identity of the fingerprints found in such datasets. We test and analyze our framework with custom membership inference attacks (MIA) to assess the quality of our defensive mechanism.
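
    Membership inference attacks of the kind used for evaluation typically exploit the gap between a model's behaviour on training members and on unseen samples. The sketch below is a toy threshold-based MIA with synthetic reconstruction errors standing in for a real codec's outputs; the distributions and threshold are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reconstruction errors from a trained codec: samples seen
# during training ("members") tend to be reconstructed better than
# unseen samples ("non-members") -- the signal an MIA exploits.
member_err     = rng.normal(0.10, 0.02, 1000)
non_member_err = rng.normal(0.20, 0.02, 1000)

def mia_predict(errors, threshold=0.15):
    # Guess "member" when the reconstruction error falls below threshold.
    return errors < threshold

# Attack accuracy: fraction of correct membership guesses over both sets.
correct = mia_predict(member_err).sum() + (~mia_predict(non_member_err)).sum()
accuracy = correct / 2000
print(f"attack accuracy: {accuracy:.2f}")
```

    A defence such as generative augmentation aims to shrink the gap between the two error distributions, which drives this attack's accuracy back towards the 0.5 of random guessing.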

    Deep Learning based Fingerprint Presentation Attack Detection: A Comprehensive Survey

    The vulnerabilities of fingerprint authentication systems have raised security concerns when adapting them to highly secure access-control applications. Therefore, Fingerprint Presentation Attack Detection (FPAD) methods are essential for ensuring reliable fingerprint authentication. Owing to the limited generalization capacity of traditional handcrafted approaches, deep learning-based FPAD has become mainstream and has achieved remarkable performance over the past decade. Existing reviews have focused more on handcrafted than on deep learning-based methods and are now outdated. To stimulate future research, we concentrate only on recent deep learning-based FPAD methods. In this paper, we first briefly introduce the most common Presentation Attack Instruments (PAIs) and publicly available fingerprint Presentation Attack (PA) datasets. We then describe existing deep learning FPAD methods by categorizing them into contact, contactless, and smartphone-based approaches. Finally, we conclude the paper by discussing the open challenges at the current stage and emphasizing potential future perspectives. Comment: 29 pages, submitted to ACM Computing Surveys

    Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data


    Mic2Mic: Using Cycle-Consistent Generative Adversarial Networks to Overcome Microphone Variability in Speech Systems

    Mobile and embedded devices increasingly use microphones and audio-based computational models to infer user context. A major challenge in building systems that combine audio models with commodity microphones is guaranteeing their accuracy and robustness in the real world. Besides many environmental dynamics, a primary factor that impacts the robustness of audio models is microphone variability. In this work, we propose Mic2Mic -- a machine-learned system component -- which resides in the inference pipeline of audio models and reduces, at run time, the variability in audio data caused by microphone-specific factors. Two key considerations in the design of Mic2Mic were: a) to decouple the problem of microphone variability from the audio task, and b) to place a minimal burden on end-users to provide training data. With these in mind, we apply the principles of cycle-consistent generative adversarial networks (CycleGANs) to learn Mic2Mic from unlabeled and unpaired data collected from different microphones. Our experiments show that Mic2Mic can recover between 66% and 89% of the accuracy lost to microphone variability for two common audio tasks. Comment: Published at ACM IPSN 201
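
    The system design described above amounts to inserting a learned translation step between audio capture and the task model. In the sketch below, `mic2mic` is a hypothetical stand-in (per-clip normalization) for the learned CycleGAN generator, and `keyword_model` is a made-up downstream model that assumes input in the reference microphone's range; neither is from the paper.

```python
import numpy as np

def mic2mic(audio):
    # Stand-in for the learned generator that maps audio recorded on a
    # deployment microphone to the characteristics of the reference
    # microphone the task model was trained on. A real Mic2Mic model is
    # a neural network trained on unpaired recordings; here we fake a
    # channel correction with per-clip normalization.
    return (audio - audio.mean()) / (audio.std() + 1e-8)

def keyword_model(audio):
    # Hypothetical downstream audio model that expects roughly
    # zero-mean, unit-variance input from the reference microphone.
    return "speech" if np.abs(audio).max() > 1.0 else "silence"

# A clip from a "different" microphone: strong DC offset, tiny gain.
raw = 0.01 * np.random.default_rng(1).standard_normal(16000) + 0.5

print(keyword_model(raw))            # raw clip: offset confuses the model
print(keyword_model(mic2mic(raw)))   # translated clip is in expected range
```

    The design point this illustrates is the decoupling: the task model never changes; only the translation component is retrained per microphone, from unpaired data.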

    An Overview on the Generation and Detection of Synthetic and Manipulated Satellite Images

    Due to the reduction of technological costs and the increase in satellite launches, satellite images are becoming more popular and easier to obtain. Besides serving benevolent purposes, satellite data can also be used maliciously, for example for misinformation. As a matter of fact, satellite images can be easily manipulated using general image editing tools. Moreover, with the surge of Deep Neural Networks (DNNs) that can generate realistic synthetic imagery in various domains, additional threats related to the diffusion of synthetically generated satellite images are emerging. In this paper, we review the State of the Art (SOTA) on the generation and manipulation of satellite images. In particular, we focus both on the generation of synthetic satellite imagery from scratch and on the semantic manipulation of satellite images by means of image-transfer technologies, including the transformation of images obtained from one type of sensor to another. We also describe the forensic detection techniques that have been researched so far to classify and detect synthetic image forgeries. While we focus mostly on forensic techniques explicitly tailored to the detection of AI-generated synthetic content, we also review some methods designed for general splicing detection, which can in principle also be used to spot AI-manipulated images. Comment: 25 pages, 17 figures, 5 tables, APSIPA 202

    Towards a Robust Thermal-Visible Heterogeneous Face Recognition Approach Based on a Cycle Generative Adversarial Network

    Security is a sensitive area that concerns authorities around the world due to the emerging phenomenon of terrorism. Contactless biometric technologies such as face recognition have grown in interest for their capacity to identify probe subjects without any human interaction. Since traditional face recognition systems use visible-spectrum sensors, their performance degrades rapidly when certain visible imaging phenomena occur, mainly illumination changes. Unlike the visible spectrum, infrared spectra are invariant to lighting changes, which makes them an alternative solution for face recognition. In infrared, however, textural information is lost. In this paper, we aim to benefit from both the visible and thermal spectra by proposing a new heterogeneous face recognition approach. This approach includes four scientific contributions. The first is the annotation of a thermal face database, which has been shared with the scientific community via GitHub. The second is a multi-sensor face detector model based on the latest YOLO v3 architecture, able to detect faces captured in visible and thermal images simultaneously. The third contribution takes up the challenge of reducing the modality gap between the visible and thermal spectra by applying a new CycleGAN structure, called TV-CycleGAN, which synthesizes visible-like face images from thermal face images. This new thermal-to-visible synthesis method covers all extreme poses and facial expressions in color space. To show the efficacy and robustness of the proposed TV-CycleGAN, experiments were conducted on three challenging benchmark databases covering different real-world scenarios: TUFTS and its aligned version, NVIE and PUJ. The qualitative evaluation shows that our method generates more realistic faces; the quantitative one demonstrates that the proposed TV-CycleGAN gives the best improvement in face recognition rates.
Whereas direct matching from thermal to visible images yields a recognition rate of 47.06% on the TUFTS database, the proposed TV-CycleGAN achieves an accuracy of 57.56% on the same database. It contributes rate enhancements of 29.16% and 15.71% for the NVIE and PUJ databases, respectively, and an accuracy enhancement of 18.5% for the aligned TUFTS database. It also outperforms some recent state-of-the-art methods in terms of F1-score, AUC/EER and other evaluation metrics. Furthermore, the visible face images synthesized with the TV-CycleGAN method are very promising for thermal facial landmark detection, which constitutes the fourth contribution of this paper.
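
    The EER cited among the evaluation metrics is the operating point at which the false-accept rate (FAR) and false-reject rate (FRR) coincide. A small sketch of its computation, using hypothetical match-score distributions rather than any scores from the paper:

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: the error rate at the threshold where the
    false-accept rate and false-reject rate are closest to equal."""
    best_gap, best_rate = 1.0, 0.5
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float((impostor >= t).mean())  # impostor scores accepted
        frr = float((genuine < t).mean())    # genuine scores rejected
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate

rng = np.random.default_rng(0)
genuine  = rng.normal(0.7, 0.1, 500)  # hypothetical same-identity scores
impostor = rng.normal(0.4, 0.1, 500)  # hypothetical different-identity scores
print(f"EER = {eer(genuine, impostor):.3f}")
```

    A lower EER means the genuine and impostor score distributions are better separated; synthesis methods like the one above are judged by how much they reduce it relative to direct cross-spectrum matching.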

    Artificial Intelligence in the Creative Industries: A Review

    This paper reviews the current state of the art in Artificial Intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically Machine Learning (ML) algorithms, is provided, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs) and Deep Reinforcement Learning (DRL). We categorise creative applications into five groups according to how AI technologies are used: i) content creation, ii) information analysis, iii) content enhancement and post-production workflows, iv) information extraction and enhancement, and v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, machine learning-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of machine learning in domains with fewer constraints, where AI is the 'creator', remain modest. The potential of AI (or its developers) to win awards for its original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the context of the creative industries, maximum benefit from AI will be derived where its focus is human-centric -- where it is designed to augment, rather than replace, human creativity.