
    Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Features used in fingerprint segmentation significantly affect segmentation performance. Different features exhibit different discriminating abilities on fingerprint images acquired by different sensors: a feature that discriminates well on images from one sensor may fail to segment images from another, which degrades segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation features, that is, a feature's ability to adapt to raw fingerprints captured by different sensors. To address this issue, the paper presents a two-level feature evaluation method comprising a first-level evaluation based on segmentation error rate and a second-level evaluation based on a decision tree. The proposed method is applied to a number of fingerprint databases obtained from various sensors. Experimental results show that the method effectively evaluates the sensor interoperability of features, and that features with good evaluation results achieve better segmentation accuracy on images originating from different sensors.
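
    As a rough illustration of such a two-level evaluation, the sketch below (Python with scikit-learn assumed; the per-block features, labels, thresholds, and tree depth are hypothetical, not the paper's exact procedure) first scores each feature by its best thresholded segmentation error rate and then fits a decision tree over the features taken together.

    # Illustrative sketch of a two-level feature evaluation (not the authors' exact procedure).
    # Level 1: segmentation error rate of a single feature thresholded per block.
    # Level 2: a decision tree fitted on several features to gauge their joint discriminating power.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    def error_rate(feature_values, labels, threshold):
        """Fraction of blocks misclassified when a single feature is thresholded."""
        pred = (feature_values >= threshold).astype(int)   # 1 = foreground, 0 = background
        return np.mean(pred != labels)

    # Hypothetical per-block data for one sensor: rows = blocks, columns = features
    # (e.g. mean, variance, coherence); labels = ground-truth foreground/background.
    X = np.random.rand(1000, 3)
    y = (X[:, 1] > 0.5).astype(int)

    # Level 1: best achievable error rate of each feature on this sensor's images.
    for j in range(X.shape[1]):
        errs = [error_rate(X[:, j], y, t) for t in np.linspace(0, 1, 50)]
        print(f"feature {j}: min error rate = {min(errs):.3f}")

    # Level 2: decision-tree evaluation of the features taken together.
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    print("tree CV accuracy:", cross_val_score(tree, X, y, cv=5).mean())
    print("feature importances:", tree.fit(X, y).feature_importances_)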

    Imaging time series for the classification of EMI discharge sources

    In this work, we aim to classify a wider range of Electromagnetic Interference (EMI) discharge sources collected from new power plant sites across multiple assets, which makes for a more complex and challenging classification task. The study involves the investigation and development of new and improved feature extraction and data dimension reduction algorithms based on image processing techniques. The approach exploits the Gramian Angular Field technique to map each measured EMI time signal to an image, from which the significant information is extracted while redundancy is removed; the image of each discharge type contains a unique fingerprint. Two feature reduction methods, the Local Binary Pattern (LBP) and the Local Phase Quantisation (LPQ), are then applied to the mapped images. This yields feature vectors that are fed into a Random Forest (RF) classifier. The performance of a previous method and the two new proposed methods on the new database is compared in terms of classification accuracy, precision, recall, and F-measure. Results show that the new methods outperform the previous one, with LBP features achieving the best outcome.
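
    The pipeline described above could be sketched roughly as follows (Python with the pyts, scikit-image, and scikit-learn packages assumed; signal sizes, LBP parameters, and forest settings are illustrative, and the LPQ branch is omitted for brevity since it has no standard scikit-image implementation).

    # Sketch of the GAF -> LBP -> Random Forest pipeline described above; parameters are illustrative.
    import numpy as np
    from pyts.image import GramianAngularField
    from skimage.feature import local_binary_pattern
    from sklearn.ensemble import RandomForestClassifier

    def lbp_histogram(image, n_points=8, radius=1):
        """Uniform LBP histogram of one GAF image, used as a compact feature vector."""
        img = np.uint8(255 * (image - image.min()) / (np.ptp(image) + 1e-12))
        lbp = local_binary_pattern(img, n_points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
        return hist

    # Hypothetical EMI time signals: rows = measurements, columns = time samples.
    signals = np.random.randn(200, 512)
    labels = np.random.randint(0, 4, size=200)           # e.g. 4 discharge source types

    # Map each 1-D signal to a 2-D image with the Gramian Angular (summation) Field.
    gaf = GramianAngularField(image_size=64, method="summation")
    images = gaf.fit_transform(signals)                  # shape (200, 64, 64)

    # Reduce each image to an LBP histogram and classify with a Random Forest.
    features = np.array([lbp_histogram(img) for img in images])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)
    print("training accuracy:", clf.score(features, labels))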

    Non-Minutia Fingerprint Recognition Based on Segmentation

    Biometric identification of a person has advantages over traditional techniques, and the fingerprint is a widely used biometric for identifying and authenticating a person. In this paper we propose the Non-Minutia Fingerprint Recognition based on Segmentation (NMFRS) algorithm. The fingerprint is segmented into 8×8 blocks and the variance of each block is determined. The Area of Interest (AOI) is obtained by removing the blocks with minimal variance. Fingerprint features are obtained by applying the Discrete Cosine Transform (DCT) to the AOI, which is then divided into major and minor non-overlapping blocks whose variances are computed. The proposed algorithm achieves a better recognition rate than existing algorithms.
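
    A minimal sketch of the segmentation step, assuming NumPy/SciPy and an illustrative variance threshold rather than the paper's exact values, is given below: low-variance 8×8 blocks are discarded to form the AOI, and a 2-D DCT of the AOI produces the frequency-domain features.

    # Illustrative sketch of block variance -> AOI -> DCT (threshold and block size are assumptions).
    import numpy as np
    from scipy.fft import dctn

    def block_variance_aoi(image, block=8, var_thresh=1e-3):
        """Split the image into block x block tiles, keep only tiles with sufficient variance."""
        h, w = image.shape
        mask = np.zeros_like(image, dtype=bool)
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                tile = image[r:r + block, c:c + block]
                if tile.var() > var_thresh:              # low-variance blocks are background
                    mask[r:r + block, c:c + block] = True
        return mask

    # Hypothetical normalized fingerprint image.
    fingerprint = np.random.rand(256, 256)
    aoi_mask = block_variance_aoi(fingerprint)

    # Keep only the Area of Interest and take its 2-D DCT as a frequency-domain feature image.
    aoi = fingerprint * aoi_mask
    dct_features = dctn(aoi, norm="ortho")
    print("AOI coverage:", aoi_mask.mean(), "DCT shape:", dct_features.shape)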

    Deep Learning Architectures for Heterogeneous Face Recognition

    Face recognition has been one of the most challenging areas of research in biometrics and computer vision. Many face recognition algorithms are designed to address illumination and pose problems for visible face images. In recent years there has been a significant amount of research in Heterogeneous Face Recognition (HFR). The large modality gap between faces captured in different spectra, as well as the lack of training data, makes HFR quite a challenging problem. In this work, we present different deep learning frameworks to address the problem of matching non-visible face photos against a gallery of visible faces. Algorithms for thermal-to-visible face recognition can be categorized as cross-spectrum feature-based methods or cross-spectrum image synthesis methods. In cross-spectrum feature-based face recognition, a thermal probe is matched in a feature subspace against a gallery of visible faces, corresponding to the real-world scenario. The second category synthesizes a visible-like image from a thermal image, which can then be used by any commercial visible-spectrum face recognition system. These methods are also beneficial in that the synthesized visible face image can be directly utilized by existing face recognition systems which operate only on visible face imagery; one can therefore leverage existing commercial-off-the-shelf (COTS) and government-off-the-shelf (GOTS) solutions, and the synthesized images can also be used by human examiners for different purposes.

    Some informative traits, such as age, gender, ethnicity, race, and hair color, are not distinctive enough for recognition on their own but can act as complementary information to primary traits such as face and fingerprint. These traits, known as soft biometrics, can improve recognition algorithms while being much cheaper and faster to acquire, and they can be used directly in a unimodal system for some applications. Usually, soft biometric traits are utilized jointly with hard biometrics (face photos), in the sense that they are assumed to be available during both the training and testing phases. We look at this problem differently: we consider the case where soft biometric information does not exist during the testing phase, and our method predicts it directly in a multi-tasking paradigm. There are also situations in which training data come equipped with additional information that can be modeled as an auxiliary view of the data but that, unfortunately, is not available during testing; this is the Learning Using Privileged Information (LUPI) scenario. We introduce a novel framework based on deep learning techniques that leverages the auxiliary view to improve the performance of the recognition system. We do so by introducing a formulation that is general, in the sense that it can be used with any visual classifier. Every use of auxiliary information has been validated extensively using publicly available benchmark datasets, and several new state-of-the-art accuracy values have been set. Example application domains include visual object recognition from RGB images and from depth data, handwritten digit recognition, and gesture recognition from video.

    We also design a novel aggregation framework which optimizes landmark locations directly, using only one image and without requiring any extra prior, leading to robust alignment under arbitrary face deformations. Three different approaches are employed to generate the manipulated faces, two of which perform the manipulation via adversarial attacks that fool a face recognizer. This step can be decoupled from our framework and potentially used to enhance other landmark detectors. Aggregation of the manipulated faces in the different branches of the proposed method leads to robust landmark detection.

    Finally, we focus on generative adversarial networks, a very powerful tool for synthesizing visible-like images from non-visible images. The main goal of a generative model is to approximate the true data distribution, which is not known, and in general the choice of how to model the density function is challenging. Explicit models have the advantage of explicitly calculating probability densities. Two well-known alternative approaches, the Generative Adversarial Network (GAN) and the Variational AutoEncoder (VAE), model the data distribution implicitly: a VAE maximizes a lower bound on the data likelihood, while a GAN performs a minimax game between two players during its optimization. GANs overlook explicit data density characteristics, which leads to undesirable quantitative evaluations and to mode collapse, causing the generator to create similar-looking images with poor sample diversity. The last chapter of the thesis addresses this issue within the GAN framework.
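
    As a hedged illustration of the minimax game mentioned above, the following sketch (PyTorch assumed; the toy fully connected networks and hyper-parameters are illustrative and not the architectures used in the thesis) shows one alternating discriminator/generator update with the common non-saturating generator loss.

    # Minimal sketch of the GAN minimax game: D maximizes, G minimizes the same objective.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 64, 784
    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
    D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real_batch):
        """One round of the two-player game."""
        b = real_batch.size(0)
        # Discriminator step: push real samples toward 1, generated samples toward 0.
        fake = G(torch.randn(b, latent_dim)).detach()
        loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator step: fool D into labeling fakes as real (non-saturating loss).
        fake = G(torch.randn(b, latent_dim))
        loss_g = bce(D(fake), torch.ones(b, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

    print(train_step(torch.rand(32, data_dim) * 2 - 1))   # hypothetical batch of real data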

    Towards a Robust Thermal-Visible Heterogeneous Face Recognition Approach Based on a Cycle Generative Adversarial Network

    Security is a sensitive area that concerns authorities around the world due to the emerging terrorism phenomenon. Contactless biometric technologies such as face recognition have grown in interest for their capacity to identify probe subjects without any human interaction. Since traditional face recognition systems use visible-spectrum sensors, their performance decreases rapidly when visible imaging phenomena occur, mainly illumination changes. Unlike the visible spectrum, infrared spectra are invariant to lighting changes, which makes them an alternative solution for face recognition; however, textural information is lost in the infrared. In this paper we aim to benefit from both the visible and thermal spectra by proposing a new heterogeneous face recognition approach comprising four scientific contributions. The first is the annotation of a thermal face database, which has been shared with the scientific community via GitHub. The second is a multi-sensor face detector model based on the latest YOLO v3 architecture, able to detect faces captured in visible and thermal images simultaneously. The third contribution takes up the challenge of reducing the modality gap between the visible and thermal spectra by applying a new CycleGAN structure, called TV-CycleGAN, which synthesizes visible-like face images from thermal face images. This new thermal-visible synthesis method handles extreme poses and facial expressions in color space. To show the efficacy and robustness of the proposed TV-CycleGAN, experiments have been performed on three challenging benchmark databases covering different real-world scenarios: TUFTS and its aligned version, NVIE, and PUJ. The qualitative evaluation shows that our method generates more realistic faces, and the quantitative one demonstrates that the proposed TV-CycleGAN gives the best improvement in face recognition rates. Whereas direct matching from thermal to visible images yields a recognition rate of 47.06% on the TUFTS database, the proposed TV-CycleGAN reaches an accuracy of 57.56% on the same database. It contributes rate enhancements of 29.16% and 15.71% on the NVIE and PUJ databases, respectively, and an accuracy enhancement of 18.5% on the aligned TUFTS database. It also outperforms some recent state-of-the-art methods in terms of F1-score, AUC/EER, and other evaluation metrics. Furthermore, the visible synthesized face images obtained with the TV-CycleGAN method are very promising for thermal facial landmark detection, which constitutes the fourth contribution of this paper.
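
    The cycle-consistency idea at the heart of any CycleGAN-style thermal-visible mapping can be sketched as follows (PyTorch assumed; the toy convolutional generators, loss weight, and image sizes are placeholders, not the TV-CycleGAN design).

    # Sketch of the cycle-consistency constraint used by CycleGAN-style translation.
    import torch
    import torch.nn as nn

    # G_tv: thermal -> visible-like, G_vt: visible -> thermal-like (toy generators).
    G_tv = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 3, padding=1))
    G_vt = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
    l1 = nn.L1Loss()

    def cycle_loss(thermal, visible, lam=10.0):
        """Images translated to the other domain and back should match the originals."""
        rec_thermal = G_vt(G_tv(thermal))     # thermal -> visible-like -> thermal again
        rec_visible = G_tv(G_vt(visible))     # visible -> thermal-like -> visible again
        return lam * (l1(rec_thermal, thermal) + l1(rec_visible, visible))

    thermal_batch = torch.rand(2, 1, 64, 64)  # hypothetical thermal faces
    visible_batch = torch.rand(2, 3, 64, 64)  # hypothetical visible faces
    print("cycle-consistency loss:", cycle_loss(thermal_batch, visible_batch).item())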

    Remote sensing in support of conservation and management of heathland vegetation


    Development of polarization-resolved optical scanning microscopy imaging techniques to study biomolecular organizations

    Light, as electromagnetic radiation, conveys energy through space and time via fluctuations in electric and magnetic fields. This thesis explores the interaction of light and biological structures through polarization-resolved imaging techniques; light microscopy and polarization analysis enable the examination of biological entities. Biological function often centers on chromatin, the genetic material composed of DNA wrapped around histone proteins within cell nuclei, and this structure's chiral nature gives rise to interactions with polarized light. The research encompasses three main aspects. Firstly, an existing multimodal Circular Intensity Differential Scattering (CIDS) and fluorescence microscope is upgraded to an open configuration so that it can be integrated with other modalities. Secondly, a novel cell classification method employing CIDS and a phasor representation is introduced. Thirdly, polarization analysis of fluorescence emission is employed for pathological investigations. Accordingly, the thesis is organized into three chapters. Chapter 1 lays the theoretical foundation for light propagation and polarization, outlining the Jones and Stokes-Mueller formalisms; it discusses the interaction between light and optical elements, transmission and reflection processes, polarized light's ability to reveal image contrast in polarizing microscopes, linear and nonlinear polarization-resolved microscopy, and Mueller matrix microscopy as a comprehensive technique for studying biological structures. Chapter 2 focuses on CIDS, a label-free light scattering method, including a single-point angular spectroscopy mode and scanning microscopy imaging. A significant upgrade of the setup is achieved, incorporating automation, calibration, and statistical analysis routines, and an intuitive phasor approach is proposed, enabling image segmentation, cell discrimination, and enhanced interpretation of polarimetric contrast. As a result, image processing programs have been developed to provide automated measurements using polarization-resolved laser scanning microscopy imaging integrated with confocal fluorescence microscopy of cells and of chromatin inside cell nuclei, including the use of new types of samples such as progeria cells. Chapter 3 applies polarization-resolved two-photon excitation fluorescence (2PEF) microscopy to the study of multicellular cancerous cells; a homemade 2PEF microscope is developed for colon cancer cell analysis. The integration of polarization and fluorescence techniques leads to a comprehensive understanding of molecular orientation within samples, which is particularly useful for cancer diagnosis. Overall, this thesis presents an exploration of polarization-resolved imaging techniques for studying biological structures, encompassing theory, experimental enhancements, innovative methodologies, and practical applications.
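
    A generic phasor transform of the kind referred to above can be sketched as follows (Python/NumPy assumed; the input is a hypothetical per-pixel angular spectrum and the first-harmonic definition is a common convention, not necessarily the thesis's exact formulation): each spectrum is mapped to (g, s) coordinates so that differently shaped responses land at different points of the phasor plot.

    # Generic first-harmonic phasor transform of a measured spectrum (illustrative only).
    import numpy as np

    def phasor(signal):
        """Map a 1-D spectrum to first-harmonic phasor coordinates (g, s)."""
        n = len(signal)
        x = np.arange(n)
        total = signal.sum()
        g = (signal * np.cos(2 * np.pi * x / n)).sum() / total
        s = (signal * np.sin(2 * np.pi * x / n)).sum() / total
        return g, s

    # Two synthetic spectra with different shapes map to different phasor coordinates,
    # which is what makes the representation usable for segmentation and cell discrimination.
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    print(phasor(1.0 + 0.5 * np.cos(angles)))
    print(phasor(1.0 + 0.5 * np.sin(2 * angles)))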