
    Bayesian Adaptive Bandwidth Kernel Density Estimation of Irregular Multivariate Distributions

    Kernel density estimation is an important technique for understanding the distributional properties of data. Some investigations have found that the estimation of a global bandwidth can be heavily affected by observations in the tail. We propose to categorize data into low- and high-density regions, to which we assign two different bandwidths, called low-density adaptive bandwidths. We derive the posterior of the bandwidth parameters through the Kullback-Leibler information. A Bayesian sampling algorithm is presented to estimate the bandwidths. Monte Carlo simulations are conducted to examine the performance of the proposed Bayesian sampling algorithm in comparison with the normal reference rule and a Bayesian sampling algorithm for estimating a global bandwidth. According to Kullback-Leibler information, the kernel density estimator with low-density adaptive bandwidths estimated through the proposed Bayesian sampling algorithm outperforms the density estimators with bandwidths estimated through the two competitors. We apply the low-density adaptive kernel density estimator to the estimation of the bivariate density of daily stock-index returns observed from the U.S. and Australian stock markets. The derived conditional distribution of the Australian stock-index return for a given daily return in the U.S. market enables market analysts to understand how the former market is associated with the latter.
    Keywords: conditional density; global bandwidth; Kullback-Leibler information; marginal likelihood; Markov chain Monte Carlo; S&P500 index
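    The two-bandwidth idea described in this abstract can be sketched as follows. This is a minimal illustrative version: the fixed bandwidths and the density cutoff are hypothetical values chosen for the example (the paper estimates the bandwidths with a Bayesian MCMC sampler), and a simple pilot density estimate stands in for the low/high-density classification.

```python
import numpy as np

def two_bandwidth_kde(data, x_grid, h_low, h_high, cutoff=0.5):
    """Gaussian KDE with two bandwidths: observations classified as
    low-density get h_low, the rest get h_high. Illustrative only."""
    n = len(data)
    # Pilot estimate with a single normal-reference-rule bandwidth
    h0 = 1.06 * np.std(data) * n ** (-0.2)
    pilot = np.array([
        np.exp(-0.5 * ((x - data) / h0) ** 2).mean() / (h0 * np.sqrt(2 * np.pi))
        for x in data
    ])
    # An observation is "low-density" if its pilot density falls below a
    # fraction of the maximum (a hypothetical classification rule)
    h = np.where(pilot < cutoff * pilot.max(), h_low, h_high)
    # Adaptive estimate: each observation contributes with its own bandwidth
    dens = np.zeros_like(x_grid, dtype=float)
    for xi, hi in zip(data, h):
        dens += np.exp(-0.5 * ((x_grid - xi) / hi) ** 2) / (hi * np.sqrt(2 * np.pi))
    return dens / n

rng = np.random.default_rng(0)
sample = rng.standard_normal(500)
grid = np.linspace(-5.0, 5.0, 401)
f_hat = two_bandwidth_kde(sample, grid, h_low=0.6, h_high=0.25)
```

    Using a wider bandwidth for sparse tail observations smooths out spurious bumps there, which is the motivation the abstract gives for treating low- and high-density regions differently.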

    Polarimetric Thermal to Visible Face Verification via Self-Attention Guided Synthesis

    Polarimetric thermal to visible face verification entails matching two images that contain significant domain differences. Several recent approaches have attempted to synthesize visible faces from thermal images for cross-modal matching. In this paper, we take a different approach: rather than focusing only on synthesizing visible faces from thermal faces, we also propose to synthesize thermal faces from visible faces. Our intuition is based on the fact that thermal images also contain some discriminative information about the person for verification. Deep features from a pre-trained Convolutional Neural Network (CNN) are extracted from the original as well as the synthesized images. These features are then fused to generate a template, which is then used for verification. The proposed synthesis network is based on the self-attention generative adversarial network (SAGAN), which allows efficient attention-guided image synthesis. Extensive experiments on the ARL polarimetric thermal face dataset demonstrate that the proposed method achieves state-of-the-art performance.
    Comment: This work is accepted at the 12th IAPR International Conference on Biometrics (ICB 2019).
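    The fuse-then-verify step mentioned in the abstract might look like the following sketch. The feature vectors here are random stand-ins for deep CNN features, and concatenation of L2-normalized features followed by cosine similarity is one simple fusion/matching choice assumed for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

def fuse(feat_original, feat_synthesized):
    """Build a template by L2-normalizing and concatenating the deep
    features of the original and synthesized images (an illustrative
    fusion rule; the paper's exact scheme may differ)."""
    def unit(v):
        return v / np.linalg.norm(v)
    return unit(np.concatenate([unit(feat_original), unit(feat_synthesized)]))

def verify(template_a, template_b):
    """Cosine similarity between two fused templates, in [-1, 1]."""
    return float(template_a @ template_b)

rng = np.random.default_rng(1)
# Stand-ins for deep CNN features of a gallery image pair and two probes
gallery = fuse(rng.standard_normal(512), rng.standard_normal(512))
probe_same = gallery.copy()      # same subject: identical features here
probe_diff = fuse(rng.standard_normal(512), rng.standard_normal(512))
score_same = verify(gallery, probe_same)
score_diff = verify(gallery, probe_diff)
```

    A verification decision would then threshold the similarity score; fusing features of both the real and the synthesized image lets the template keep the discriminative information the abstract says thermal images carry.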

    Cross-Domain Identification for Thermal-to-Visible Face Recognition

    Recent advances in domain adaptation, especially those applied to heterogeneous facial recognition, typically rely upon restrictive Euclidean loss functions (e.g., the L2 norm) which perform best when images from two different domains (e.g., visible and thermal) are co-registered and temporally synchronized. This paper proposes a novel domain adaptation framework that combines a new feature-mapping sub-network with existing deep feature models based on modified network architectures (e.g., VGG16 or ResNet50). The framework is optimized by introducing new cross-domain identity and domain-invariance loss functions for thermal-to-visible face recognition, which alleviate the requirement for precisely co-registered and synchronized imagery. We provide extensive analysis of the features and loss functions used, and compare the proposed domain adaptation framework with state-of-the-art feature-based domain adaptation models on a difficult dataset containing facial imagery collected at varying ranges, poses, and expressions. Moreover, we analyze the viability of the proposed framework for more challenging tasks, such as non-frontal thermal-to-visible face recognition.
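    As a rough illustration of a domain-invariance penalty of the kind this abstract mentions, the sketch below penalizes the distance between the mean feature embeddings of the two domains. This particular formulation is an assumption chosen for illustration; the paper's actual loss functions may be defined quite differently.

```python
import numpy as np

def domain_invariance_penalty(feats_visible, feats_thermal):
    """Squared Euclidean distance between the per-domain mean embeddings.
    Driving this to zero pushes the two domains toward a shared feature
    space (one simple invariance term, assumed for illustration)."""
    mu_v = feats_visible.mean(axis=0)
    mu_t = feats_thermal.mean(axis=0)
    return float(np.sum((mu_v - mu_t) ** 2))

rng = np.random.default_rng(2)
vis = rng.standard_normal((32, 128))
# Thermal features shifted by a constant offset to mimic a domain gap
th = vis + 0.5
gap = domain_invariance_penalty(vis, th)
no_gap = domain_invariance_penalty(vis, vis)
```

    In practice such a term would be weighted and minimized jointly with an identity loss when training the feature-mapping sub-network, so that features become both domain-agnostic and discriminative.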
