
    A Survey of Iris Recognition System

    The uniqueness of iris texture makes it one of the most reliable physiological biometric traits compared to other biometric traits. In this paper, we investigate different levels of fusion for iris images. Although a number of iris recognition methods have been proposed in recent years, most of them focus on feature extraction and classification; far fewer address the information fusion of iris images. Fusion is believed to produce better discrimination power in the feature space, so we conduct an analysis to determine which fusion level produces the best result for an iris recognition system. Experimental analysis using the CASIA dataset shows that feature-level fusion produces 99% recognition accuracy. The verification analysis shows a best result of GAR = 95% at FRR = 0.1.
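The abstract above does not specify its fusion operator, but feature-level fusion is commonly implemented by normalizing and concatenating feature vectors before classification. A minimal sketch, assuming z-score normalization followed by concatenation (the helper name and toy vectors are hypothetical):

```python
import numpy as np

def feature_level_fusion(feat_a, feat_b):
    """Fuse two feature vectors at the feature level: z-score normalize
    each vector so neither dominates by scale, then concatenate.
    Hypothetical helper; the paper does not specify its fusion operator."""
    a = (feat_a - feat_a.mean()) / (feat_a.std() + 1e-8)
    b = (feat_b - feat_b.mean()) / (feat_b.std() + 1e-8)
    return np.concatenate([a, b])

# Two toy iris feature vectors on very different scales
f1 = np.array([0.2, 0.8, 0.5, 0.1])
f2 = np.array([10.0, 12.0, 9.0, 11.0])
fused = feature_level_fusion(f1, f2)
print(fused.shape)  # (8,)
```

The fused vector would then be passed to a single matcher, in contrast to score-level fusion, where each feature set is matched separately and the scores are combined afterwards.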

    Multispectral palmprint recognition using Pascal coefficients-based LBP and PHOG descriptors with random sampling

    The local binary pattern (LBP) algorithm and its variants have been used extensively, and with great success, to analyse the local textural features of digital images. Numerous extensions of LBP descriptors have been suggested, focusing on improving their robustness to noise and to changes in image conditions. In our research, inspired by the concepts of LBP feature descriptors and random sampling subspaces, we propose an ensemble learning framework using a variant of LBP constructed from Pascal's coefficients of order n, referred to as a multiscale local binary pattern. To address the inherent overfitting problem of linear discriminant analysis, PCA was applied to the training samples. Random sampling was used to generate multiple feature subsets. In addition, we propose a new feature extraction technique that combines the pyramid histogram of oriented gradients (PHOG) and LBP, where the features are concatenated for use in classification. Recognition performance was evaluated using the Hong Kong Polytechnic University database. Extensive experiments clearly show the superiority of the proposed approach compared to state-of-the-art techniques.
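For readers unfamiliar with the basic LBP operator that the multiscale variant above builds on: each pixel is encoded by thresholding its 8 neighbours against the centre value and packing the results into one byte. A minimal sketch of the classic 3x3 case (the Pascal-coefficient weighting of the paper is not reproduced here):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and pack the binary results into one byte (clockwise from top-left)."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= c:           # neighbour at least as bright as centre -> bit set
            code |= 1 << i
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # 241
```

Sliding this operator over an image and histogramming the codes yields the holistic texture descriptor that LBP variants, including the multiscale one above, refine.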

    Subspace Representations for Robust Face and Facial Expression Recognition

    Analyzing human faces and modeling their variations have always been of interest to the computer vision community. Face analysis based on 2D intensity images is a challenging problem, complicated by variations in pose, lighting, blur, and non-rigid facial deformations due to facial expressions. Among the different sources of variation, facial expressions are of interest as important channels of non-verbal communication. Facial expression analysis is also affected by changes in view-point and by inter-subject variations in performing different expressions. This dissertation attempts to address some of the challenges involved in developing robust algorithms for face and facial expression recognition by exploiting the idea of proper subspace representations for data. Variations in the visual appearance of an object mostly arise due to changes in illumination and pose. We therefore first present a video-based sequential algorithm for estimating the face albedo as an illumination-insensitive signature for face recognition. We show that by knowing or estimating the pose of the face at each frame of a sequence, the albedo can be efficiently estimated using a Kalman filter. We then extend this to the case of unknown pose by simultaneously tracking the pose and updating the albedo through an efficient Bayesian inference method performed using a Rao-Blackwellized particle filter. Since understanding the effects of blur, especially motion blur, is an important problem in unconstrained visual analysis, we then propose a blur-robust recognition algorithm for faces with spatially varying blur. We model a blurred face as a weighted average of geometrically transformed instances of its clean face. We then build a matrix, for each gallery face, whose column space spans the space of all the motion-blurred images obtained from the clean face. This matrix representation is then used to define a proper objective function and perform blur-robust face recognition.
To develop robust and generalizable models for expression analysis, one needs to break the dependence of the models on the choice of the coordinate frame of the camera. To this end, we build models for expressions on the affine shape-space (Grassmann manifold), as an approximation to the projective shape-space, by using a Riemannian interpretation of the deformations that facial expressions cause on different parts of the face. This representation enables us to run various expression analysis and recognition algorithms without pose normalization as a preprocessing step. There is a large degree of inter-subject variation in performing various expressions, which poses an important challenge to developing robust facial expression recognition algorithms. To address this challenge, we propose a dictionary-based approach for facial expression analysis by decomposing expressions in terms of action units (AUs). First, we construct an AU-dictionary using domain experts' knowledge of AUs. To incorporate the high-level knowledge regarding expression decomposition and AUs, we then perform structure-preserving sparse coding by imposing two layers of grouping over the AU-dictionary atoms as well as over the test image matrix columns. We use the computed sparse code matrix for each expressive face to perform expression decomposition and recognition. Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. We propose joint face and facial expression recognition using a dictionary-based component separation (DCS) algorithm. In this approach, the given expressive face is viewed as a superposition of a neutral face component and a facial expression component, which is sparse with respect to the whole image.
This assumption leads to a dictionary-based component separation algorithm, which benefits from the ideas of sparsity and morphological diversity. The DCS algorithm uses data-driven dictionaries to decompose an expressive test face into its constituent components. The sparse codes obtained as a result of this decomposition are then used for joint face and expression recognition.
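The sparse coding underlying the dictionary-based approaches above amounts to solving min_s 0.5||x - Ds||^2 + lam||s||_1 for a signal x and dictionary D. A minimal sketch using plain iterative shrinkage-thresholding (ISTA); the dissertation's structure-preserving grouping over atoms is omitted, and the toy dictionary is synthetic:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding: solve
    min_s 0.5 * ||x - D s||^2 + lam * ||s||_1.
    Plain-vanilla sketch without the structure-preserving grouping."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ s - x)                  # gradient of the quadratic data term
        z = s - g / L                          # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
s_true = np.zeros(10)
s_true[[2, 7]] = [1.5, -2.0]                   # a 2-sparse ground-truth code
x = D @ s_true
s_hat = ista(D, x)
print(np.argsort(-np.abs(s_hat))[:2])          # the two dominant atoms
```

In the DCS setting, D would be split into identity-related and expression-related sub-dictionaries, and the sparse code s would be partitioned accordingly to separate the two components.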

    Enhanced CNN for image denoising

    Owing to their flexible architectures, deep convolutional neural networks (CNNs) have been used successfully for image denoising. However, they suffer from the following drawbacks: (i) deep network architectures are very difficult to train; (ii) deeper networks face the challenge of performance saturation. In this study, the authors propose a novel method called the enhanced convolutional neural denoising network (ECNDNet). Specifically, they use residual learning and batch normalisation techniques to address the problem of training difficulty and accelerate the convergence of the network. In addition, dilated convolutions are used in the proposed network to enlarge the context information and reduce the computational cost. Extensive experiments demonstrate that ECNDNet outperforms state-of-the-art methods for image denoising. Comment: CAAI Transactions on Intelligence Technology [J], 201
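The point of the dilated convolutions mentioned above is that spacing the kernel taps by a dilation factor d enlarges the receptive field from k to (k - 1) * d + 1 inputs without adding parameters. A minimal 1-D numpy sketch of the mechanism (not ECNDNet itself, which is a full 2-D network):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D correlation with a dilated kernel. With dilation d,
    a kernel of size k covers a span of (k - 1) * d + 1 input samples."""
    k = len(w)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])                      # 3-tap kernel, 3 parameters
print(len(dilated_conv1d(x, w, dilation=1)))       # span 3 -> 8 outputs
print(len(dilated_conv1d(x, w, dilation=2)))       # span 5 -> 6 outputs
```

Stacking layers with growing dilation (e.g. 1, 2, 4, ...) lets a denoising network aggregate context from a wide neighbourhood at the same parameter cost, which is the trade-off the abstract refers to.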

    False-positive reduction in mammography using multiscale spatial Weber law descriptor and support vector machines

    In a CAD system for the detection of masses, segmentation of mammograms yields regions of interest (ROIs) that include not only true masses but also suspicious normal tissues, which result in false positives. In this paper, we introduce a new method for false-positive reduction. The key idea of our approach is to exploit the textural properties of mammograms and, for texture description, to use the Weber law descriptor (WLD), which outperforms the best state-of-the-art texture descriptors. The basic WLD is holistic by construction, because it integrates the local information content into a single histogram that does not take into account the spatial locality of micropatterns. We extend it into a multiscale spatial WLD (MSWLD) that better characterizes the texture microstructures of masses by incorporating the spatial locality and scale of the microstructures. Because the dimension of the feature space generated by the MSWLD becomes high, it is reduced by selecting features based on their significance. Finally, support vector machines are employed to classify ROIs as true masses or normal parenchyma. The proposed approach is evaluated using 1024 ROIs taken from the Digital Database for Screening Mammography, and an accuracy of Az = 0.99 ± 0.003 (area under the receiver operating characteristic curve) is obtained. A comparison reveals that the proposed method offers a significant improvement over the best state-of-the-art methods for the false-positive reduction problem.
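The WLD mentioned above is built from a "differential excitation" component inspired by Weber's law: the perceived change at a pixel is the ratio of the total neighbour difference to the centre intensity, passed through arctan. A minimal sketch for a single 3x3 patch (the full descriptor also computes a gradient-orientation component and histograms both over the image):

```python
import numpy as np

def differential_excitation(patch):
    """WLD differential excitation for a 3x3 patch:
    xi = arctan( sum_i (x_i - x_c) / x_c ),
    i.e. the Weber-fraction-style ratio of total neighbour difference
    to the centre intensity, squashed by arctan."""
    c = float(patch[1, 1])
    diff = patch.sum() - 9 * c        # sum of (x_i - x_c) over the 8 neighbours
    return np.arctan(diff / (c + 1e-8))

flat = np.full((3, 3), 10.0)
print(differential_excitation(flat))  # flat patch -> excitation 0.0

bright = flat.copy()
bright[0, 0] = 50.0                   # one much brighter neighbour
print(differential_excitation(bright) > 0)  # True
```

The multiscale spatial extension in the paper would compute such responses over several neighbourhood sizes and histogram them per spatial block, rather than into a single global histogram.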

    Curvelet Based Feature Extraction
