
    LBP-based periocular recognition on challenging face datasets

    This work develops a novel face-based matcher composed of a multi-resolution hierarchy of patch-based feature descriptors for periocular recognition, that is, recognition based on the soft tissue surrounding the eye orbit. The novel patch-based framework is compared against other feature descriptors and a commercial full-face recognition system on four uniquely challenging face corpora. The proposed framework, the hierarchical three-patch local binary pattern, is compared against the three-patch local binary pattern and the uniform local binary pattern on the soft-tissue area around the eye orbit. Each challenge set was chosen for its particular non-ideal face representations, which can be summarized as matching under pose, illumination, expression, aging, and occlusion variations. The MORPH corpus consists of two mug-shot datasets, Album 1 and Album 2. Album 1 is the more challenging of the two because it incorporates legacy print photographs captured with a variety of cameras from the late 1960s to the 1990s. The second challenge dataset is the FRGC still-image set. The third corpus, the Georgia Tech face database, is small but contains faces under pose, illumination, expression, and eye-region occlusion variations. The final challenge dataset is the Notre Dame Twins database, which comprises 100 sets of identical twins and 1 set of triplets. The proposed framework reports top periocular performance on each dataset, as measured by rank-1 accuracy: (1) MORPH Album 1, 33.2%; (2) FRGC, 97.51%; (3) Georgia Tech, 92.4%; and (4) Notre Dame Twins, 98.03%. Furthermore, this work shows that the proposed periocular matcher (using only a small section of the face, about the eyes) compares favorably to a commercial full-face matcher.
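    The descriptor proposed above is the hierarchical three-patch LBP; the sketch below shows only the simpler uniform-LBP baseline it is compared against, as a hedged illustration of patch-based periocular matching. The patch grid, the LBP parameters, and the chi-square comparison are illustrative assumptions rather than the paper's exact configuration (NumPy and scikit-image assumed available).

```python
# A minimal sketch of the uniform-LBP baseline mentioned above: the periocular
# crop is tiled into patches, a uniform-LBP histogram is computed per patch,
# and the concatenated histograms are compared with the chi-square distance.
# Grid size and LBP parameters (P=8, R=1) are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(periocular_crop, grid=(4, 4), P=8, R=1):
    """Concatenate per-patch uniform-LBP histograms for a grayscale eye crop."""
    lbp = local_binary_pattern(periocular_crop, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns plus one "non-uniform" bin
    h, w = lbp.shape
    ph, pw = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = lbp[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

def chi_square(d1, d2, eps=1e-10):
    """Chi-square distance between two descriptors (lower means more similar)."""
    return 0.5 * np.sum((d1 - d2) ** 2 / (d1 + d2 + eps))
```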

    One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations

    One weakness of machine-learning algorithms is the need to train the models for a new task. This presents a specific challenge for biometric recognition due to the dynamic nature of databases and, in some instances, the reliance on subject collaboration for data collection. In this paper, we investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition, a biometric recognition task. We analyze the outputs of CNN layers as identity-representing feature vectors. We examine the impact of Domain Adaptation on the network layers' output for unseen data and evaluate the method's robustness concerning data normalization and the generalization of the best-performing layer. Using out-of-the-box CNNs trained for the ImageNet Recognition Challenge and standard computer-vision algorithms, we improved on state-of-the-art results that relied on networks trained with biometric datasets of millions of images and fine-tuned for the target periocular dataset. For example, for the Cross-Eyed dataset, we could reduce the EER by 67% and 79% (from 1.70% and 3.41% to 0.56% and 0.71%) in the Closed-World and Open-World protocols, respectively, for the periocular case. We also demonstrate that traditional algorithms like SIFT can outperform CNNs in situations with limited data or scenarios where the network has not been trained on the test classes, as in the Open-World mode. SIFT alone was able to reduce the EER by 64% and 71.6% (from 1.7% and 3.41% to 0.6% and 0.97%) for Cross-Eyed in the Closed-World and Open-World protocols, respectively, and achieved a reduction of 4.6% (from 3.94% to 3.76%) on the PolyU database for the Open-World, single-biometric case. Comment: Submitted preprint to IEEE Access.
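    The central idea above is to use the intermediate outputs of out-of-the-box, ImageNet-trained CNNs as identity descriptors for one-shot matching. The sketch below illustrates that idea with a torchvision ResNet-50 whose classifier head is dropped and whose pooled features are compared by cosine similarity; the backbone, the chosen layer, and the file names are illustrative assumptions, not the configuration evaluated in the paper.

```python
# A minimal sketch of one-shot periocular matching with an off-the-shelf ImageNet CNN:
# an intermediate representation serves as the identity descriptor, and gallery/probe
# vectors are compared by cosine similarity. ResNet-50 and its pooled last block are
# illustrative choices, not the specific network/layer selected in the paper.
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # keep the 2048-d pooled feature, drop the classifier
backbone.eval()
preprocess = weights.transforms()        # the normalization the network was trained with

@torch.no_grad()
def embed(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(img), dim=1)     # L2-normalized descriptor

# One-shot matching: a single enrolled image per identity, nearest gallery entry wins.
gallery = {"id_001": embed("id_001_enroll.png")}  # hypothetical file names
probe = embed("probe.png")
scores = {identity: float(probe @ template.T) for identity, template in gallery.items()}
best_match = max(scores, key=scores.get)
```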

    Proof-of-Concept

    Biometrics is a rapidly expanding field and is regarded as a possible solution for cases where strong authentication guarantees are required. Although the area is quite advanced in theoretical terms, applying it in practice still raises problems: the available systems depend on a high level of user cooperation to achieve acceptable performance, which was the backdrop for the development of this project. Building on a study of the state of the art, we propose a new, less cooperative biometric system that reaches acceptable performance levels.

    The constant need for stronger security, namely at the authentication level, motivates the study of biometrics as a possible solution. Current mechanisms in this area are based on something one knows (a password) or something one possesses (a PIN code), but this kind of information is easily compromised or circumvented. Biometrics is therefore seen as a more robust solution, since it bases authentication on physical or behavioural measurements that define something a person is or does ("who you are" or "what you do"). As biometrics is a very promising approach to authenticating individuals, new biometric systems appear ever more frequently. These systems rely on physical or behavioural measurements to enable authentication (recognition) with a considerable degree of certainty; recognition based on human body movement (gait), facial features, or the structural patterns of the iris are some examples of information sources on which current systems can rely. However, despite performing well as autonomous recognition agents, they still depend heavily on the cooperation they demand. With this in mind, the field is moving towards methods that require as little cooperation as possible, extending its goals beyond mere authentication in controlled environments to surveillance and control in non-cooperative settings (e.g., riots, robberies, airports). It is in this perspective that this project arises: through a study of the state of the art, it aims to show that it is possible to build a system capable of operating in less cooperative environments, able to detect and recognize a person who comes within its range.

    The proposed system, PAIRS (Periocular and Iris Recognition System), as its name indicates, performs recognition using information extracted from the iris and the periocular region (the region surrounding the eyes). The system is built on four stages: data capture, preprocessing, feature extraction, and recognition. In the data capture stage, a high-resolution image acquisition device capable of capturing in the NIR (near-infrared) spectrum was assembled; capturing in this spectrum mainly favours iris-based recognition, since imaging in the visible spectrum would be more sensitive to variations in ambient light. The preprocessing stage incorporates all the system modules responsible for user detection, image-quality assessment, and iris segmentation. The detection module triggers the whole process, since it verifies the presence of a person in the scene. Once that presence is confirmed, the regions of interest corresponding to the iris and the periocular area are located, and the quality of their acquisition is checked. After these steps, the iris of the left eye is segmented and normalized. Then, using several descriptors, biometric information is extracted from the regions of interest and assembled into a biometric feature vector. Finally, the collected biometric data are compared with those already stored in the database, producing a list of biometric similarity levels and thus the system's final response. Once the system was implemented, a set of images was acquired with it, with the participation of a group of volunteers. This image set made it possible to run performance tests, verify and tune parameters, and optimize the system's feature extraction and recognition components. Analysis of the results showed that the proposed system is able to perform its functions under less cooperative conditions.
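    The abstract above describes a four-stage pipeline (capture, preprocessing, feature extraction, recognition). The following is a minimal sketch of how those stages might be chained; every helper (detect, assess_quality, segment_iris, extract_features) is a hypothetical placeholder standing in for the thesis's actual modules, and the similarity measure and quality threshold are illustrative assumptions.

```python
# A schematic sketch of the four-stage PAIRS pipeline described above
# (capture -> preprocessing -> feature extraction -> recognition). All helper
# functions are placeholders standing in for the thesis's actual modules.
import numpy as np

def pairs_identify(frame, database, detect, assess_quality, segment_iris,
                   extract_features, min_quality=0.5):
    """Return enrolled identities ranked by similarity, or None if no usable capture."""
    person = detect(frame)                       # is someone in the scene?
    if person is None:
        return None
    iris_roi, periocular_roi = person            # regions of interest from detection
    if assess_quality(iris_roi) < min_quality or assess_quality(periocular_roi) < min_quality:
        return None                              # reject poor-quality acquisitions
    iris_norm = segment_iris(iris_roi)           # segment and normalize the left-eye iris
    probe = np.concatenate([extract_features(iris_norm),
                            extract_features(periocular_roi)])
    # Compare against every enrolled template and rank identities by similarity.
    scores = {identity: -np.linalg.norm(probe - template)
              for identity, template in database.items()}
    return sorted(scores, key=scores.get, reverse=True)
```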

    Recognizing Surgically Altered Face Images and 3D Facial Expression Recognition

    Altering facial appearance through surgical procedures is common nowadays, but it raises challenges for face recognition algorithms. Plastic surgery introduces non-linear variations that are difficult to model with existing face recognition systems. This work presents a multi-objective evolutionary granular algorithm that operates on several granules extracted from a face image at multiple levels of granularity. This granular information is unified in an evolutionary manner using a multi-objective genetic approach. Facial expressions are then identified from the face images, for which 3D facial shapes are considered. A novel automatic feature selection method is proposed, based on maximizing the average relative entropy of marginalized class-conditional feature distributions, and is applied to a complete pool of candidate features composed of normalized Euclidean distances between 83 facial feature points in 3D space. A regularized multi-class AdaBoost classification algorithm is used to obtain the highest average recognition rate.
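    The feature-selection step described above scores candidate features (pairwise distances between 3D landmarks) by the relative entropy of their class-conditional distributions. The sketch below is a hedged approximation of that idea using simple histogram estimates and a symmetric KL divergence; the bin count, the exact divergence form, and top_k are illustrative assumptions rather than the paper's formulation.

```python
# A minimal sketch of entropy-based feature selection over 3D landmark distances:
# each candidate feature is a pairwise distance, scored by the average KL divergence
# between its class-conditional histograms; the highest-scoring features are kept.
import numpy as np
from itertools import combinations

def pairwise_distances(landmarks):
    """landmarks: (n_points, 3) array of 3D facial feature points -> 1D feature vector."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

def select_features(X, y, top_k=50, bins=16, eps=1e-9):
    """Rank features (columns of X) by average pairwise divergence between classes in y."""
    classes = np.unique(y)
    scores = np.zeros(X.shape[1])
    for f in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, f], bins=bins)
        hists = [np.histogram(X[y == c, f], bins=edges)[0] + eps for c in classes]
        hists = [h / h.sum() for h in hists]
        divs = [np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
                for p, q in combinations(hists, 2)]
        scores[f] = np.mean(divs)
    return np.argsort(scores)[::-1][:top_k]      # indices of the most discriminative distances
```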

    Periocular Region-Based Biometric Identification

    As biometrics become more prevalent in society, the research area is expected to address an ever-widening field of problems and conditions. Traditional biometric modalities and approaches are reaching a state of maturity, and their limits are clearly defined. Since the needs of a biometric system administrator might extend beyond those limits, new modalities and techniques must address such concerns. The goal of the work presented here is to explore the periocular region, the region surrounding the eye, and to evaluate its usability and limitations in addressing these concerns. First, a study of the periocular region was performed to examine its feasibility in addressing problems that affect traditional face- and iris-based biometric systems. Second, the physical structure of the periocular region was analyzed to determine the kinds of features found there and how they influence the performance of a biometric recognition system. Third, the use of local appearance-based approaches in periocular recognition was explored. Lastly, the knowledge gained from the previous experiments was used to develop a novel feature representation technique that is specific to the periocular region. This work is significant because it provides a novel analysis of the features found in the periocular region and produces a feature extraction method that yields higher recognition performance than traditional techniques.
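    As one hedged illustration of the local appearance-based family mentioned above, the sketch below describes a periocular crop with block-wise HOG features and compares descriptors by cosine similarity; HOG is used only as a generic stand-in for this family and is not the novel representation this work proposes.

```python
# A minimal sketch of a local appearance-based periocular descriptor: dense HOG
# features computed over a grid of cells, matched with cosine similarity.
# Cell/block sizes are illustrative assumptions.
import numpy as np
from skimage.feature import hog

def periocular_hog(crop):
    """Grayscale periocular crop -> concatenated per-block HOG descriptor."""
    return hog(crop, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys", feature_vector=True)

def cosine_score(a, b, eps=1e-10):
    """Cosine similarity between two descriptors (higher means more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```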

    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, handprint, hand-vein, speech, and gait recognition, have become commonplace as a means of identity management for various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to the conventional data-agnostic, handcrafted preprocessing and feature extraction used in biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into four categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.
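    As a hedged illustration of the end-to-end paradigm mentioned above, the sketch below defines a small PyTorch network that maps raw biometric images directly to identity scores, with preprocessing, feature extraction, and recognition learned jointly; the architecture and sizes are illustrative assumptions, not a model from the Special Issue.

```python
# A minimal sketch of an end-to-end biometric classifier: a single network maps
# raw images to identity logits, replacing separate handcrafted preprocessing and
# feature-extraction stages. Input size and channel widths are illustrative.
import torch
import torch.nn as nn

class EndToEndBiometricNet(nn.Module):
    def __init__(self, num_identities):
        super().__init__()
        self.features = nn.Sequential(            # learned "preprocessing + feature extraction"
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_identities)   # learned "recognition" stage

    def forward(self, x):                          # x: (batch, 3, H, W) raw biometric images
        return self.classifier(self.features(x).flatten(1))

# Trained end to end with a standard cross-entropy loss, e.g.:
# logits = EndToEndBiometricNet(num_identities=100)(images)
# loss = nn.functional.cross_entropy(logits, labels)
```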