20 research outputs found

    Proof-of-Concept

    Get PDF
    Biometry is an area in great expansion and is considered a possible solution for cases where strong authentication guarantees are required. Although the area is quite advanced in theoretical terms, putting it into practice still raises problems: the available systems depend on a high level of cooperation to reach acceptable performance, which was the backdrop to the development of this project. By studying the state of the art, we propose a new, less cooperative biometric system that reaches acceptable performance levels.
    The constant need for higher security standards, particularly for authentication, leads to the study of biometrics as a possible solution. The mechanisms currently used in this area are based on something one knows (a password) or something one possesses (a PIN code). However, this kind of information is easily compromised or circumvented. Biometrics is therefore seen as a more robust solution, since it guarantees that authentication is based on physical or behavioural measures that define something the person is or does ("who you are" or "what you do"). Because biometrics is such a promising approach to authenticating individuals, new biometric systems appear ever more frequently. These systems rely on physical or behavioural measures to enable authentication (recognition) with a considerable degree of certainty. Recognition based on the movement of the human body (gait), facial features, or the structural patterns of the iris are some examples of the information sources on which current systems can build. However, despite performing well as autonomous recognition agents, they still depend heavily on the cooperation they demand from the subject. With this in mind, and given everything that already exists in biometric recognition, the field is taking steps to make its methods as little dependent on cooperation as possible, thereby broadening its objectives beyond mere authentication in controlled environments to surveillance and control in non-cooperative settings (e.g. riots, robberies, airports). It is in this context that the present project arises: through a study of the state of the art, it aims to show that it is possible to build a system that can operate in less cooperative environments, detecting and recognising any person who comes within its range.
    The proposed system, PAIRS (Periocular and Iris Recognition System), performs recognition using information extracted from the iris and the periocular region (the area surrounding the eyes). The system is built on four stages: data capture, pre-processing, feature extraction, and recognition. For data capture, a high-resolution image acquisition device capable of capturing in the NIR (Near-Infra-Red) spectrum was assembled; capturing in this spectrum mainly favours iris-based recognition, since capture in the visible spectrum would be more sensitive to variations in ambient light. The pre-processing stage then incorporates all the system modules responsible for user detection, image quality assessment, and iris segmentation.
The detection module triggers the whole process, since it verifies whether a person is present in the scene. Once a person is detected, the regions of interest corresponding to the iris and the periocular area are located, and the quality with which they were acquired is assessed. With these steps complete, the iris of the left eye is segmented and normalised. Next, using several descriptors, biometric information is extracted from the regions of interest and a biometric feature vector is built. Finally, the collected biometric data are compared with those already stored in the database, producing a ranked list of biometric similarity scores and, from it, the system's final response. Once the implementation was complete, a set of images was captured with the system with the participation of a group of volunteers. This image set made it possible to run performance tests, verify and tune several parameters, and optimise the feature extraction and recognition components. Analysis of the results showed that the proposed system can perform its functions under less cooperative conditions.
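    As a rough illustration of the four-stage pipeline described above, the Python sketch below mirrors the flow from detection through ROI localisation, quality checking, feature extraction, and ranked matching. All function names and placeholder implementations (fixed crops, histogram descriptors, cosine similarity) are assumptions made for illustration; they are not the PAIRS implementation.

```python
# Illustrative sketch only: placeholder logic stands in for the real detector,
# ROI localisation, quality assessment, and descriptors described in the abstract.
import numpy as np

def detect_person(frame: np.ndarray) -> bool:
    # Placeholder detector: a real system would run a person/face detector here.
    return frame.size > 0

def locate_rois(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # Placeholder ROI localisation: fixed crops stand in for iris and periocular regions.
    h, w = frame.shape[:2]
    return frame[h // 3: h // 2, w // 4: w // 2], frame[h // 4: h // 2, : w // 2]

def quality_ok(roi: np.ndarray) -> bool:
    # Placeholder quality check (focus/contrast would be assessed in practice).
    return roi.std() > 1.0

def extract_features(iris: np.ndarray, periocular: np.ndarray) -> np.ndarray:
    # Placeholder descriptors: grey-level histograms stand in for iris/periocular features.
    f1, _ = np.histogram(iris, bins=32, range=(0, 255), density=True)
    f2, _ = np.histogram(periocular, bins=32, range=(0, 255), density=True)
    return np.concatenate([f1, f2])

def rank_matches(probe: np.ndarray, gallery: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    # Compare the probe feature vector with every enrolled template; rank by cosine similarity.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sorted(((sid, cos(probe, t)) for sid, t in gallery.items()),
                  key=lambda kv: kv[1], reverse=True)

# Minimal end-to-end run on a synthetic NIR-like frame.
frame = np.random.randint(0, 256, (480, 640)).astype(np.float32)
if detect_person(frame):
    iris_roi, peri_roi = locate_rois(frame)
    if quality_ok(iris_roi) and quality_ok(peri_roi):
        probe = extract_features(iris_roi, peri_roi)
        gallery = {"subject_01": np.random.rand(64), "subject_02": np.random.rand(64)}
        print(rank_matches(probe, gallery))
```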

    Learning Efficient Deep Feature Extraction For Mobile Ocular Biometrics

    Get PDF
    Title from PDF of title page, viewed March 4, 2021. Dissertation advisors: Reza Derakhshani and Cory Beard. Vita. Includes bibliographical references (pages 137-149). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2020. xxii, 150 pages.
    Ocular biometrics uses physical traits from eye regions such as the iris, conjunctival vasculature, and the periocular area for recognizing a person. It has gained popularity in research and industry alike for its identification capability, security, and ease of acquisition, even with a mobile phone's selfie camera. With rapid advances in hardware and deep learning, better performance has been obtained using Convolutional Neural Networks (CNNs) for feature extraction and person recognition. Most early works used large CNNs for ocular recognition under subject-dependent evaluation, where subjects overlap between the training and testing sets. This is difficult to scale to large populations, as the CNN must be re-trained every time a new subject is enrolled in the database. Moreover, many of the proposed CNN models are large, which makes them memory-intensive and computationally costly to deploy on a mobile device. In this work, we propose CNN-based, robust, subject-independent feature extraction for ocular biometric recognition that is memory- and computation-efficient. We evaluated the proposed method on various ocular biometric datasets under subject-independent, cross-dataset, and cross-illumination protocols.
    Contents: Introduction -- Previous Work -- Calculating CNN Models Computational Efficiency -- Case Study of Deep Learning Models in Ocular Biometrics -- OcularNet Model -- OcularNet-v2: Self-learned ROI detection with deep features -- LOD-V: Large Ocular Biometrics Dataset in Visible Spectrum -- Conclusion and Future Work -- Appendix A. Supplementary Materials for Chapter 4 -- Appendix B. Supplementary Materials for Chapter 5 -- Appendix C. Supplementary Materials for Chapter 6 -- Appendix D. Supplementary Materials for Chapter 7.
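    The subject-independent protocol described above can be pictured with a small sketch: a fixed CNN acts purely as an embedding extractor, and new subjects are enrolled by storing embeddings rather than re-training the network. The toy PyTorch model below is an assumption for illustration only; it is not the OcularNet or OcularNet-v2 architecture, and the layer sizes are arbitrary.

```python
# Minimal sketch of subject-independent ocular matching with a small CNN encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyOcularEncoder(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # grayscale ocular crop in
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                               # keeps the model small
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)   # unit-norm embedding for cosine matching

encoder = TinyOcularEncoder().eval()
print(sum(p.numel() for p in encoder.parameters()), "parameters")  # stays small for mobile-style deployment
with torch.no_grad():
    enrolled = encoder(torch.randn(1, 1, 128, 128))   # template stored at enrolment time
    probe = encoder(torch.randn(1, 1, 128, 128))      # verification-time sample
    score = F.cosine_similarity(enrolled, probe).item()
print(f"match score: {score:.3f}")  # thresholded to accept/reject; no re-training per new subject
```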

    What else does your biometric data reveal? A survey on soft biometrics

    Get PDF
    Recent research has explored the possibility of extracting ancillary information from primary biometric traits, viz., face, fingerprints, hand geometry and iris. This ancillary information includes personal attributes such as gender, age, ethnicity, hair color, height, weight, etc. Such attributes are known as soft biometrics and have applications in surveillance and indexing biometric databases. These attributes can be used in a fusion framework to improve the matching accuracy of a primary biometric system (e.g., fusing face with gender information), or can be used to generate qualitative descriptions of an individual (e.g., "young Asian female with dark eyes and brown hair"). The latter is particularly useful in bridging the semantic gap between human and machine descriptions of biometric data. In this paper, we provide an overview of soft biometrics and discuss some of the techniques that have been proposed to extract them from image and video data. We also introduce a taxonomy for organizing and classifying soft biometric attributes, and enumerate the strengths and limitations of these attributes in the context of an operational biometric system. Finally, we discuss open research problems in this field. This survey is intended for researchers and practitioners in the field of biometrics.
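    The fusion idea mentioned above (e.g., fusing a face-match score with gender information) can be sketched as a simple score-level rule. The weights, attribute names, and weighted-sum form below are illustrative assumptions, not a method proposed in the survey.

```python
# Hypothetical score-level fusion: soft-biometric agreement nudges the primary score.
def fuse_scores(primary_score: float,
                soft_matches: dict[str, bool],
                soft_weights: dict[str, float],
                primary_weight: float = 0.8) -> float:
    """Weighted-sum fusion of a primary match score with soft-biometric agreement."""
    soft_total = sum(soft_weights.values()) or 1.0
    soft_score = sum(w for name, w in soft_weights.items() if soft_matches.get(name)) / soft_total
    return primary_weight * primary_score + (1.0 - primary_weight) * soft_score

# Example: a borderline face-match score is reinforced when gender and eye colour agree
# with the enrolled record, but hair colour does not.
fused = fuse_scores(primary_score=0.62,
                    soft_matches={"gender": True, "eye_colour": True, "hair_colour": False},
                    soft_weights={"gender": 0.5, "eye_colour": 0.3, "hair_colour": 0.2})
print(round(fused, 3))  # 0.8*0.62 + 0.2*0.8 = 0.656
```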

    Biometric Systems

    Get PDF
    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Skin Texture as a Source of Biometric Information

    Get PDF
    Traditional face recognition systems have achieved remarkable performance when the whole face image is available. However, recognising people from a partial view of their facial image is a challenging task, and recognition performance may also be degraded by low-resolution images. These limitations can restrict the practicality of such systems in real-world scenarios such as surveillance and forensic applications. There is therefore a need to identify people from whatever information is available, and one possible approach is to use the texture of the available facial skin regions for biometric identification. This thesis presents the design, implementation, and experimental evaluation of an automated skin-based biometric framework. The proposed system exploits skin information from facial regions for person recognition and is applicable where only a partial view of a face is captured by imaging devices. The system automatically detects the regions of interest using a set of facial landmarks. Four regions were investigated in this study: forehead, right cheek, left cheek, and chin. A skin purity assessment scheme determines whether a region of interest contains enough skin pixels for biometric analysis. Texture features were extracted from non-overlapping sub-regions and categorised using a number of classification schemes. To further improve the reliability of the system, the study also investigated techniques for cases where face images are acquired at a different resolution from that available at enrolment, or where the sub-regions themselves are partially occluded, and presents an adaptive scheme for exploiting the information still available in corrupted regions of interest. Extensive experiments were conducted on publicly available databases to evaluate both the prototype system and the adaptive framework under different operational conditions, such as the level of occlusion and mixtures of skin images at different resolutions. The results suggest that skin information can provide useful discriminative characteristics for individual identification, and comparisons with state-of-the-art methods show that the proposed system achieves promising performance.
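    To make the texture step concrete, the sketch below computes per-block texture histograms over non-overlapping sub-regions of a skin patch and concatenates them into a feature vector. The summary above does not name the descriptor, so uniform Local Binary Patterns are used here purely as an assumed stand-in, with illustrative block and bin sizes.

```python
# Sketch under assumptions: LBP histograms over non-overlapping blocks of a skin patch.
import numpy as np
from skimage.feature import local_binary_pattern

def skin_texture_features(patch: np.ndarray, block: int = 16, P: int = 8, R: float = 1.0) -> np.ndarray:
    codes = local_binary_pattern(patch, P, R, method="uniform")    # P+2 distinct uniform codes
    feats = []
    for y in range(0, patch.shape[0] - block + 1, block):          # non-overlapping sub-regions
        for x in range(0, patch.shape[1] - block + 1, block):
            hist, _ = np.histogram(codes[y:y + block, x:x + block],
                                   bins=P + 2, range=(0, P + 2), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Example on a synthetic 64x64 forehead/cheek/chin crop: 16 blocks x 10 bins = 160 features.
patch = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
print(skin_texture_features(patch).shape)   # (160,)
```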

    Clamp-assisted retractor advancement for lower eyelid involutional entropion

    Get PDF
    Scientific Poster 144. PURPOSE: To describe a novel approach to internal repair of lower lid entropion using the Putterman clamp. METHODS: Retrospective, consecutive case series of patients with entropion who underwent retractor advancement using the clamp. RESULTS: Seven eyes of 6 patients (average age: 80; 4 women and 2 men) were analyzed. Complete resolution was achieved in 5 of the 6 patients (83.3%). The 1 patient with recurrence had 2 previous entropion surgeries on each eye over the past 4 years; there was lid laxity, and horizontal tightening was needed. No severe adverse events occurred in the patients. CONCLUSION: Clamp-assisted lower lid retractor advancement offers a safe and effective, minimally invasive approach to involutional entropion. Further study is needed to assess its role in recurrent entropion. Postprint.

    Handbook of Digital Face Manipulation and Detection

    Get PDF
    This open access book provides the first comprehensive collection of studies dealing with the hot topic of digital face manipulation, such as DeepFakes, Face Morphing, or Reenactment. It combines the research fields of biometrics and media forensics and includes contributions from academia and industry. Appealing to a broad readership, the introductory chapters provide a comprehensive overview of the topic and address readers wishing to gain a brief overview of the state of the art. Subsequent chapters, which delve deeper into various research challenges, are oriented towards advanced readers. The book also provides a good starting point for young researchers as well as a reference guide pointing to further literature. Hence, the primary readership is academic institutions and industry currently involved in digital face manipulation and detection. The book could easily be used as a recommended text for courses in image processing, machine learning, media forensics, biometrics, and the general security area.

    An Investigation of Computer Vision Syndrome with Smart Devices

    Get PDF
    The overarching theme of the thesis was to investigate the association between smart device use and computer vision syndrome. The initial study designed and developed the Open Field Tear film Analyser (OFTA), enabling continuous, real-time assessment of the tear film and blink characteristics during smart device use. The monocular OFTA prototype was validated and showed good intra- and inter-observer repeatability relative to the Oculus Keratograph 5M and the Bausch and Lomb one-position keratometer. Subsequently, tear osmolarity following engagement with reading and gaming tasks on smart device and paper platforms was investigated. Discrete measures of osmolarity pre- and post-engagement with the tasks were obtained with the TearLab osmometer; osmolarity values differed between platforms when participants were engaged in a gaming task, but no such difference was observed with the reading task. The influence of repeated measurements on tear osmolarity was also explored. To simulate the habitual binocular viewing conditions normally associated with smart device use, a binocular OFTA was developed and used to assess tear film and blink characteristics while participants engaged with reading and gaming tasks on smart device and paper platforms. The results revealed differences in blink characteristics and non-invasive tear break-up time between the platforms and tasks assessed. The thesis also reports an investigation of the real-time accommodative response to various targets displayed on smart devices, using an open-field autorefractor with a Badal lens system adaptation. The results showed that accommodative latency, accommodative lag, mean velocity of accommodation, speed of disaccommodation, and mean velocity of disaccommodation varied across the different platforms. Through validated subjective questionnaires and smartphone apps, the relationship between duration of smartphone use and symptoms of dry eye was examined. The findings demonstrated that longer durations of smartphone and personal computer use were associated with a higher risk of dry eye, as indicated by the questionnaire outcomes. Ministry of Higher Education, Malaysia. International Islamic University Malaysia.
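    As a loose illustration of what "blink characteristics" can mean in such real-time analysis, the sketch below derives blink count and blink rate from a generic eye-openness trace. The signal model, threshold, and frame rate are assumptions for illustration only; this is not the OFTA's analysis.

```python
# Hedged illustration: blink count and rate from an assumed eye-openness signal
# (e.g. an eye-aspect-ratio trace sampled once per video frame).
import numpy as np

def blink_stats(openness: np.ndarray, fps: float = 30.0, closed_thresh: float = 0.2):
    closed = openness < closed_thresh                 # frames where the eye counts as closed
    # A blink is an open-to-closed transition (rising edge of the 'closed' mask).
    blinks = int(np.count_nonzero(np.diff(closed.astype(int)) == 1))
    duration_min = len(openness) / fps / 60.0
    return blinks, blinks / duration_min              # count, blinks per minute

# Synthetic 60 s trace with four brief dips below the threshold standing in for blinks.
t = np.arange(0, 60, 1 / 30)
signal = 0.35 + 0.02 * np.sin(2 * np.pi * 0.1 * t)
for start in (5, 20, 37, 52):                         # four simulated blinks
    signal[(t > start) & (t < start + 0.2)] = 0.05
print(blink_stats(signal))                            # -> (4, 4.0)
```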