
    Multimodal person recognition for human-vehicle interaction

    Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology prevents such systems from operating satisfactorily under adverse conditions. This article proposes a framework for person recognition that successfully combines different biometric modalities, demonstrated in two case studies.

    One-shot lip-based biometric authentication: extending behavioral features with authentication phrase information

    Lip-based biometric authentication (LBBA) is an authentication method based on a person's lip movements during speech, captured as video by a camera sensor. LBBA can utilize both physical and behavioral characteristics of lip movements without requiring any sensory equipment beyond an RGB camera. State-of-the-art (SOTA) approaches use one-shot learning to train deep siamese neural networks that produce an embedding vector from these lip-movement features. The embeddings are then used to compute the similarity between an enrolled user and a user being authenticated. A flaw of these approaches is that they model behavioral features as style-of-speech without any relation to what is being said, which makes the system vulnerable to video replay attacks of the client speaking any phrase. To solve this problem we propose a one-shot approach that models behavioral features to discriminate based on what is being said in addition to style-of-speech. We achieve this by customizing the GRID dataset to obtain the required triplets and training a siamese neural network based on 3D convolutions and recurrent neural network layers. A custom triplet loss for batch-wise hard-negative mining is proposed. The obtained results using an open-set protocol are 3.2% FAR and 3.8% FRR on the test set of the customized GRID dataset. Additional analysis of the results was done to quantify the influence and discriminatory power of behavioral and physical features for LBBA. Comment: 28 pages, 10 figures, 7 tables
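    The batch-wise hard-negative mining behind a triplet loss like the one this abstract proposes can be sketched in plain Python. The squared-Euclidean distance, margin value, and mining strategy below are illustrative assumptions, not the paper's exact formulation:

    ```python
    def sq_dist(a, b):
        # Squared Euclidean distance between two embedding vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def hard_negative_triplet_loss(embeddings, labels, margin=0.2):
        """For each anchor, take the hardest (farthest) positive and the
        hardest (closest) negative in the batch, then apply a hinge with
        the given margin. Returns the mean loss over valid anchors."""
        losses = []
        n = len(embeddings)
        for a in range(n):
            pos = [sq_dist(embeddings[a], embeddings[j])
                   for j in range(n) if j != a and labels[j] == labels[a]]
            neg = [sq_dist(embeddings[a], embeddings[j])
                   for j in range(n) if labels[j] != labels[a]]
            if not pos or not neg:
                continue  # anchor has no valid triplet in this batch
            losses.append(max(max(pos) - min(neg) + margin, 0.0))
        return sum(losses) / len(losses) if losses else 0.0
    ```

    When the two identities are well separated in embedding space the loss is zero; when embeddings collapse onto each other, each anchor contributes exactly the margin.
    
    
    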

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    Classification of Humans into Ayurvedic Prakruti Types using Computer Vision

    Ayurveda, a 5,000-year-old Indian medical science, believes that the universe, and hence humans, are made up of five elements, namely ether, fire, water, earth, and air. The three Doshas (Tridosha) Vata, Pitta, and Kapha originated from combinations of these elements. Every person has a unique combination of Tridosha elements contributing to that person's 'Prakruti'. Prakruti governs the physiological and psychological tendencies in all living beings as well as the way they interact with the environment. This balance influences physiological features like the texture and colour of skin, hair, and eyes, the length of the fingers, the shape of the palm, body frame, strength of digestion and many more, as well as psychological features like a person's nature (introverted, extroverted, calm, excitable, intense, laidback) and their reaction to stress and diseases. All these features are coded in the constituents at the time of a person's creation and do not change throughout their lifetime. Ayurvedic doctors analyze the Prakruti of a person either by assessing the physical features manually and/or by examining the nature of their heartbeat (pulse). Based on this analysis, they diagnose, prevent and cure disease in patients by prescribing precision medicine. This project focuses on identifying the Prakruti of a person by analysing their facial features like hair, eyes, nose, lips and skin colour using facial recognition techniques from computer vision. This is the first-of-its-kind research in this problem area that attempts to bring image processing into the domain of Ayurveda.

    Machine Analysis of Facial Expressions

    No abstract

    Fusion of Multiple Biometrics for Photo-Attack Detection in Face Recognition Systems

    A spoofing attack is a situation in which one person successfully masquerades as another by falsifying data to gain illegitimate access. Spoofing attacks come in several forms, such as photograph, video, or mask attacks. A biometric plays the role of a password that cannot be replaced if stolen, so counter-measures to biometric spoofing attacks are a necessity. Face biometric systems are vulnerable to spoofing attacks. Regardless of the biometric mode, the typical approach of anti-spoofing systems is to classify the biometric evidence based on features that discriminate between real accesses and spoofing attacks. A number of biometric characteristics are in use in various applications. This system is based on face recognition and lip-movement recognition, and makes use of client-specific information to build a client-specific anti-spoofing solution, depending on a generative model. The client identity is used to detect spoofing attacks, which increases the efficiency of authentication. At enrollment, the image is captured and registered together with its client identity. When a user is to be authenticated, the image is captured and the identity is entered manually, so the system checks the image against that client identity only. Lip-movement recognition is performed at authentication time to identify whether the client is a spoof. If the client is authenticated, the system then checks the captured image dimensions using a Gaussian Mixture Model (GMM). The system also encrypts and decrypts a file using parameter values extracted from a registered face.
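    The fusion idea described here can be sketched as score-level combination: a face-match score and a lip-movement liveness score are merged before the accept/reject decision, so a photo replay with a high face score but no live lip motion is rejected. The function names, weights, and threshold below are illustrative assumptions, not the paper's parameters:

    ```python
    def fuse_scores(face_score, lip_score, w_face=0.6, w_lip=0.4):
        # Weighted sum of two match scores, each normalized to [0, 1].
        return w_face * face_score + w_lip * lip_score

    def authenticate(face_score, lip_score, threshold=0.7):
        # A printed-photo attack may produce a high face-match score but a
        # near-zero lip-movement liveness score, pulling the fused score
        # below the acceptance threshold.
        return fuse_scores(face_score, lip_score) >= threshold
    ```

    With these toy weights, a genuine access (high face and lip scores) is accepted while a photo attack (high face score, near-zero liveness) is rejected.
    
    
    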

    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    A smile is one of the key elements in identifying the emotions and present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow and histograms of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model on the overall face-smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing the facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles into either `spontaneous' or `posed' categories by using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
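    The HOG descriptor at the core of the winning feature in this abstract reduces to histograms of gradient orientations weighted by gradient magnitude. The minimal, single-cell version below is a toy stand-in for a full blocked, cell-wise HOG (the bin count and input image are illustrative assumptions):

    ```python
    import math

    def orientation_histogram(image, n_bins=8):
        """Toy single-cell HOG: accumulate gradient magnitudes into
        unsigned-orientation bins over the interior pixels of a 2D
        grayscale image (list of rows), then L1-normalize."""
        h, w = len(image), len(image[0])
        hist = [0.0] * n_bins
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                gx = image[y][x + 1] - image[y][x - 1]  # central differences
                gy = image[y + 1][x] - image[y - 1][x]
                mag = math.hypot(gx, gy)
                ang = math.atan2(gy, gx) % math.pi  # unsigned orientation
                b = min(int(ang / math.pi * n_bins), n_bins - 1)
                hist[b] += mag
        total = sum(hist) or 1.0
        return [v / total for v in hist]  # L1-normalized descriptor
    ```

    A real pipeline would compute this per cell, normalize over overlapping blocks, concatenate the result, and feed it to an SVM, as libraries like scikit-image's `hog` do.
    
    
    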

    Survey Analysis on Secured User Authentication through Biometric Recognition

    Secured user authentication is the process of verifying a user's authenticity. Biometric authentication is a human identification approach that matches the biometric characteristics of a user to verify authenticity. Biometric identifiers are unique to each person, making it harder to hack accounts that use them. Common types of biometrics include fingerprint scanning, which verifies authentication based on a user's fingerprints; face recognition and voice recognition are employed in real-time applications to improve the security level in different scenarios. Face recognition is a method of identifying or verifying an individual's identity using their facial features. Voice recognition is the ability of a machine to receive and interpret spoken dictation. Many researchers have carried out research on different face and voice recognition methods, but existing biometric recognition methods have not improved recognition accuracy with minimum time consumption. This survey reviews different recognition methods for biometric user authentication. The methods are evaluated on human face datasets with respect to performance metrics like recognition accuracy, error rate, and recognition time.
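    The error-rate metrics that recur across these entries (FAR and FRR, reported for example as 3.2%/3.8% in the LBBA paper above) can be computed directly from match scores at a decision threshold. The score lists and threshold below are illustrative:

    ```python
    def far_frr(genuine_scores, impostor_scores, threshold):
        """False Acceptance Rate: fraction of impostor attempts whose
        score meets the threshold (wrongly accepted).
        False Rejection Rate: fraction of genuine attempts whose score
        falls below it (wrongly rejected)."""
        far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
        frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
        return far, frr
    ```

    Sweeping the threshold trades one rate against the other; the point where FAR equals FRR is the equal error rate (EER) often quoted in such surveys.
    
    
    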