
    Learnable PINs: Cross-Modal Embeddings for Person Identity

    We propose and investigate an identity-sensitive joint embedding of face and voice. Such an embedding enables cross-modal retrieval from voice to face and from face to voice. We make the following four contributions: first, we show that the embedding can be learnt from videos of talking faces, without requiring any identity labels, using a form of cross-modal self-supervision; second, we develop a curriculum learning schedule for hard negative mining targeted to this task, which is essential for learning to proceed successfully; third, we demonstrate and evaluate cross-modal retrieval for identities unseen and unheard during training over a number of scenarios, and establish a benchmark for this novel task; finally, we show an application of using the joint embedding to automatically retrieve and label characters in TV dramas.
    Comment: To appear in ECCV 201
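The core idea described above — embedding faces and voices into a shared space and mining the hardest negatives in each batch — can be sketched as follows. This is a toy illustration in NumPy with synthetic, correlated embeddings, not the paper's actual networks; the function names and the `top_k` curriculum knob are hypothetical stand-ins for the curriculum schedule the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so distances are comparable
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def hardest_negative_distances(face_emb, voice_emb, top_k):
    """For each face, return the distance to its positive (matching voice)
    and the mean distance to its top_k hardest negatives (closest
    non-matching voices). Shrinking top_k over training is a toy stand-in
    for a curriculum that gradually focuses on harder negatives."""
    d = np.linalg.norm(face_emb[:, None, :] - voice_emb[None, :, :], axis=-1)
    n = d.shape[0]
    pos = np.diag(d)                      # matched face/voice pairs
    neg = d + np.eye(n) * 1e9             # mask out positives
    hardest = np.sort(neg, axis=1)[:, :top_k]
    return pos, hardest.mean(axis=1)

# Toy batch: 8 identities, 64-dim embeddings; the voice embedding is a
# noisy copy of the face embedding, simulating a learnt joint space
faces = l2_normalize(rng.normal(size=(8, 64)))
voices = l2_normalize(faces + 0.1 * rng.normal(size=(8, 64)))

pos, neg = hardest_negative_distances(faces, voices, top_k=3)
margin_loss = np.maximum(0.0, pos - neg + 0.5).mean()  # contrastive margin loss
print(pos.mean(), neg.mean())  # matched pairs sit closer than hard negatives
```

In a real training loop, the loss gradient would update both embedding networks, and the curriculum would tighten `top_k` (or the negative-selection policy) as training progresses.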

    A New Multimodal Biometric for Personal Identification


    Multimodal Behavioral Biometric Authentication in Smartphones for Covid-19 Pandemic

    The usage of mobile phones has increased multi-fold in recent decades, mostly because of their utility in most aspects of daily life, such as communications, entertainment, and financial transactions. In use cases where users’ information is at risk from imposter attacks, biometrics-based authentication systems such as fingerprint or facial recognition are considered more trustworthy than PIN-, password-, or pattern-based authentication systems in smartphones: biometrics need to be presented at the time of power-on, cannot be guessed or attacked through brute force, and eliminate the possibility of shoulder surfing. However, fingerprint or facial recognition-based systems in smartphones may not be applicable in a pandemic situation like Covid-19, where hand gloves or face masks are mandatory to protect against unwanted exposure of body parts. This paper investigates situations in which fingerprints cannot be utilized because of hand gloves, and presents an alternative biometric system using multimodal Touchscreen swipe and Keystroke dynamics patterns. We propose a HandGlove mode of authentication in which the system is automatically triggered to authenticate a user based on Touchscreen swipe and Keystroke dynamics patterns. Our experimental results suggest that the proposed multimodal biometric system can operate with high accuracy. We experiment with different classifiers, such as the Isolation Forest classifier, SVM, the k-NN classifier, and a fuzzy logic classifier with SVM, and obtain a best authentication accuracy of 99.55% with 197 users on the Samsung Galaxy S20. We further study the problem of untrained external factors that can impact the user experience of the authentication system, and propose a fuzzy-logic-based model to extend the system so that it remains effective under novel external effects.
    In this experiment, we considered the untrained external factor of ‘sanitized hands’ with which the user tries to authenticate, and achieved 93.5% accuracy in this scenario. The proposed multimodal system could be one of the most sought-after approaches for biometrics-based authentication in smartphones in a COVID-19 pandemic situation.
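The authentication pipeline the abstract describes — extracting behavioural features from swipe and keystroke patterns and feeding them to a classifier — can be sketched as follows. This is a minimal illustration on synthetic data: the four feature names and the plain k-NN vote are hypothetical stand-ins for the paper's real feature set and its SVM / k-NN / fuzzy-logic classifiers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical behavioural features per sample (all synthetic, illustration
# only): swipe speed, swipe pressure, keystroke dwell time, flight time
def make_user_samples(centre, n=30):
    return centre + 0.05 * rng.normal(size=(n, 4))

genuine_centre = np.array([0.60, 0.40, 0.12, 0.20])   # device owner's profile
imposter_centre = np.array([0.40, 0.70, 0.25, 0.10])  # a different user

X = np.vstack([make_user_samples(genuine_centre),
               make_user_samples(imposter_centre)])
y = np.array([1] * 30 + [0] * 30)   # 1 = genuine owner, 0 = imposter

def knn_authenticate(train_X, train_y, query, k=5):
    # Plain k-NN majority vote, a toy stand-in for the SVM / k-NN /
    # fuzzy-logic classifiers compared in the paper
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return int(nearest.sum() * 2 > k)

probe = genuine_centre + 0.05 * rng.normal(size=4)  # a fresh genuine swipe
accepted = knn_authenticate(X, y, probe)
print("authenticated" if accepted else "rejected")
```

A production system would additionally calibrate a decision threshold per user and, as the abstract suggests, wrap the classifier in a fuzzy-logic layer to stay robust under untrained conditions such as sanitized hands.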

    Conceivable security risks and authentication techniques for smart devices

    With the rapidly escalating use of smart devices and fraudulent transactions involving users’ data on those devices, efficient and reliable techniques for authenticating smart devices have become essential. This paper reviews the security risks for mobile devices and surveys several authentication techniques available for smart devices. The results from field studies enable a comparative evaluation of user-preferred authentication mechanisms and of users’ opinions about reliability, biometric authentication, and visual authentication techniques.
