
    Multimodal person recognition for human-vehicle interaction

    Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology, however, prevents such systems from operating satisfactorily under adverse conditions. The framework proposed here achieves person recognition by combining different biometric modalities, an approach borne out in two case studies.
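
    As a rough illustration of what combining modalities can look like in practice, the following sketch fuses two normalized match scores with a weighted sum; the modality names, weights, and threshold are illustrative assumptions, not details taken from this paper.

    from typing import Dict

    def fuse_scores(face_score: float, voice_score: float,
                    w_face: float = 0.6, w_voice: float = 0.4) -> float:
        """Weighted sum of per-modality match scores, assumed normalized to [0, 1]."""
        return w_face * face_score + w_voice * voice_score

    def accept(probe_scores: Dict[str, float], threshold: float = 0.5) -> bool:
        """Accept the identity claim if the fused score clears the (illustrative) threshold."""
        fused = fuse_scores(probe_scores["face"], probe_scores["voice"])
        return fused >= threshold

    print(accept({"face": 0.72, "voice": 0.35}))  # True: 0.6*0.72 + 0.4*0.35 = 0.572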

    Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues

    We explore the task of recognizing people's identities in photo albums in an unconstrained setting. To facilitate this, we introduce the new People In Photo Albums (PIPA) dataset, consisting of over 60,000 instances of 2,000 individuals collected from public Flickr photo albums. With only about half of the person images containing a frontal face, the recognition task is very challenging due to the large variations in pose, clothing, camera viewpoint, image resolution and illumination. We propose the Pose Invariant PErson Recognition (PIPER) method, which accumulates the cues of poselet-level person recognizers trained by deep convolutional networks to discount for pose variations, combined with a face recognizer and a global recognizer. Experiments on three different settings confirm that in our unconstrained setup PIPER significantly improves on the performance of DeepFace, one of the best face recognizers as measured on the LFW dataset.
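
    The cue-accumulation idea can be pictured as each detected body part voting for an identity, with the face and global recognizers added as further cues. The sketch below averages per-cue score vectors; the function names and the simple averaging rule are illustrative assumptions rather than the authors' exact formulation.

    import numpy as np

    def accumulate_cues(part_scores, face_scores=None, global_scores=None):
        """part_scores: per-identity score vectors from the detected poselets."""
        cues = list(part_scores)
        if face_scores is not None:
            cues.append(face_scores)
        if global_scores is not None:
            cues.append(global_scores)
        fused = np.mean(np.stack(cues), axis=0)  # average scores across available cues
        return int(np.argmax(fused))             # index of the predicted identity

    # Three detected poselets plus a face cue, over four candidate identities.
    parts = [np.array([0.1, 0.6, 0.2, 0.1]),
             np.array([0.2, 0.5, 0.2, 0.1]),
             np.array([0.3, 0.3, 0.3, 0.1])]
    face = np.array([0.05, 0.8, 0.1, 0.05])
    print(accumulate_cues(parts, face_scores=face))  # -> 1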

    Idiosyncratic body motion influences person recognition

    Person recognition is an important human ability. The main source of information we use to recognize people is the face. However, a variety of other information also contributes to person recognition, and the face is almost exclusively perceived in the presence of a moving body. Here, we used recent motion capture and computer animation techniques to quantitatively explore the impact of body motion on person recognition. Participants were familiarized with two animated avatars, each performing the same basic sequence of karate actions with slight idiosyncratic differences in the body movements. The body of both avatars was the same, but they differed in their facial identity and body movements. In a subsequent recognition task, participants saw avatars whose facial identity consisted of morphs between the learned individuals. Across trials, each avatar was seen animated with sequences taken from both of the learned movement patterns. Participants were asked to judge the identity of the avatars. The avatars that contained the two original heads were predominantly identified by their facial identity regardless of body motion. More importantly, however, participants identified the ambiguous avatar primarily based on its body motion. This clearly shows that body motion can affect the perception of identity. Our results also highlight the importance of taking into account the face in the context of a body rather than solely concentrating on facial information for person recognition.

    Millimetre wave person recognition: hand-crafted vs learned features

    Imaging using millimeter waves (mmWs) has many advantages, including the ability to penetrate obscurants such as clothes and polymers. Although concealed weapon detection has been the predominant mmW imaging application, in this paper we aim to gain some insight into the potential of using mmW images for person recognition. We report experimental results on the mmW TNO database, consisting of 50 individuals, based on both hand-crafted features and learned features from AlexNet and VGG-face pretrained CNN models. Results suggest that: i) the mmW torso region is more discriminative than the mmW face and the entire body, ii) CNN features produce better results compared to hand-crafted features on mmW faces and the entire body, and iii) hand-crafted features slightly outperform CNN features on the mmW torso. This work has been partially supported by project CogniMetrics TEC2015-70627-R (MINECO/FEDER) and the SPATEK network (TEC2015-68766-REDC). E. Gonzalez-Sosa is supported by a PhD scholarship from Universidad Autonoma de Madrid. Vishal M. Patel was partially supported by US Office of Naval Research (ONR) Grant YIP N00014-16-1-3134. The authors also wish to thank TNO for providing access to the database.
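
    The "learned features" route described above can be sketched as embedding a cropped body region with a pretrained CNN and comparing probe and gallery descriptors by cosine similarity. The example below uses torchvision's AlexNet with ImageNet weights as a stand-in for the pretrained models mentioned in the abstract; mmW data loading, region cropping, and the preprocessing constants are assumptions for illustration.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    # Pretrained AlexNet with the final classification layer removed,
    # leaving a 4096-dimensional descriptor.
    weights = models.AlexNet_Weights.IMAGENET1K_V1
    alexnet = models.alexnet(weights=weights).eval()
    alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])

    preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                            T.Normalize(mean=[0.485, 0.456, 0.406],   # usual ImageNet stats,
                                        std=[0.229, 0.224, 0.225])])  # assumed here

    def embed(region):
        """region: a PIL image of a cropped body part (face, torso, or full body)."""
        with torch.no_grad():
            return alexnet(preprocess(region).unsqueeze(0)).squeeze(0)

    def similarity(probe, gallery_template):
        """Cosine similarity between two descriptors."""
        return torch.nn.functional.cosine_similarity(probe, gallery_template, dim=0).item()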

    Content-Based Video Retrieval in Historical Collections of the German Broadcasting Archive

    The German Broadcasting Archive (DRA) maintains the cultural heritage of radio and television broadcasts of the former German Democratic Republic (GDR). The uniqueness and importance of the video material stimulate a large scientific interest in the video content. In this paper, we present an automatic video analysis and retrieval system for searching in historical collections of GDR television recordings. It consists of video analysis algorithms for shot boundary detection, concept classification, person recognition, text recognition and similarity search. The performance of the system is evaluated from a technical and an archival perspective on 2,500 hours of GDR television recordings. Comment: TPDL 2016, Hannover, Germany. The final version is available at Springer via DOI.
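
    Of the pipeline stages listed above, shot boundary detection is the simplest to sketch concretely. The example below flags a boundary when the colour-histogram distance between consecutive frames exceeds a threshold; the histogram size and threshold are assumptions, and the abstract does not describe the DRA system's actual algorithm.

    import numpy as np

    def frame_histogram(frame: np.ndarray, bins: int = 32) -> np.ndarray:
        """frame: HxWx3 uint8 array; returns a normalized per-channel colour histogram."""
        hists = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
                 for c in range(3)]
        h = np.concatenate(hists).astype(np.float64)
        return h / (h.sum() + 1e-9)

    def shot_boundaries(frames, threshold: float = 0.4):
        """Yield frame indices where the histogram distance to the previous frame is large."""
        prev = None
        for i, frame in enumerate(frames):
            h = frame_histogram(frame)
            if prev is not None and 0.5 * np.abs(h - prev).sum() > threshold:
                yield i
            prev = h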

    The Use of EEG Signals For Biometric Person Recognition

    This work is devoted to investigating EEG-based biometric recognition systems. One potential advantage of using EEG signals for person recognition is the difficulty of generating artificial signals with biometric characteristics, which makes the spoofing of EEG-based biometric systems a challenging task. However, more work needs to be done to overcome certain drawbacks that currently prevent the adoption of EEG biometrics in real-life scenarios: 1) the usually large number of employed sensors, 2) the still relatively low recognition rates (compared with some other biometric modalities), and 3) the template ageing effect. The existing shortcomings of EEG biometrics and their possible solutions are addressed from three main perspectives in the thesis: pre-processing, feature extraction and pattern classification. In pre-processing, task (stimuli) sensitivity and noise removal are investigated and discussed in separate chapters. For feature extraction, four novel features are proposed; for pattern classification, a new quality filtering method and a novel instance-based learning algorithm are described in respective chapters. A self-collected database (Mobile Sensor Database) is employed to investigate some important biometric-specific effects (e.g. the template ageing effect and the use of a low-cost sensor for recognition). In the research on pre-processing, a training data accumulation scheme is developed, which improves recognition performance by combining the data of different mental tasks for training; a new wavelet-based de-noising method is also developed, and its effectiveness in person identification is found to be considerable. Two novel features based on Empirical Mode Decomposition and the Hilbert Transform are developed, which provide the best biometric performance amongst all the newly proposed features and the other state-of-the-art features reported in the thesis; the other two newly developed wavelet-based features, while having slightly lower recognition accuracies, are computationally more efficient. The quality filtering algorithm is designed to employ the most informative EEG signal segments: experimental results indicate that using a small subset of the available data for feature training can yield a reasonable improvement in identification rate. The proposed instance-based template reconstruction learning algorithm shows significant effectiveness when tested on both the publicly available and the self-collected databases.
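
    The Hilbert-transform features mentioned above can be sketched as statistics of the instantaneous amplitude and frequency of an EEG segment. In the thesis these are computed on Empirical Mode Decomposition components; the sketch below applies them to a raw synthetic signal instead, and the particular four-value feature vector is an illustrative assumption.

    import numpy as np
    from scipy.signal import hilbert

    def hilbert_features(x: np.ndarray, fs: float) -> np.ndarray:
        """x: 1-D EEG segment (here a single component), fs: sampling rate in Hz."""
        analytic = hilbert(x)
        amplitude = np.abs(analytic)                    # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency
        return np.array([amplitude.mean(), amplitude.std(),
                         inst_freq.mean(), inst_freq.std()])

    # Synthetic example: 2 s of 10 Hz activity plus noise, sampled at 256 Hz.
    fs = 256.0
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
    print(hilbert_features(x, fs))  # amplitude near 1, mean frequency near 10 Hz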

    Report on the BTAS 2016 Video Person Recognition Evaluation

    © 2016 IEEE. This report presents results from the Video Person Recognition Evaluation held in conjunction with the 8th IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS). Two experiments required algorithms to recognize people in videos from the Point-and-Shoot Face Recognition Challenge Problem (PaSC). The first consisted of videos from a tripod-mounted, high-quality video camera. The second contained videos acquired from five different handheld video cameras. Each experiment comprised 1,401 videos of 265 subjects. The subjects, the scenes, and the actions carried out by the people are the same in both experiments. An additional experiment required algorithms to recognize people in videos from the Video Database of Moving Faces and People (VDMFP). This experiment comprised 958 videos of 297 subjects. Four groups from around the world participated in the evaluation. The top verification rate for PaSC from this evaluation is 0.98 at a false accept rate of 0.01, a remarkable advancement in performance from the competition held at FG 2015.
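
    A verification rate at a fixed false-accept rate, as reported above, is conventionally computed by choosing the decision threshold from the impostor score distribution and then measuring how many genuine scores clear it. The sketch below shows that computation on placeholder score distributions, not on PaSC data.

    import numpy as np

    def verification_rate_at_far(genuine: np.ndarray, impostor: np.ndarray,
                                 far: float = 0.01) -> float:
        """Fraction of genuine scores accepted at the threshold giving the requested FAR."""
        threshold = np.quantile(impostor, 1.0 - far)  # e.g. 99th percentile for FAR = 0.01
        return float(np.mean(genuine > threshold))

    # Placeholder score distributions standing in for real match scores.
    rng = np.random.default_rng(0)
    genuine = rng.normal(2.0, 1.0, 10_000)
    impostor = rng.normal(0.0, 1.0, 100_000)
    print(verification_rate_at_far(genuine, impostor, far=0.01))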

    Perceived ability and actual recognition accuracy for unfamiliar and famous faces

    In forensic person recognition tasks, mistakes in the identification of unfamiliar faces occur frequently. This study explored whether these errors might arise because observers are poor at judging their ability to recognize unfamiliar faces, and also whether they might conflate the recognition of familiar and unfamiliar faces. Across two experiments, we found that observers could predict their ability to recognize famous but not unfamiliar faces. Moreover, observers seemed to partially conflate these abilities by adjusting ability judgements for famous faces after a test of unfamiliar face recognition (Experiment 1) and vice versa (Experiment 2). These findings suggest that observers have limited insight into their ability to identify unfamiliar faces. The experiments also show that judgements of recognition abilities are malleable and can generalize across different face categories.