
    Comparison of fusion methods for thermo-visual surveillance tracking

    In this paper, we evaluate the appearance tracking performance of multiple fusion schemes that combine information from standard CCTV and thermal infrared spectrum video for the tracking of surveillance objects such as people, faces, bicycles and vehicles. We show results on numerous real-world multimodal surveillance sequences, tracking challenging objects whose appearance changes rapidly. Based on these results, we determine the most promising fusion scheme.
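
    The simplest baseline such a comparison could include is pixel-level weighted fusion of the two registered modalities. The sketch below illustrates that baseline with OpenCV; the weight alpha, the file names, and the assumption of pre-registered, same-size frames are illustrative choices, not details taken from the paper.

```python
# Minimal sketch: pixel-level weighted fusion of registered visible and
# thermal frames. alpha and the file names are illustrative assumptions.
import cv2
import numpy as np

alpha = 0.6  # hypothetical weight favouring the visible channel

visible = cv2.imread("visible_frame.png", cv2.IMREAD_GRAYSCALE)
thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)

# Assumes the two sensors are spatially registered and the frames are the
# same size, so a per-pixel blend is meaningful.
fused = cv2.addWeighted(visible.astype(np.float32), alpha,
                        thermal.astype(np.float32), 1.0 - alpha, 0.0)
fused = np.clip(fused, 0, 255).astype(np.uint8)

# An appearance tracker would then model `fused` rather than either
# single modality.
cv2.imwrite("fused_frame.png", fused)
```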

    Face tracking using a hyperbolic catadioptric omnidirectional system

    In the first part of this paper, we present a brief review of catadioptric omnidirectional systems. The special case of the hyperbolic omnidirectional system is analysed in depth. The literature shows that a hyperboloidal mirror has two clear advantages over alternative geometries. Firstly, a hyperboloidal mirror has a single projection centre [1]. Secondly, the image resolution is uniformly distributed along the mirror’s radius [2]. In the second part of this paper, we show empirical results for the detection and tracking of faces from the omnidirectional images using the Viola-Jones method. Both panoramic and perspective projections, extracted from the omnidirectional image, were used for that purpose. The omnidirectional image size was 480x480 pixels, in greyscale. The tracking method used regions of interest (ROIs) set as the result of the detections of faces from a panoramic projection of the image. In order to avoid losing or duplicating detections, the panoramic projection was extended horizontally. Duplications were eliminated based on the ROIs established by previous detections. After a confirmed detection, faces were tracked from perspective projections (called virtual cameras), each one associated with a particular face. The zoom, pan and tilt of each virtual camera were determined by the ROIs previously computed on the panoramic image. The results show that, with a careful combination of the two projections, good frame rates can be achieved while tracking faces reliably.
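
    As an illustration of the pipeline described above (not the authors' code), the sketch below unwraps a 480x480 omnidirectional frame into a panoramic strip, extends it horizontally across the 0/360 degree seam as the paper describes, and runs OpenCV's Viola-Jones cascade on the result. The mirror centre, annulus radii, panorama size and overlap width are assumed values.

```python
# Sketch: polar unwrap of an omnidirectional image, then face detection.
import cv2
import numpy as np

omni = cv2.imread("omni_frame.png", cv2.IMREAD_GRAYSCALE)  # 480x480 assumed
cx, cy = 240, 240          # mirror centre (assumed)
r_in, r_out = 60, 230      # usable annulus of the mirror (assumed)
pano_w, pano_h = 1024, r_out - r_in

# Build a polar-to-Cartesian lookup: each panorama column is an angle,
# each row a radius along the mirror.
theta = np.linspace(0, 2 * np.pi, pano_w, endpoint=False)
radius = np.linspace(r_in, r_out, pano_h)
map_x = (cx + np.outer(radius, np.cos(theta))).astype(np.float32)
map_y = (cy + np.outer(radius, np.sin(theta))).astype(np.float32)
pano = cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)

# Extend the panorama horizontally so faces crossing the angular seam
# are neither lost nor duplicated.
overlap = 100
pano_ext = np.hstack([pano, pano[:, :overlap]])

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(pano_ext, scaleFactor=1.1, minNeighbors=5)
print(faces)  # ROIs that would seed the per-face virtual cameras
```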

    Seeing two faces together: preference formation in humans and rhesus macaques

    Humans, great apes and Old World monkeys show selective attention to faces depending on conspecificity, familiarity and social status, supporting the view that primates share similar face processing mechanisms. Although many studies have examined face scanning strategies in monkeys and humans, the mechanisms influencing viewing preference have received little attention. To determine how face categories influence viewing preference in humans and rhesus macaques (Macaca mulatta), we performed two eye-tracking experiments using a visual preference task in which pairs of faces from different species were presented simultaneously. The results indicated that viewing time was significantly influenced by the pairing of the face categories. Humans showed a strong bias towards the own-race face in an Asian–Caucasian condition. Rhesus macaques directed more attention towards non-human primate faces when these were paired with human faces, regardless of the species. When rhesus faces were paired with faces from Barbary macaques (Macaca sylvanus) or chimpanzees (Pan troglodytes), the novel species’ faces attracted more attention. These results indicate that monkeys’ viewing preferences, as assessed by a visual preference task, are modulated by several factors, species and dominance being the most influential.

    A morphological approach for segmentation and tracking of human faces

    A new technique for segmenting and tracking human faces in video sequences is presented. The technique relies on morphological tools: connected operators are used to extract the connected component that most likely belongs to a face, and partition projection is used to track this component through the sequence. A binary partition tree (BPT) is used to implement the connected operator. The BPT is constructed based on a chrominance criterion, and its nodes are analyzed so that the selected node maximizes an estimate of the likelihood of being part of a face. The tracking is performed using a partition projection approach: images are divided into face and non-face parts, which are tracked through the sequence. The technique has been successfully assessed using several test sequences from the MPEG-4 (raw format) and MPEG-7 (MPEG-1 format) databases.
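
    The following sketch is not the authors' BPT implementation; it illustrates the chrominance-driven idea with simpler tools: threshold skin-like chrominance in the YCrCb space, then keep the connected component most likely to be a face (here, simply the largest). The chrominance bounds are a common heuristic, not the paper's criterion.

```python
# Sketch: chrominance-based face segmentation via connected components.
import cv2
import numpy as np

frame = cv2.imread("frame.png")
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

# Hypothetical skin-tone chrominance bounds; the paper instead derives
# its criterion from statistics of the BPT nodes.
mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if num > 1:
    # Skip label 0 (background); keep the largest skin-coloured component
    # as the face candidate.
    best = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    face_mask = (labels == best).astype(np.uint8) * 255
    cv2.imwrite("face_mask.png", face_mask)
```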

    Attentional Bias to Facial Expressions of Different Emotions – A Cross-Cultural Comparison of ≠Akhoe Hai||om and German Children and Adolescents

    The attentional bias to negative information enables humans to quickly identify and respond appropriately to potentially threatening situations. Because of its adaptive function, the enhanced sensitivity to negative information is expected to be a universal trait, shared by all humans regardless of their cultural background. However, existing research focuses almost exclusively on humans from Western industrialized societies, who are not representative of the human species. Therefore, we compare humans from two distinct cultural contexts: adolescents and children from Germany, a Western industrialized society, and from the ≠Akhoe Hai||om, semi-nomadic hunter-gatherers in Namibia. We predicted that both groups would show an attentional bias toward negative facial expressions as compared to neutral or positive faces. We used eye-tracking to measure fixation duration on facial expressions depicting different emotions, including negative (fear, anger), positive (happy) and neutral faces. Both Germans and the ≠Akhoe Hai||om gazed longer at fearful faces but for shorter durations at angry faces, challenging the notion of a general bias toward negative emotions. For happy faces, fixation durations varied between the two groups, suggesting more flexibility in the response to positive emotions. Our findings emphasize the need to place research on emotion perception within an evolutionary, cross-cultural comparative framework that considers the adaptive significance of specific emotions, rather than merely differentiating between positive and negative information, and that enables systematic comparisons across participants from diverse cultural backgrounds.

    MobiFace: A Novel Dataset for Mobile Face Tracking in the Wild

    Face tracking is the crucial first step for mobile applications that analyse target faces over time. However, this problem has received little attention, mainly due to the scarcity of dedicated face tracking benchmarks. In this work, we introduce MobiFace, the first dataset for single face tracking in mobile situations. It consists of 80 unedited live-streaming mobile videos captured by 70 different smartphone users in fully unconstrained environments. Over 95K bounding boxes are manually labelled. The videos are carefully selected to cover typical smartphone usage and are annotated with 14 attributes: 6 newly proposed and 8 commonly seen in object tracking. We evaluate 36 state-of-the-art trackers, including facial landmark trackers, generic object trackers and trackers that we have fine-tuned or improved. The results suggest that mobile face tracking cannot be solved by existing approaches. In addition, we show that fine-tuning on the MobiFace training data significantly boosts the performance of deep learning-based trackers, suggesting that MobiFace captures the unique characteristics of mobile face tracking. Our goal is to offer the community a diverse dataset to enable the design and evaluation of mobile face trackers. The dataset, annotations and evaluation server will be available at https://mobiface.github.io/. Comment: to appear at the 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019).
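
    Benchmarks of this kind typically score a tracker by its overlap with the ground-truth boxes. The sketch below shows a common success-rate metric, the fraction of frames whose intersection-over-union exceeds a threshold; it is a generic illustration, and the MobiFace evaluation server may compute its metrics differently.

```python
# Sketch: IoU-based success rate for single-object tracker evaluation.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(pred_boxes, gt_boxes, thresh=0.5):
    """Fraction of frames where the tracker's IoU exceeds `thresh`."""
    scores = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return float(np.mean([s >= thresh for s in scores]))
```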

    Computer Vision-based Approach for Rejecting Portraits and Mannequins

    In this innovation, we propose to detect human faces that are stationary, so that statues, mannequins, portraits of people, and face false positives on other static objects can be excluded from further processing in framing and tracking experiences. Training halls, seminar and board rooms, conference rooms, and office huddle and personal workspaces often contain statues, mannequins, or other face-like 3-dimensional figures, as well as people depicted in portraits or personal photo collections, within the camera's field of view. The ability to detect such static faces and remove them from further processing makes the framing and tracking experience much more stable for end users: framing and tracking then react only to real, live human faces and not to the stationary faces of statues, mannequins and portraits.
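
    A minimal sketch of the core rejection idea: a detected face whose centre barely moves over a window of frames is flagged as a portrait or mannequin and dropped from framing. The window length, motion threshold and class name are illustrative choices, not the disclosed implementation.

```python
# Sketch: flag faces whose detected centre stays still for many frames.
from collections import deque

class StaticFaceFilter:
    def __init__(self, window=30, max_motion=3.0):
        self.window = window          # frames to observe (assumed value)
        self.max_motion = max_motion  # pixels of allowed drift (assumed)
        self.history = deque(maxlen=window)

    def update(self, face_box):
        """face_box: (x, y, w, h) from the detector for one tracked face."""
        cx = face_box[0] + face_box[2] / 2.0
        cy = face_box[1] + face_box[3] / 2.0
        self.history.append((cx, cy))

    def is_static(self):
        """True once the centre has stayed within max_motion all window long."""
        if len(self.history) < self.window:
            return False  # not enough evidence yet
        xs = [p[0] for p in self.history]
        ys = [p[1] for p in self.history]
        return (max(xs) - min(xs) < self.max_motion and
                max(ys) - min(ys) < self.max_motion)
```

    Real faces exhibit micro-motion (breathing, head sway), so their centres drift past such a threshold; a filter of this kind would sit between the face detector and the framing logic.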

    RGBD Datasets: Past, Present and Future

    Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces and identification. By extracting relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes. Comment: 8 pages excluding references (CVPR style).