
    Removing the own-race bias in face recognition by attentional shift using fixation crosses to diagnostic features: An eye-tracking study

    Hills and Lewis (2011) have demonstrated that the own-race bias in face recognition can be reduced or even removed by guiding participants' attention, and potentially their eye movements, to the most diagnostic visual features. Using the same old/new recognition paradigm as Hills and Lewis, we recorded Black and White participants' eye movements whilst they viewed Black and White faces preceded by fixation crosses positioned at the bridge of the nose (between the eyes) or the tip of the nose. White faces were recognized more accurately when they followed high fixation crosses (at the bridge of the nose) than when they followed low fixation crosses; the converse was true for Black faces. These effects were independent of participant race. The fixation crosses attracted the first fixation but had less effect on other eye-tracking measures. Furthermore, the location of the first fixation was predictive of recognition accuracy. These results are consistent with an attentional allocation model of the own-race bias in face recognition and highlight the importance of the first fixation for face perception (cf. Hsiao & Cottrell, 2008).
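    Accuracy in an old/new recognition paradigm such as the one above is commonly summarized with the signal-detection sensitivity index d′, computed from hit and false-alarm rates. A minimal sketch, illustrative only (the clipping rule and example values are assumptions, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' for an old/new recognition task.

    Rates are clipped away from 0 and 1 so the inverse-normal transform
    stays finite (a common correction for perfect scores).
    """
    hr = min(max(hit_rate, 0.01), 0.99)
    fa = min(max(false_alarm_rate, 0.01), 0.99)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hr) - z(fa)

# Hypothetical example: 80% hits, 20% false alarms -> d' of about 1.68
print(round(d_prime(0.80, 0.20), 2))
```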

    Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging

    Automatically monitoring and quantifying stress-induced thermal dynamic information in real-world settings is an extremely important but challenging problem. In this paper, we explore whether mobile thermal imaging can measure the rich physiological cues of mental stress that can be deduced from a person's nose temperature. To answer this question we build i) a framework for continuously monitoring nasal thermal variability patterns and ii) a novel set of thermal variability metrics to capture the richness of this dynamic information. We evaluated our approach in a series of studies, including laboratory-based psychosocial stress-induction tasks and real-world factory settings. We demonstrate that our approach has the potential to assess stress responses beyond controlled laboratory settings.
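    The paper's specific variability metrics are not given in this abstract, but the general idea of summarizing a nasal-temperature time series can be sketched as follows (all metric names and the frame rate are hypothetical, not the authors' definitions):

```python
import numpy as np

def thermal_variability(nose_temp, fs=10.0):
    """Toy summary metrics for a nasal-temperature time series.

    nose_temp: 1-D sequence of per-frame nose temperatures (deg C).
    fs: thermal camera frame rate in Hz (hypothetical value).
    """
    t = np.asarray(nose_temp, dtype=float)
    seconds = np.arange(t.size) / fs
    slope = np.polyfit(seconds, t, 1)[0]  # overall drift in deg C per second
    return {
        "mean": float(t.mean()),                                 # average temperature
        "std": float(t.std()),                                   # overall variability
        "abs_diff_rate": float(np.abs(np.diff(t)).mean() * fs),  # frame-to-frame change, deg C/s
        "trend_slope": float(slope),                             # cooling (<0) or warming (>0)
    }
```

    A stress-related nasal cooling episode would show up here as a negative `trend_slope` together with an elevated `abs_diff_rate`.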

    "Who are you?" - Learning person specific classifiers from video

    We investigate the problem of automatically labelling faces of characters in TV or movie material with their names, using only weak supervision from automatically aligned subtitle and script text. Our previous work (Everingham et al. [8]) demonstrated promising results on the task, but the coverage of the method (the proportion of video labelled) and its generalization were limited by a restriction to frontal faces and nearest-neighbour classification. In this paper we build on that method, extending the coverage greatly through the detection and recognition of characters in profile views. In addition, we make the following contributions: (i) seamless tracking, integration and recognition of profile and frontal detections, and (ii) a character-specific multiple kernel classifier which is able to learn the features best able to discriminate between the characters. We report results on seven episodes of the TV series “Buffy the Vampire Slayer”, demonstrating significantly increased coverage and performance with respect to previous methods on this material.
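    A multiple kernel classifier of the kind mentioned combines several base kernels with per-kernel weights. As a hedged sketch of the core idea only (the paper's actual features, kernels and learning procedure are not given here), a convex combination of RBF kernels plugged into a closed-form kernel ridge fit:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) Gram matrix between row-sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, gammas, weights):
    """Convex combination of base kernels: the core idea behind MKL."""
    return sum(w * rbf_kernel(X, Y, g) for g, w in zip(gammas, weights))

def fit(X, y, gammas, weights, lam=1e-2):
    """Closed-form kernel ridge fit on the combined kernel."""
    K = combined_kernel(X, X, gammas, weights)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gammas, weights):
    """Score new samples; the sign gives the predicted class."""
    return combined_kernel(X_new, X_train, gammas, weights) @ alpha
```

    In a full MKL formulation the weights themselves are learned jointly with the classifier, which is what lets the model pick the most discriminative feature channels per character.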

    Facial Feature Tracking and Occlusion Recovery in American Sign Language

    Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with the detection of such grammatical indicators is occlusion recovery: if the signer's hand blocks his or her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by various Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes. Funding: National Science Foundation (IIS-0329009, IIS-0093367, IIS-9912573, EIA-0202067, EIA-9809340).
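    Template tracking with an occlusion test can be sketched with zero-mean normalized cross-correlation: search a window around the previous location and declare the track lost when the best match score drops, so the tracker can re-detect once the occluding hand moves away. This is an illustrative reconstruction, not the paper's system (the function names, search window and threshold are assumptions):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_template(frame, template, prev_xy, search=8, thresh=0.6):
    """Search a window around the previous (x, y) location.

    Returns ((x, y), True) on a confident match; if the best score falls
    below `thresh` (e.g. a hand occludes the feature), returns the old
    location with False so the caller can re-detect after the occlusion.
    """
    h, w = template.shape
    px, py = prev_xy
    best, best_xy = -1.0, prev_xy
    for y in range(max(0, py - search), min(frame.shape[0] - h, py + search) + 1):
        for x in range(max(0, px - search), min(frame.shape[1] - w, px + search) + 1):
            score = ncc(frame[y:y + h, x:x + w], template)
            if score > best:
                best, best_xy = score, (x, y)
    return (best_xy, True) if best >= thresh else (prev_xy, False)
```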

    Using data visualization to deduce faces expressions

    International conference held in Turkey, 6-8 September 2018. Collecting and examining multimodal sensor data of a human face in real time is an important problem in computer vision, with applications in medical analysis and monitoring, entertainment, and security. Despite advances in the field, many issues remain open in the identification of facial expressions. Different algorithms and approaches have been developed to find patterns and characteristics that can support automatic expression identification. One way to study data is through data visualization, which turns numbers and letters into aesthetically pleasing visuals, making it easy to recognize patterns and find exceptions. In this article, we use information visualization as a tool to analyse data points and uncover possible patterns in four different facial expressions.
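    A common first step when visualizing such multi-sensor data points is projecting them to two dimensions, for example with PCA, so that each expression can be drawn as a separately colored scatter cluster. A minimal numpy-only sketch (the choice of projection is an assumption, not the article's method):

```python
import numpy as np

def pca_project(X, k=2):
    """Project samples (rows of X) onto the top-k principal components,
    so high-dimensional face-sensor readings can be drawn as a 2-D scatter."""
    Xc = X - X.mean(axis=0)  # center each feature
    # SVD of the centered data: rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T     # coordinates along the top axes
```

    The four expression classes could then be plotted with a different color each, e.g. via matplotlib's `scatter`, to see whether they form separable clusters.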

    Towards Odor-Sensitive Mobile Robots

    J. Monroy, J. Gonzalez-Jimenez, "Towards Odor-Sensitive Mobile Robots", Electronic Nose Technologies and Advances in Machine Olfaction, IGI Global, pp. 244-263, 2018, doi:10.4018/978-1-5225-3862-2.ch012. Preprint version, with the publisher's permission. Of all the components of a mobile robot, its sensorial system is undoubtedly among the most critical when operating in real environments. Until now, these sensorial systems have mostly relied on range sensors (laser scanners, sonar, active triangulation) and cameras. While electronic noses have barely been employed, they can provide complementary sensory information that is vital for some applications, as it is for humans. This chapter analyzes the motivation for providing a robot with gas-sensing capabilities and reviews some of the hurdles that are preventing smell from achieving the importance of other sensing modalities in robotics. The achievements made so far are reviewed to illustrate the current status of the three main fields within robotic olfaction: the classification of volatile substances, the spatial estimation of gas dispersion from sparse measurements, and the localization of the gas source within a known environment.
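    For the second field, estimating a gas distribution from sparse measurements, one simple family of methods spreads each reading over nearby map cells with a Gaussian weight and normalizes, in the spirit of kernel-based gas distribution mapping. The function name and parameter values below are illustrative, not taken from the chapter:

```python
import numpy as np

def gas_map(positions, readings, cells, sigma=0.5):
    """Estimate a gas-concentration map from sparse point measurements.

    Each reading contributes a Gaussian-weighted vote to every map cell;
    a cell's value is the weight-normalized mean of nearby readings.
    """
    P = np.asarray(positions, float)   # (n, 2) measurement positions
    r = np.asarray(readings, float)    # (n,) measured concentrations
    C = np.asarray(cells, float)       # (m, 2) map cell centers
    d2 = ((C[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    wsum = w.sum(axis=1)
    # cells with no nearby measurement stay at 0 (unknown)
    return np.where(wsum > 1e-9, (w @ r) / np.maximum(wsum, 1e-9), 0.0)
```

    The resulting map can then feed a source-localization step, e.g. by treating the cell with the highest estimated concentration as a candidate source location.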