
    A novel facial expression recognition method using bi-dimensional EMD based edge detection

    Facial expressions provide an important channel of nonverbal communication. Facial expression recognition techniques detect people’s emotions from their facial expressions and have found applications in technical fields such as Human-Computer Interaction (HCI) and security monitoring. Technical applications generally require fast processing and decision making, so it is imperative to develop recognition methods that detect facial expressions both effectively and efficiently. Traditionally, human facial expressions are recognized using standard images, and existing recognition methods require subjective expertise and high computational costs. This thesis proposes a novel method for facial expression recognition using image edge detection based on Bi-dimensional Empirical Mode Decomposition (BEMD). In this research, a BEMD-based edge detection algorithm was developed, a facial expression measurement metric was created, and intensive database testing was conducted. The recognition success rates suggest that the proposed method could be a viable alternative to traditional methods for human facial expression recognition with substantially lower computational costs. Furthermore, a possible blind-detection technique was proposed as a result of this research; initial detection results suggest that it may lead to even more efficient techniques for facial expression recognition.
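    The abstract does not specify the decomposition details, but the general idea of BEMD-style edge detection can be sketched as follows: approximate the first intrinsic mode function (the fine-scale detail layer) from local envelope surfaces and treat its magnitude as an edge map. This is an illustrative approximation of a single sifting step, not the thesis implementation; window size and smoothing are assumed values.

    ```python
    # Minimal sketch of envelope-based edge extraction in the spirit of BEMD.
    # Approximates one sifting step (the first IMF) with morphological
    # max/min envelopes; large detail magnitudes mark fine-scale structure
    # such as edges. Parameters are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter, gaussian_filter

    def bemd_like_edge_map(image: np.ndarray, window: int = 7, sigma: float = 2.0) -> np.ndarray:
        img = image.astype(np.float64)
        # Upper/lower envelopes: smoothed surfaces of local maxima and minima.
        upper = gaussian_filter(maximum_filter(img, size=window), sigma)
        lower = gaussian_filter(minimum_filter(img, size=window), sigma)
        local_mean = (upper + lower) / 2.0
        # Detail (proto-IMF) = image minus its local mean surface.
        detail = img - local_mean
        edges = np.abs(detail)
        return edges / (edges.max() + 1e-12)  # normalise to [0, 1]
    ```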

    3-D facial expression representation using B-spline statistical shape model

    Effective representation and recognition of human faces are essential in a number of applications including human-computer interaction (HCI), biometrics and video conferencing. This paper presents initial results obtained with a novel method for 3-D facial expression representation based on the shape space vector of a statistical shape model. The statistical shape model is constructed from the control points of the B-spline surfaces of the training data set. Model fitting to the data is achieved by a modified iterative closest point (ICP) method with the surface deformations restricted to the estimated shape space. The proposed method is fully automated and tested on synthetic 3-D facial data with various facial expressions. Experimental results show that the proposed 3-D facial expression representation can potentially be used in practical applications.
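    As a rough illustration of the shape-space idea (not the paper's code), a point-based statistical shape model can be built by PCA over aligned control points, after which a new shape is expressed and constrained in the learned shape space, analogous to restricting ICP deformations to the estimated modes. Correspondence and alignment of the control points are assumed to be done beforehand.

    ```python
    # Illustrative statistical shape model over flattened control points.
    import numpy as np

    def build_shape_model(training_shapes: np.ndarray, n_modes: int = 5):
        """training_shapes: (n_samples, n_points * 3) pre-aligned control points."""
        mean_shape = training_shapes.mean(axis=0)
        centred = training_shapes - mean_shape
        # PCA via SVD; rows of vt are the principal modes of shape variation.
        _, s, vt = np.linalg.svd(centred, full_matrices=False)
        modes = vt[:n_modes]
        variances = (s[:n_modes] ** 2) / max(len(training_shapes) - 1, 1)
        return mean_shape, modes, variances

    def project_to_shape_space(shape, mean_shape, modes, variances, clip=3.0):
        """Shape-space coefficients, clipped to +/- clip std devs, then reconstructed."""
        b = modes @ (shape - mean_shape)
        b = np.clip(b, -clip * np.sqrt(variances), clip * np.sqrt(variances))
        return mean_shape + modes.T @ b, b
    ```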

    Dimensional Affect and Expression in Natural and Mediated Interaction

    There is a perceived controversy as to whether the cognitive representation of affect is better modelled using a dimensional or a categorical theory. This paper first suggests that these views are, in fact, compatible. It then discusses this theme and related issues in reference to a commonly stated application domain of research on human affect and expression: human-computer interaction (HCI). The novel suggestion here is that a more realistic framing of studies of human affect and expression with reference to HCI and, particularly, HCHI (Human-Computer-Human Interaction) entails some reformulation of the approach to the basic phenomena themselves. This theme is illustrated with examples from several recent research projects. Comment: Invited article presented at the 23rd Annual Meeting of the International Society for Psychophysics, Tokyo, Japan, 20-23 October 2007; Proceedings of Fechner Day, vol. 23 (2007).
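    A toy way to see why the two views can coexist: discrete emotion categories can be placed as points or regions in a continuous valence-arousal space, and dimensional measurements can be mapped back to the nearest category. The coordinates below are rough illustrative values, not data from the paper.

    ```python
    # Illustrative mapping between categorical labels and dimensional affect.
    CATEGORY_TO_DIMENSIONS = {
        # label: (valence, arousal), each roughly in [-1, 1]
        "happiness": ( 0.8,  0.5),
        "sadness":   (-0.7, -0.4),
        "anger":     (-0.6,  0.7),
        "fear":      (-0.7,  0.6),
        "surprise":  ( 0.1,  0.8),
        "neutral":   ( 0.0,  0.0),
    }

    def nearest_category(valence: float, arousal: float) -> str:
        """Map a point in valence-arousal space to the closest categorical label."""
        return min(
            CATEGORY_TO_DIMENSIONS,
            key=lambda c: (CATEGORY_TO_DIMENSIONS[c][0] - valence) ** 2
                          + (CATEGORY_TO_DIMENSIONS[c][1] - arousal) ** 2,
        )
    ```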

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we call human computing, should be about anticipatory user interfaces that are human-centered: built for humans and based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    A survey on mouth modeling and analysis for Sign Language recognition

    © 2015 IEEE. Around 70 million Deaf people worldwide use Sign Languages (SLs) as their native languages. At the same time, they have limited reading/writing skills in the spoken language. This puts them at a severe disadvantage in many contexts, including education, work, and use of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and translation between sign and spoken language. Research in ASLR usually revolves around automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, the syntax and the prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the most involved parts of the face in non-manuals. Mouth actions related to ASLR can be either mouthings, i.e. visual syllables articulated with the mouth while signing, or non-verbal mouth gestures. Both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and which relevant techniques exist within ASLR. Since limited research has been conducted on automatic analysis of mouth motion in the context of ASLR, we proceed by surveying relevant techniques from the areas of automatic mouth expression analysis and visual speech recognition which can be applied to the task. Finally, we conclude by presenting the challenges and potential of automatic analysis of mouth motion in the context of ASLR.
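    For context, mouth analysis pipelines of this kind typically start from simple geometric descriptors of the mouth region. The sketch below is an illustrative assumption, not the survey's method: it presumes 2-D facial landmarks in the common 68-point layout (mouth at indices 48-67, outer eye corners at 36 and 45) and computes a few scale-normalised mouth features that could feed a mouthing or mouth-gesture classifier.

    ```python
    # Hedged sketch of basic mouth-region descriptors from facial landmarks.
    import numpy as np

    def mouth_features(landmarks: np.ndarray) -> dict:
        """landmarks: (68, 2) array of (x, y) coordinates in the 68-point layout."""
        mouth = landmarks[48:68]
        left, right = landmarks[48], landmarks[54]   # mouth corners
        top, bottom = landmarks[51], landmarks[57]   # mid upper/lower lip
        width = np.linalg.norm(right - left)
        height = np.linalg.norm(bottom - top)
        # Normalise by inter-ocular distance so features are scale-invariant.
        iod = np.linalg.norm(landmarks[45] - landmarks[36]) + 1e-12
        return {
            "openness": height / iod,          # vertical lip aperture
            "spread": width / iod,             # horizontal stretch
            "aspect_ratio": height / (width + 1e-12),
            "centroid": mouth.mean(axis=0),    # mouth position in the image
        }
    ```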