9 research outputs found

    A modular approach to facial feature segmentation on real sequences

    No full text
    In this paper a modular approach of gradual confidence for facial feature extraction from real video frames is presented. The problem is addressed under general imaging conditions and soft presumptions. The proposed methodology copes with large variations in the appearance of diverse subjects, as well as of the same subject in various instances within real video sequences. Areas of the face that are statistically salient form an initial set of regions that are likely to include information about the features of interest. Enhancement of these regions produces closed objects, which reveal, through the use of a fuzzy system, a dominant angle, i.e. the facial rotation angle. The object set is restricted using the dominant angle. An exhaustive search is then performed among all candidate objects, matching a pattern that models the relative position of the eyes and the mouth. Labeling of the winning features can be used to evaluate the extracted features and provide feedback in an iterative framework. A subset of the MPEG-4 facial definition or facial animation parameter set can be obtained. This gradual feature revelation is performed under optimization at each step, producing a posteriori knowledge about the face and leading to a step-by-step visualization of the features in search.
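
    The exhaustive eye/mouth matching step lends itself to a small illustration. The following is a minimal sketch, assuming candidate objects are reduced to 2-D centroids and the dominant angle is given in degrees; the triangle-like template and scoring terms are hypothetical, not the paper's actual model.

```python
# Hypothetical sketch of the exhaustive eye/mouth configuration search described above.
# The template, scoring terms, and weights are illustrative assumptions, not the
# authors' actual parameters.
from itertools import permutations

import numpy as np

def score_triplet(eye_l, eye_r, mouth, dominant_angle_deg):
    """Lower score = better fit of three centroids to an eyes-plus-mouth template."""
    eye_l, eye_r, mouth = (np.asarray(p, dtype=float) for p in (eye_l, eye_r, mouth))
    eye_axis = eye_r - eye_l
    eye_dist = np.linalg.norm(eye_axis)
    if eye_dist < 1e-6:
        return np.inf
    # The inter-ocular axis should agree with the dominant (facial rotation) angle.
    axis_angle = np.degrees(np.arctan2(eye_axis[1], eye_axis[0]))
    angle_err = abs((axis_angle - dominant_angle_deg + 180.0) % 360.0 - 180.0)
    # The mouth should lie roughly one inter-ocular distance from the eye midpoint.
    midpoint = (eye_l + eye_r) / 2.0
    dist_err = abs(np.linalg.norm(mouth - midpoint) / eye_dist - 1.0)
    return angle_err / 45.0 + dist_err

def best_face_triplet(candidate_centroids, dominant_angle_deg):
    """Exhaustively test ordered triplets of candidate object centroids."""
    best, best_score = None, np.inf
    for eye_l, eye_r, mouth in permutations(candidate_centroids, 3):
        s = score_triplet(eye_l, eye_r, mouth, dominant_angle_deg)
        if s < best_score:
            best, best_score = (eye_l, eye_r, mouth), s
    return best, best_score
```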

    Probabilistic Video-Based Gesture Recognition Using Self-Organizing Feature Maps

    No full text
    This work introduces a probabilistic recognition scheme for hand gestures. Self-organizing feature maps are used to model spatiotemporal information extracted through image processing. Two models are built for each gesture category and, together with appropriate distance metrics, produce a validated classification mechanism that performs consistently in experiments on video sequences of acted gestures.
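
    As a rough illustration of this classification idea, the sketch below trains one small self-organizing map per gesture class on its feature vectors and labels a new gesture by the map that reproduces its features with the smallest quantization error. It simplifies the paper's setup (which builds two models per category), and all names and parameters are assumptions.

```python
# Minimal sketch (assumption: not the authors' code) of SOM-based gesture classification
# by per-class quantization error.
import numpy as np

class TinySOM:
    """A 1-D self-organizing feature map trained with a shrinking neighborhood."""
    def __init__(self, n_nodes, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_nodes, dim))

    def fit(self, data, epochs=20, lr0=0.5, sigma0=None):
        n = len(self.w)
        sigma0 = sigma0 or n / 2.0
        steps = epochs * len(data)
        t = 0
        for _ in range(epochs):
            for x in data:
                lr = lr0 * (1.0 - t / steps)
                sigma = max(sigma0 * (1.0 - t / steps), 0.5)
                # Best-matching unit and Gaussian neighborhood update.
                bmu = int(np.argmin(np.linalg.norm(self.w - x, axis=1)))
                dist = np.abs(np.arange(n) - bmu)
                h = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
                self.w += lr * h[:, None] * (x - self.w)
                t += 1
        return self

    def quantization_error(self, data):
        """Mean distance from each sample to its best-matching unit."""
        return float(np.mean([np.min(np.linalg.norm(self.w - x, axis=1)) for x in data]))

def classify(gesture_features, soms_per_class):
    """Pick the class whose SOM reproduces the gesture's features with least error."""
    errors = {label: som.quantization_error(gesture_features)
              for label, som in soms_per_class.items()}
    return min(errors, key=errors.get)
```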

    Hand Trajectory Based Gesture Recognition Using Self-Organizing Feature Maps and Markov Models

    No full text
    This work presents the design and experimental verification of an original system architecture for recognizing gestures based solely on the hand trajectory. Self-organizing feature maps are used to model spatial information, while Markov models encode the temporal aspect of hand position within a trajectory. A validated classification mechanism is produced through a set of models, and a committee machine setup ensures robustness, as indicated by the experimental results. Index terms: gesture recognition, self-organizing feature map, Markov processes.
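
    A minimal sketch of this trajectory pipeline follows, under the assumption that hand positions are quantized to the nodes of a pre-trained map and each gesture class is scored by a first-order Markov chain over those node indices; the committee simply sums log-likelihoods across members, which is a simplification of the paper's committee machine.

```python
# Hedged sketch: quantize a hand trajectory to SOM node indices, score the symbol
# sequence with per-gesture Markov chains, and combine committee members by summing
# log-likelihoods. Structures and names are assumptions for illustration.
import numpy as np

def quantize_trajectory(trajectory, som_weights):
    """Map each 2-D hand position to the index of its nearest SOM node (symbol)."""
    return [int(np.argmin(np.linalg.norm(som_weights - np.asarray(p, dtype=float), axis=1)))
            for p in trajectory]

def fit_markov_chain(symbol_sequences, n_states, smoothing=1.0):
    """Estimate a first-order transition matrix with Laplace smoothing."""
    counts = np.full((n_states, n_states), smoothing)
    for seq in symbol_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(symbols, transition):
    return sum(np.log(transition[a, b]) for a, b in zip(symbols[:-1], symbols[1:]))

def committee_classify(trajectory, soms, chains_per_gesture):
    """Each (SOM, Markov chain) member votes; the gesture with the best total score wins."""
    scores = {}
    for gesture, chains in chains_per_gesture.items():
        total = 0.0
        for som_weights, chain in zip(soms, chains):
            symbols = quantize_trajectory(trajectory, som_weights)
            total += log_likelihood(symbols, chain)
        scores[gesture] = total
    return max(scores, key=scores.get)
```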

    Facial expression and gesture analysis for emotionally-rich man-machine interaction

    No full text
    This chapter presents a holistic approach to emotion modeling and analysis and to their applications in man-machine interaction. Starting from a symbolic representation of human emotions in this context, based on their expression via facial expressions and hand gestures, we show that it is possible to transform quantitative feature information from video sequences into an estimate of a user's emotional state. While these features can be used for simple representation purposes, in our approach they are utilized to provide feedback on the user's emotional state, with the aim of enabling next-generation interfaces that are able to recognize the emotional states of their users.
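
    Purely as an illustration of turning per-modality evidence into an emotional-state estimate, the snippet below fuses hypothetical facial-expression and gesture classifier outputs by a weighted average; the label set, weights, and fusion rule are assumptions and not taken from the chapter.

```python
# Illustrative sketch only: weighted late fusion of two per-modality probability
# vectors over an assumed emotion label set.
import numpy as np

EMOTIONS = ["joy", "anger", "sadness", "surprise", "fear", "disgust"]  # assumed labels

def fuse_emotion_estimates(face_probs, gesture_probs, face_weight=0.6):
    """Combine facial-expression and gesture probability vectors into one estimate."""
    face = np.asarray(face_probs, dtype=float)
    gesture = np.asarray(gesture_probs, dtype=float)
    fused = face_weight * face + (1.0 - face_weight) * gesture
    fused /= fused.sum()
    return EMOTIONS[int(np.argmax(fused))], fused
```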