
    EmbodiMentor: a science fiction prototype to embody different perspectives using augmented reality

    Conference held at UTAD, Vila Real, 1-3 December 2016. This paper describes EmbodiMentor, an interaction concept and metaphor that aims to enable users to embody a different person's or character's perspective, specify or modify his/her/its emotional and conditioning elements, and experience the resulting changes. Its use-case scenario is education and training in foreign languages and intercultural communication skills, where contextualization and first-person experiences in everyday settings are key to practical skill acquisition. It was born as the micro-science-fiction prototype: "Frances can't sleep. She crawls out of bed and with her EmbodiMentor runs through a range of a client's emotional states, pitching to each one. She then falls asleep." The application of the science fiction prototyping concept has proven a strong approach to developing and investigating innovative applications of emerging technologies.

    Quantification of vascular function changes under different emotion states: A pilot study

    Recent studies have indicated that physiological parameters change with emotion state. This study aimed to quantify the changes in vascular function at different emotion and sub-emotion states. Twenty young subjects were studied, with their finger photoplethysmographic (PPG) pulses recorded at three distinct emotion states: natural (1 minute), happiness and sadness (10 minutes each). Within the happiness and sadness periods, two sub-emotion states (calmness and outburst) were identified from the synchronously recorded videos. Reflection index (RI) and stiffness index (SI), two widely used indices of vascular function, were derived from the PPG pulses to quantify their differences between the three emotion states, as well as between the two sub-emotion states. The results showed that, compared with the natural emotion, RI and SI decreased in both the happiness and sadness emotions. The decreases in RI were significant for both happiness and sadness (both P < 0.01), but the decrease in SI was significant only for sadness (P < 0.01). Moreover, comparing happiness and sadness, there was a significant difference in RI (P < 0.01) but not in SI (P = 0.9). In addition, significantly larger RI values were observed with the outburst sub-emotion than with the calmness one for both happiness and sadness (both P < 0.01), whereas significantly larger SI values were observed with the outburst sub-emotion only in sadness (P < 0.05). Gender hardly influenced the RI and SI results in any of the three emotion measurements. This pilot study confirmed that vascular function changes with different emotion states can be quantified by a simple PPG measurement.
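As a hedged illustration of how the two indices described above might be computed once the pulse peaks have been located: the formulas below follow the common definitions of RI (diastolic peak amplitude as a percentage of the systolic peak) and SI (subject height over the systolic-to-diastolic transit time); the peak-detection step and all numeric values are assumptions, not taken from the study.

```python
# Illustrative sketch: Reflection Index (RI) and Stiffness Index (SI)
# from one PPG pulse, assuming the systolic and diastolic peaks have
# already been detected (peak detection itself is omitted here).

def reflection_index(systolic_amp, diastolic_amp):
    """RI: diastolic peak amplitude as a percentage of the systolic peak."""
    return 100.0 * diastolic_amp / systolic_amp

def stiffness_index(height_m, systolic_t, diastolic_t):
    """SI: subject height (m) divided by the systolic-to-diastolic
    transit time (s), giving m/s."""
    return height_m / (diastolic_t - systolic_t)

# Hypothetical pulse: systolic peak amplitude 1.0 at t=0.18 s,
# diastolic peak amplitude 0.62 at t=0.43 s, subject height 1.75 m.
ri = reflection_index(1.0, 0.62)        # 62.0 (%)
si = stiffness_index(1.75, 0.18, 0.43)  # 7.0 (m/s)
```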

    Extended LBP based Facial Expression Recognition System for Adaptive AI Agent Behaviour

    Automatic facial expression recognition is widely used for applications such as health care, surveillance and human-robot interaction. In this paper, we present a novel system that employs automatic facial emotion recognition for adaptive AI agent behaviour. The proposed system is equipped with Kirsch operator based local binary patterns for feature extraction and diverse classifiers for emotion recognition. First, we propose a novel variant of the local binary pattern (LBP) for feature extraction that is robust to illumination changes, scaling and rotation variations. The extracted features are then used as input to the classifier for recognizing seven emotions. The detected emotion is then used to enhance the behaviour selection of the artificial intelligence (AI) agents in a shooter game. The proposed system is evaluated on multiple facial expression datasets and outperforms other state-of-the-art models by a significant margin.
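For readers unfamiliar with the baseline that the extended variant builds on, a minimal sketch of the standard 8-neighbour LBP code (plain LBP, not the Kirsch-operator variant the paper proposes) could look like:

```python
def lbp_code(img, y, x):
    """Standard 8-neighbour LBP code for pixel (y, x): each neighbour
    whose intensity is >= the centre contributes one bit, reading
    clockwise from the top-left neighbour."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1],   # top row
            img[y][x+1],                                 # right
            img[y+1][x+1], img[y+1][x], img[y+1][x-1],   # bottom row
            img[y][x-1]]                                 # left
    return sum(1 << i for i, p in enumerate(nbrs) if p >= c)

# Toy 3x3 patch (centre value 6):
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch, 1, 1)  # 241
```

A full feature vector is then typically a histogram of these codes over image regions; the paper's extension changes how the neighbourhood responses are computed, not this basic encoding idea.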

    Facial Landmark Based Region of Interest Localization for Deep Facial Expression Recognition

    Automated facial expression recognition has gained much attention in recent years due to growing application areas such as computer-animated agents, sociable robots and human-computer interaction. Realizing a reliable facial expression recognition system through machine learning remains a challenging task, particularly on databases with large numbers of images. Convolutional Neural Network (CNN) architectures have been proposed to exploit large amounts of training data for better accuracy. For CNNs, however, no single architecture performs best across all tasks. In addition, the representation of the input image is as important as the architecture and the training data. Therefore, this study examines the performance of various CNN architectures trained on different regions of interest of the same input data. Experiments are performed on three distinct CNN architectures with three different crops of the same dataset. Results show that by appropriately localizing the facial region and selecting the right CNN architecture, it is possible to boost the recognition rate from 84% to 98% while decreasing the training time of the proposed CNN architectures.
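A minimal sketch of one way the landmark-based region-of-interest cropping step might work; the axis-aligned bounding-box strategy and the margin value are illustrative assumptions, not details taken from the paper.

```python
def roi_crop(landmarks, margin=0.1):
    """Hypothetical ROI: tight bounding box around detected facial
    landmarks, expanded on every side by `margin` times the box size.
    `landmarks` is a list of (x, y) points; returns (x0, y0, x1, y1)."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    w = max(xs) - min(xs)
    h = max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)

# Two toy landmarks spanning a 20x40 box -> expanded by 10% per side.
box = roi_crop([(10, 20), (30, 60)])  # (8.0, 16.0, 32.0, 64.0)
```

The resulting box would then be cropped and resized to the CNN's input resolution; the study's comparison of three crops corresponds to varying how tightly such a region encloses the face.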

    Real Time Talking System for Virtual Human based on ProPhone

    Lip-syncing is the process of assimilating speech with the lip motions of a virtual character. Building a virtual talking character is a challenging task because it must provide control over all articulatory movements and be synchronized with the speech signal. This study presents a virtual talking character system that aims to speed up and ease the visual talking process compared with previous techniques based on the blend-shapes approach. The system constructs the lip-syncing from a set of visemes for a reduced phoneme set using a new method named ProPhone, which depends on the probability of a phoneme appearing in English sentences. The contribution of this study is a real-time automatic talking system for English based on the concatenation of visemes, together with results evaluated against the phoneme-to-viseme table used by ProPhone.
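The core of any viseme-concatenation pipeline like the one above is a phoneme-to-viseme lookup followed by stitching the resulting mouth shapes together. The sketch below is a hedged illustration of that idea only: the mapping table, viseme names and duplicate-collapsing rule are invented for the example and are not the paper's actual ProPhone tables.

```python
# Illustrative phoneme -> viseme table (NOT the paper's ProPhone mapping).
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "ae": "open", "iy": "spread",
}

def to_visemes(phonemes):
    """Map a phoneme sequence to a viseme sequence, collapsing
    consecutive duplicates so the character holds one mouth shape
    per run instead of re-triggering the same pose."""
    out = []
    for ph in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "neutral")
        if not out or out[-1] != v:
            out.append(v)
    return out

seq = to_visemes(["m", "ae", "p"])  # ['bilabial', 'open', 'bilabial']
```

In a real-time system each viseme would additionally carry timing from the speech signal so the concatenated shapes stay synchronized with audio playback.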

    A Multi-Population FA for Automatic Facial Emotion Recognition

    Automatic facial emotion recognition systems are popular in various domains such as health care, surveillance and human-robot interaction. In this paper we present a novel multi-population FA for automatic facial emotion recognition. The overall system is equipped with horizontal-vertical neighborhood local binary patterns (hvnLBP) for feature extraction, a novel multi-population FA for feature selection, and diverse classifiers for emotion recognition. First, we extract features using hvnLBP, which are robust to illumination changes, scaling and rotation variations. Then, a novel FA variant is proposed to select the most important, emotion-specific features. These selected features are used as input to the classifier to recognize the seven basic emotions. The proposed system is evaluated on multiple facial expression datasets and compared with other state-of-the-art models.
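To make the feature-selection stage concrete, here is a deliberately simplified stand-in for population-based wrapper selection: a random bit-mask population scored by a fitness function, with light mutation between generations. This is a toy sketch of the general idea only; it does not implement the paper's multi-population FA update rules, and the population size, generation count and mutation rate are arbitrary assumptions.

```python
import random

def select_features(fitness, n_feats, pop=20, gens=30, seed=0):
    """Toy population-based wrapper feature selection: each candidate is a
    boolean mask over n_feats features; keep the best-scoring mask seen
    while randomly flipping bits each generation to explore."""
    rng = random.Random(seed)  # seeded for reproducibility
    best, best_fit = None, float("-inf")
    masks = [[rng.random() < 0.5 for _ in range(n_feats)]
             for _ in range(pop)]
    for _ in range(gens):
        for m in masks:
            f = fitness(m)          # e.g. classifier accuracy on a mask
            if f > best_fit:
                best, best_fit = m[:], f
        # flip each bit with 10% probability to explore nearby masks
        masks = [[b ^ (rng.random() < 0.1) for b in m] for m in masks]
    return best

# In the paper's setting, fitness would wrap a classifier trained on the
# hvnLBP features kept by the mask; here we use a trivial stand-in.
mask = select_features(sum, n_feats=5)
```

A real FA variant would move candidates toward brighter (fitter) ones instead of mutating uniformly, which is precisely the behaviour the paper's multi-population design refines.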
