27 research outputs found

    Emotion Synthesis in Virtual Environments

    No full text
    Keywords: MPEG-4 facial animation, facial expressions, emotion synthesis
    Abstract: Man-Machine Interaction (MMI) systems that utilize multimodal information about users' current emotional state are presently at the forefront of interest of the computer vision and artificial intelligence communities. Interfaces with human faces expressing emotions may help users feel at home when interacting with a computer, because faces are accepted as the most expressive means for communicating and recognizing emotions. Thus, emotion synthesis can enhance the atmosphere of a virtual environment and communicate messages far more vividly than any textual or speech information. In this paper, we present an abstract means of describing facial expressions by utilizing concepts included in the MPEG-4 standard, and use it to synthesize expressions with a reduced representation suitable for networked and lightweight applications.
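
    As a concrete illustration of such a reduced representation, the sketch below encodes an expression as a sparse set of MPEG-4 Facial Animation Parameter (FAP) values. The specific FAP numbers, names, and amplitudes are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of a reduced, FAP-based expression representation.
from struct import pack
from typing import Dict

# Sparse profile: only the FAPs the expression actually moves, so it
# stays compact enough for networked, lightweight applications.
# FAP numbers, names, and amplitudes below are illustrative assumptions.
JOY_PROFILE: Dict[int, int] = {
    3: 120,   # open_jaw: mouth slightly open
    6: 180,   # stretch_l_cornerlip: pull left mouth corner outward
    7: 180,   # stretch_r_cornerlip: pull right mouth corner outward
    33: 90,   # raise_l_m_eyebrow
    34: 90,   # raise_r_m_eyebrow
}

def encode_profile(profile: Dict[int, int]) -> bytes:
    """Serialize a sparse FAP profile as big-endian (fap_id, value) pairs."""
    return b"".join(pack(">Hh", fap, val) for fap, val in profile.items())

print(len(encode_profile(JOY_PROFILE)), "bytes for the whole expression")
```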

    Parameterized Facial Expression Synthesis for Videoconferencing Applications

    No full text
    In this paper we propose a method of creating intermediate facial expressions based on primary ones [1]. To achieve this goal we utilize both Facial Definition Parameters (FDPs) and Facial Animation Parameters (FAPs). We introduce a way of modeling the primary expressions using FAPs and describe a rule-based technique for the synthesis of intermediate ones. Furthermore, a relation is established between FAPs and the activation parameter proposed in classic psychological studies. In this way we take advantage of the extensive work done by psychologists, which covers many more expressions than the archetypal ones on which the computer vision community has concentrated. The overall scheme leads to a parameterized approach to synthesizing facial expressions and can be used for the creation of MPEG-4 compatible synthetic video sequences.
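
    A minimal sketch of this kind of rule-based blend, assuming archetypal expressions are stored as sparse FAP-value profiles and the activation parameter scales overall intensity; the profile contents, function names, and blending rule are our illustrative assumptions, not the paper's actual rules.

```python
# Sketch: blend two archetypal FAP profiles into an intermediate one,
# scaled by an activation level. Profile contents and the blending
# rule are illustrative assumptions, not the paper's actual rules.

ANGER = {3: 60, 31: -120, 32: -120}   # jaw, inner eyebrows lowered
FEAR = {3: 140, 31: 160, 32: 160}     # wider jaw, inner eyebrows raised

def intermediate(profile_a, profile_b, t, activation=1.0):
    """Linear blend of two sparse FAP profiles (t in [0, 1]),
    with every FAP scaled by the activation level."""
    faps = set(profile_a) | set(profile_b)
    return {
        fap: round(activation * ((1 - t) * profile_a.get(fap, 0)
                                 + t * profile_b.get(fap, 0)))
        for fap in faps
    }

# Halfway between anger and fear, at moderate activation:
print(intermediate(ANGER, FEAR, t=0.5, activation=0.7))
```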

    An Intermediate Expressions’ Generator System in the MPEG-4 Framework

    No full text
    Abstract. A lifelike human face can enhance interactive applications by providing straightforward feedback to and from the users and stimulating emotional responses from them. An expressive, realistic avatar should not “express himself” in the narrow confines of the six archetypal expressions. In this paper, we present a system which generates intermediate expression profiles (sets of FAPs) by combining profiles of the six archetypal expressions, utilizing concepts included in the MPEG-4 standard.
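
    One way such a generator could combine archetypal profiles is a normalized weighted sum over the six profiles. The sketch below assumes that combination rule; both the profile contents and the weighting scheme are our simplifications, not the system's actual rule set.

```python
# Sketch: generate an intermediate expression profile as a normalized
# weighted combination of the six archetypal profiles. The profile
# contents and the combination rule are assumed simplifications.

ARCHETYPES = {
    "joy":      {6: 180, 7: 180, 33: 90, 34: 90},
    "sadness":  {31: -80, 32: -80, 12: -100, 13: -100},
    "anger":    {3: 60, 31: -120, 32: -120},
    "fear":     {3: 140, 31: 160, 32: 160},
    "disgust":  {4: -90, 9: 120},
    "surprise": {3: 200, 35: 150, 36: 150},
}

def combine(weights):
    """Weighted sum of archetypal FAP profiles, with weights
    normalized so they behave like mixing proportions."""
    total = sum(weights.values()) or 1.0
    out = {}
    for name, w in weights.items():
        for fap, value in ARCHETYPES[name].items():
            out[fap] = out.get(fap, 0) + (w / total) * value
    return {fap: round(v) for fap, v in out.items()}

# A "worried" profile: mostly fear, with a touch of sadness.
print(combine({"fear": 0.7, "sadness": 0.3}))
```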

    Emotion representation for virtual environments

    No full text
    Book title: Proceedings of the 7th International Conference on Telecommunications
    Research on networked applications that utilize multimodal information about their users' current emotional state is presently at the forefront of interest of the computer vision and artificial intelligence communities. Human faces may act as visual interfaces that help users feel at home when interacting with a computer, because faces are accepted as the most expressive means for communicating and recognizing emotions. A lifelike human face can thus enhance interactive applications by providing straightforward feedback to and from the users and stimulating emotional responses from them. Virtual environments can therefore employ believable, expressive characters, since such features significantly enhance the atmosphere of a virtual world and communicate messages far more vividly than any textual or speech information. In this paper, we present an abstract means of describing facial expressions by utilizing concepts included in the MPEG-4 standard. Furthermore, we exploit these concepts to synthesize a wide variety of expressions using a reduced representation, suitable for networked and lightweight applications.