4 research outputs found

    Effects of avatar character performances in virtual reality dramas used for teachers’ education

    Virtual reality drama enhances immersion, which earlier e-Learning systems lacked; moreover, dangerous or expensive educational content can be replaced while still stimulating users’ interest. In this study, we investigate the effects of avatar performance in virtual reality drama. The hypothesis that the psychical distance between virtual characters and their viewers changes with the size of video shots is tested with an autonomic nervous system function test. Eighty-four college students were randomly assigned to three groups. The virtual reality drama, used to train teachers in school bullying prevention, deals with dialogue between teachers and students. Group 1 was shown full-shot video clips, Group 2 was shown a range of clips from full shots to extreme close-ups, and Group 3 was shown close-up shots. We found that viewers’ levels of stimulation changed in relation to shot size: the R-R intervals (between successive R waves) of the electrocardiograms (ECGs, bio-signal feedback) became significantly narrower as the shot size became smaller.
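    The R-R measure described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the R-peak timestamps are made-up sample values, and real ECG peak detection is outside its scope. A narrower mean R-R interval corresponds to a faster heart rate, i.e. higher arousal.

    ```python
    import numpy as np

    def rr_intervals(r_peak_times):
        """R-R intervals: seconds between successive R-wave peaks."""
        return np.diff(np.asarray(r_peak_times, dtype=float))

    # Illustrative R-peak timestamps (seconds); in practice these would
    # come from an ECG R-peak detector.
    full_shot = [0.00, 0.92, 1.85, 2.79]   # slower heart rate (wider R-R)
    close_up = [0.00, 0.78, 1.55, 2.31]    # faster heart rate (narrower R-R)

    print(rr_intervals(full_shot).mean())   # ~0.93 s
    print(rr_intervals(close_up).mean())    # ~0.77 s
    ```

    Comparing the mean interval between viewing conditions, as above, is one simple way such narrowing could be quantified.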

    Towards a full emotional system

    The present paper proposes a system that both classifies a facial expression into one of six categories (Joy, Disgust, Anger, Sadness, Fear, and Surprise) and assigns each expression an intensity (High, Medium, or Low). This is carried out in two independent, parallel processes. Permanent and transient facial features are detected from still images, and pertinent information is extracted about the presence of transient features in specific facial regions and about facial distances computed from the permanent features. Both the classification and the quantification processes are based on transient and permanent features. Belief theory is used in both processes because of its ability to fuse data coming from different sensors. The system outputs a recognized and quantified expression. The quantification process also allows recognizing a new set of expressions derived from the basic ones: by associating the three intensities (low, medium, and high) with each basic expression, three variants per expression are obtained, so a set of eighteen facial expressions is categorized instead of the original six. Experimental results are given to show the classification accuracy of the system.
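    The belief-theory fusion mentioned in the abstract typically relies on Dempster's rule of combination. A minimal sketch follows; the two mass functions (one per feature source), the reduced two-class frame, and all mass values are illustrative assumptions, not the paper's actual evidence model.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dicts: frozenset -> mass) with
        Dempster's rule, renormalizing away the conflicting mass."""
        combined = {}
        conflict = 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:  # compatible evidence: assign product mass to the intersection
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:      # empty intersection: accumulate conflict
                conflict += wa * wb
        if conflict >= 1.0:
            raise ValueError("total conflict: sources are incompatible")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Hypothetical evidence from two sensors (e.g. facial distances vs.
    # transient features) over a reduced two-expression frame.
    JOY, SURPRISE = frozenset({"Joy"}), frozenset({"Surprise"})
    EITHER = JOY | SURPRISE
    m_distances = {JOY: 0.6, EITHER: 0.4}
    m_transient = {JOY: 0.5, SURPRISE: 0.2, EITHER: 0.3}
    fused = dempster_combine(m_distances, m_transient)  # Joy dominates
    ```

    Fused masses sum to one, and agreement between the sources (here on Joy) is reinforced while conflicting mass is normalized out, which is why this rule suits multi-sensor fusion.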