8 research outputs found

    Towards emotional interaction: using movies to automatically learn users’ emotional states

    The HCI community is actively seeking novel methodologies to gain insight into the user's experience during interaction with both the application and the content. We propose an emotion recognition engine capable of automatically recognizing a set of human emotional states using psychophysiological measures of the autonomic nervous system, including galvanic skin response, respiration, and heart rate. A novel pattern recognition system, based on discriminant analysis and support vector machine classifiers, is trained using movie scenes selected to induce emotions ranging across the valence dimension from positive to negative, including happiness, anger, disgust, sadness, and fear. In this paper we introduce an emotion recognition system and evaluate its accuracy by presenting the results of an experiment conducted with three physiological sensors.
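The recognition pipeline described above (discriminant analysis feeding a support vector machine over autonomic-signal features) could be sketched roughly as follows. The feature layout, class labels, and data are illustrative assumptions, not the authors' actual setup:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: rows are viewing sessions, columns are
# summary statistics of GSR, respiration, and heart rate (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = rng.integers(0, 5, size=100)  # 5 labels: happiness, anger, disgust, sadness, fear

# LDA projects onto discriminative axes; the SVM then classifies in that space.
clf = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(n_components=4),
                    SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict(X)
```

In practice each row would hold statistics (e.g. mean and variance) extracted from the GSR, respiration, and heart-rate signals recorded while a participant watched one movie scene.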

    Nonverbal Communication and the Influence of Film Success: A Literature Review

    This literature review focuses on the use of various nonverbal channels in film and explains how nonverbal communication influences the success (critical or commercial) of films. The nonverbal channels, or cues, explored are environment, physical characteristics, gestures, and touch. Within each of these channels, subtopics are examined, including color, sound, physical attractiveness, costume design, and more. Rather than conducting a study testing respondents' physiological reactions to films, this is an extensive literature review supporting the claim that nonverbal cues do in fact influence the success of films, specifically critical success. While each channel could also be described as a "visual cue," they all fall under the general discipline of nonverbal communication and are thus referred to exclusively as nonverbal "cues" or "channels." Influence is directly related to persuasion, and for a film to be successful, audiences must be engaged. This engagement leads moviegoers to rate the film favorably, resulting in more people spending money to view the film (commercial success) and/or writing reviews praising the film's efforts (critical success).

    Effect of horror clips on the physiology of the ANS & heart

    The current study analyzes ECG and HRV parameters to examine the physiology of the ANS and the heart, using data from 20 volunteers under the effect of horror clips. The volunteers had their ECG recorded under normal and horror conditions, watching the same video clips at roughly the same time of day (after dinner hours) to keep the readings uniform and reduce any ambiguity that might be present. Our results showed that the effect of HRV parameters on the physiology of the heart was significant. The raw time-series data, and the time-series data processed with a db-6 wavelet, did not show the horror ECG readings differing much from the normal readings of the same subjects; this was also verified by t-test grouping. The results clearly show that HRV parameters affect the horror ECG readings.
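Since the same 20 volunteers were measured in both conditions, the t-test grouping mentioned above amounts to a paired comparison. A minimal sketch with scipy, using made-up RR-interval values rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject mean RR intervals (ms) under normal and horror
# viewing; the values and the assumed RR shortening are illustrative only.
rng = np.random.default_rng(1)
rr_normal = rng.normal(820, 40, size=20)
rr_horror = rr_normal - rng.normal(25, 10, size=20)  # assumed shortening under horror

# Paired t-test, because each volunteer contributes one reading per condition.
t_stat, p_value = stats.ttest_rel(rr_normal, rr_horror)
```

A significant `p_value` would indicate that the paired difference between conditions is unlikely under the null hypothesis of no change.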

    Sharing Video Emotional Information in the Web

    Video growth over the Internet has changed the way users search, browse, and view video content. Watching movies over the Internet is increasing and becoming a pastime. The possibility of streaming Internet content to TV, together with advances in video compression and video streaming, has made this recent modality of watching movies easy and practical. Web portals, as a worldwide means of multimedia data access, need to have their contents properly classified in order to meet users' needs and expectations. The authors propose a set of semantic descriptors based both on user physiological signals, captured while watching videos, and on low-level video feature extraction. These XML-based descriptors contribute to the creation of automatic affective meta-information that will not only enhance a web-based video recommendation system based on emotional information, but also enhance search and retrieval of videos' affective content from both users' personal classifications and content classifications in the context of a web portal.
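An XML descriptor of the kind the authors propose could look roughly like the sketch below; the element and attribute names are assumptions for illustration, not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical affective descriptor combining one user's physiological
# classification with content-based features for a single video.
desc = ET.Element("affectiveDescriptor", videoId="movie-042")
phys = ET.SubElement(desc, "physiological", user="u17")
ET.SubElement(phys, "emotion", label="sadness", confidence="0.81")
content = ET.SubElement(desc, "contentFeatures")
ET.SubElement(content, "feature", name="motionIntensity").text = "0.34"
ET.SubElement(content, "feature", name="audioEnergy").text = "0.62"

xml_str = ET.tostring(desc, encoding="unicode")
```

A portal could index such descriptors to serve both per-user affective search ("scenes I found sad") and content-level queries over the whole collection.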

    A physiological examination of perceived incorporation during trance [version 2; referees: 2 approved]

    Background: Numerous world cultures believe channeling provides genuine information, and channeling rituals in various forms are regularly conducted in both religious and non-religious contexts. Little is known about the physiological correlates of the subjective experience of channeling. Methods: We conducted a prospective within-subject design study with 13 healthy adult trance channels. Participants alternated between 5-minute blocks of channeling and no-channeling three times while electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR), and respiration were collected on two separate days. Voice recordings of the same story read in channeling and no-channeling states were also analyzed. Results: The pre-laboratory survey data about demographics and the perception of the source, purpose, and utility of channeled information reflected previous reports. Most participants were aware of their experience (rather than in a full trance) and had varying levels of perceived incorporation (i.e. control of their body). Voice analysis showed an increase in voice arousal, and power (dB/Hz) differences in the 125 Hz bins between 0 and 625 Hz and between 3625 and 3875 Hz, when reading during the channeling state versus control. Despite subjective perceptions of distinctly different states, no substantive differences were seen in EEG frequency power, ECG measures, GSR, or respiration. Conclusions: Voice parameters differed between channeling and no-channeling states under rigorous controlled methods, but the other physiological measures collected did not. Considering the subjective and phenomenological differences observed, future studies should include other measures such as EEG connectivity analyses, fMRI, and biomarkers.
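Averaging spectral power in 125 Hz-wide bins, as in the voice analysis above, could be sketched as follows; the sampling rate, test signal, and binning details are assumptions, since the abstract does not specify the analysis parameters:

```python
import numpy as np

# Synthetic 1-second recording: a 220 Hz tone plus noise (assumed stand-in
# for a voice segment; fs is an assumed sampling rate).
fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 220 * t) \
         + 0.1 * np.random.default_rng(2).normal(size=fs)

# Power spectrum via the real FFT; 1-second window gives 1 Hz resolution.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Mean power in 125 Hz bins covering 0-625 Hz.
bin_edges = np.arange(0, 625 + 125, 125)
bin_power = [spectrum[(freqs >= lo) & (freqs < hi)].mean()
             for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]
```

The 220 Hz tone concentrates its power in the 125-250 Hz bin; comparing such binned values between conditions is one plausible reading of the reported per-bin differences.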

    Caractérisation du niveau d’amusement grâce à des techniques d’apprentissage machine

    Introduction. Humour is a complex cognitive process that can result in a positive emotional state of amusement. The emotional response triggered by humour has several health benefits and is used as a treatment in many research studies and clinical trials. Humour appreciation varies greatly between participants and can trigger different levels of emotional response. Unfortunately, research rarely considers these individual differences, which could affect how humour is used in research. Such research would benefit from an objective method of detecting humour appreciation. Objectives. This master's thesis seeks to provide an objective measure of humour appreciation by using machine learning and deep learning techniques to predict how individuals react to humorous videos. Video characteristics, facial expressions, and brain activity were tested as potential predictors of amusement intensity. Study 1. In our first study, participants (n = 40) watched and rated humorous and neutral videos while their facial expressions were recorded. For each video, we computed the average movement, saliency, and semantics associated with the video. A Random Forest classifier was trained on the video characteristics and the participants' smiles to predict how funny the participant rated the video at three moments during the clip (beginning, middle, end). Furthermore, we used the participants' facial expressions to explore the temporal dynamics of humour appreciation throughout the video and its impact on the following video. Our results showed that video characteristics can classify neutral versus humorous videos well but cannot differentiate humour intensities. Smiling, on the other hand, was a good predictor of how funny the humorous videos were rated (contribution = 0.53) and was the only modality to fluctuate over time, showing that humour appreciation is greatest at the end of the video and just after it. Study 2. Our second study used deep learning techniques to predict how funny participants (n = 10) rated humorous videos, recorded with a commercial EEG headset. We used an LSTM algorithm to predict amusement intensities (low, medium, high, very high) from one second of brain activity. Results showed good transferability across participants, with decoding accuracy exceeding 80%. Conclusion. Video characteristics, participants' facial expressions, and brain activity allowed us to predict humour appreciation. Of these three modalities, we found that physiological reactions (facial expression and brain activity) better predict funniness intensities while also offering better temporal precision as to when humour appreciation occurs. Future studies using humour would benefit from including the level of appreciation, measured via smiling or brain activity, as a variable of interest in their experimental protocols.
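Study 1's Random Forest setup could be sketched as below, with synthetic stand-in data; the toy target is deliberately driven by the smile feature, so the learned importances mirror the reported dominance of smiling (contribution of roughly 0.53):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-in: predict a funniness class from three video features
# (movement, saliency, semantics) plus a smile proportion. Data and feature
# layout are assumptions based on the abstract, not the thesis's dataset.
rng = np.random.default_rng(3)
smile = rng.uniform(0, 1, size=200)
video_feats = rng.normal(size=(200, 3))
X = np.column_stack([video_feats, smile])
y = (smile > 0.5).astype(int)  # toy rating driven entirely by smiling

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_  # smile column (index 3) dominates
```

With real data, the per-feature importances would quantify how much the video characteristics versus the smile signal contribute to predicting the rated funniness.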

    Affective characterization of movie scenes based on content analysis and physiological changes

    In this paper, we propose an approach for affective characterization of movie scenes based on the emotions that are actually felt by spectators. Such a representation can be used to characterize the emotional content of video clips in application areas such as affective video indexing and retrieval, and neuromarketing studies. A dataset of 64 different scenes from eight movies was shown to eight participants. While watching these scenes, their physiological responses were recorded. The participants were asked to self-assess their felt emotional arousal and valence for each scene. In addition, content-based audio and video features were extracted from the movie scenes in order to characterize each scene. Degrees of arousal and valence were estimated by a linear combination of features from physiological signals, as well as by a linear combination of content-based features. We showed that a significant correlation exists between the valence and arousal provided by the spectators' self-assessments and the affective grades obtained automatically from either physiological responses or audio-video features. By means of an analysis of variance (ANOVA), the variation across different participants' self-assessments, and across different gender groups' self-assessments, was shown to be significant for both valence and arousal (p-values lower than 0.005). These affective characterization results demonstrate the ability of multimedia features and physiological responses to predict the expected affect of the user in response to emotional video content.
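The linear-combination estimation and its correlation check against self-assessments could be sketched as follows, on synthetic data standing in for the 64 scenes; the features, weights, and noise level are assumptions:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in: 64 scenes, 5 content-based (or physiological) features.
rng = np.random.default_rng(4)
features = rng.normal(size=(64, 5))
true_w = np.array([0.8, -0.3, 0.5, 0.0, 0.2])   # assumed ground-truth weights
arousal = features @ true_w + 0.2 * rng.normal(size=64)  # "self-assessed" arousal

# Fit the linear combination by least squares, then correlate the automatic
# estimates with the self-assessments (as the paper's evaluation does).
w, *_ = np.linalg.lstsq(features, arousal, rcond=None)
estimate = features @ w
r, p = stats.pearsonr(arousal, estimate)
```

A high, significant Pearson `r` here plays the role of the reported correlation between automatic affective grades and the spectators' self-assessments.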

    Multimedia interaction and access based on emotions:automating video elicited emotions recognition and visualization

    Doctoral thesis, Informática (Engenharia Informática), Universidade de Lisboa, Faculdade de Ciências, 2013. Films are an excellent form of art that exploits our affective, perceptual, and intellectual abilities. Technological developments and the trend toward media convergence are turning video into a dominant and pervasive medium, and online video is becoming a growing entertainment activity on the web. Alongside this, physiological measures are making it possible to study additional ways to identify and use emotions in human-machine interaction, multimedia retrieval, and information visualization. The work described in this thesis has two main objectives: to develop an emotion recognition and classification mechanism for video-induced emotions, and to enable emotional movie access and exploration. Regarding the first objective, we explore recognition and classification mechanisms that allow video classification based on emotions and identify each user's emotional states, providing different access mechanisms. We aim to provide video classification and indexing based on the emotions felt by users while watching movies. Concerning the second objective, we focus on emotional movie access and exploration mechanisms to find ways to access and visualize videos based on their emotional properties and on users' emotions and profiles. In this context, we designed a set of methods to access and watch the movies, both at the level of the whole movie collection and at the level of individual movies.
    The automatic recognition mechanism developed in this work allows for the detection of physiological patterns, providing valid individual information about users' emotions while they were watching a specific movie. In addition, the user interface representations and exploration mechanisms proposed and evaluated in this thesis show that more perceptive, satisfactory, and useful visual representations positively influenced the exploration of emotional information in movies. Funded by Fundação para a Ciência e a Tecnologia (FCT, PROTEC SFRH/BD/49475/2009, LASIGE Multiannual Funding, and the VIRUS project, PTDC/EIAEIA/101012/2008).