
    What does touch tell us about emotions in touchscreen-based gameplay?

    This is the post-print version of the Article. The official published version can be accessed from the link below - Copyright @ 2012 ACM. It is posted here by permission of ACM for your personal use. Not for redistribution. Nowadays, more and more people play games on touch-screen mobile phones. This raises an interesting question: does touch behaviour reflect the player’s emotional state? If so, touch behaviour would be a valuable evaluation indicator not only for game designers, but also for real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal, and two levels of valence. The results were promising, reaching between 69% and 77% correct discrimination between the four emotional states. Higher rates (~89%) were obtained for discriminating between two levels of arousal and between two levels of valence.
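    The pipeline described above — extract finger-stroke features, then train a classifier over the four emotional states — can be sketched as follows. This is a minimal illustration, not the authors' actual system: the feature set, the synthetic data, the per-state prototypes, and the nearest-centroid classifier are all assumptions made for the sketch.

    ```python
    import numpy as np

    # Hypothetical finger-stroke features per gameplay window:
    # [stroke speed, stroke length, contact-area proxy, stroke duration]
    rng = np.random.default_rng(0)

    STATES = ["Excited", "Relaxed", "Frustrated", "Bored"]

    def make_samples(center, n=50):
        """Synthetic feature vectors clustered around a per-state prototype."""
        return center + 0.3 * rng.standard_normal((n, len(center)))

    # Illustrative prototypes, e.g. excited play = fast, long, firm strokes.
    centers = {
        "Excited":    np.array([1.0, 1.0, 0.8, 0.2]),
        "Relaxed":    np.array([0.3, 0.6, 0.3, 0.8]),
        "Frustrated": np.array([1.2, 0.4, 1.0, 0.3]),
        "Bored":      np.array([0.2, 0.2, 0.2, 1.0]),
    }

    X = np.vstack([make_samples(c) for c in centers.values()])
    y = np.repeat(np.arange(len(STATES)), 50)

    def fit_centroids(X, y):
        """Nearest-centroid 'training': one mean vector per class."""
        return np.stack([X[y == k].mean(axis=0) for k in np.unique(y)])

    def predict(centroids, X):
        """Assign each sample to the class with the closest centroid."""
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

    centroids = fit_centroids(X, y)
    accuracy = (predict(centroids, X) == y).mean()
    print(f"training accuracy: {accuracy:.2f}")
    ```

    The paper's reported 69–77% four-way accuracy suggests the real feature clusters overlap far more than these synthetic ones; any serious replication would also need held-out test data rather than training accuracy.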

    On Passion and Sports Fans: A Look at Football

    The purpose of the present research was to test the applicability of the Dualistic Model of Passion (Vallerand et al., 2003) to being a sport (football) fan. The model posits that passion is a strong inclination toward an activity that individuals like (or even love), that they value, and in which they invest time and energy. Furthermore, two types of passion are proposed: harmonious and obsessive passion. While obsessive passion entails an uncontrollable urge to engage in the passionate activity, harmonious passion entails a sense of volition while engaging in the activity. Finally, the model posits that harmonious passion leads to more adaptive outcomes than obsessive passion. Three studies provided support for this dualistic conceptualization of passion. Study 1 showed that harmonious passion was positively associated with adaptive behaviours (e.g., celebrating the team’s victory), while obsessive passion was positively associated with maladaptive behaviours (e.g., risking one’s employment to go to the team’s game). Study 2 used a short Passion Scale and showed that harmonious passion was positively related to the positive affective life of fans during the 2006 FIFA World Cup, psychological health (self-esteem and life satisfaction), and public displays of adaptive behaviours (e.g., celebrating one’s team’s victory in the streets), while obsessive passion was predictive of maladaptive affective life (e.g., hating the opposing team’s fans) and behaviours (e.g., mocking the opposing team’s fans). Finally, Study 3 examined the role of obsessive passion as a predictor of conflict with one’s partner, which in turn undermined the partner’s relationship satisfaction. Overall, the present results provided support for the Dualistic Model of Passion. The conceptual and applied implications of the findings are discussed.

    Music Alters Visual Perception

    Background: Visual perception is not a passive process: in order to efficiently process visual input, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. However, perception is not influenced by previous knowledge alone; the perception of emotional stimuli, especially, is influenced by the emotional state of the observer. In other words, how we perceive the world depends not only on what we know of the world, but also on how we feel. In this study, we further investigated the relation between mood and perception. Methods and Findings: We had observers perform a difficult stimulus-detection task, in which they had to detect schematic happy and sad faces embedded in noise. Mood was manipulated by means of music. We found that observers were more accurate in detecting faces congruent with their mood, corroborating earlier research. However, in trials in which no actual face was presented, observers made a significant number of false alarms. The content of these false alarms, or illusory percepts, was strongly influenced by the observers’ mood. Conclusions: As illusory percepts are believed to reflect the content of internal representations that are employed by the brain during top-down processing of visual input, we conclude that top-down modulation of visual processing is not purely predictive in nature: mood, in this case manipulated by music, may also directly alter the way we perceive the world.

    The Lsm2-8 complex determines nuclear localization of the spliceosomal U6 snRNA

    Lsm proteins are ubiquitous, multifunctional proteins that are involved in the processing and/or turnover of many, if not all, RNAs in eukaryotes. They generally interact only transiently with their substrate RNAs, in keeping with their likely roles as RNA chaperones. The spliceosomal U6 snRNA is an exception, being stably associated with the Lsm2-8 complex. The U6 snRNA is generally considered to be intrinsically nuclear, but the mechanism of its nuclear retention has not been demonstrated, although La protein has been implicated. We show here that the complete Lsm2-8 complex is required for nuclear accumulation of U6 snRNA in yeast. Therefore, just as Sm proteins effect nuclear localization of the other spliceosomal snRNPs, the Lsm proteins mediate U6 snRNP localization, except that nuclear retention is the likely mechanism in the case of the U6 snRNP. La protein, which binds only transiently to the nascent U6 transcript, has a smaller, apparently indirect, effect on U6 localization that is compatible with its proposed role as a chaperone in facilitating U6 snRNP assembly.

    The power of pictures: Vertical picture angles in power pictures

    Conventional wisdom suggests that variations in vertical picture angle cause the subject to appear more powerful when depicted from below and less powerful when depicted from above. However, do the media actually use such associations to represent individual differences in power? We argue that the diverse perspectives of evolutionary, social learning, and embodiment theories all suggest that the association between verticality and power is relatively automatic and should, therefore, be visible in the portrayal of powerful and powerless individuals in the media. Four archival studies (with six samples) provide empirical evidence for this hypothesis and indicate that a salient power context reinforces this effect. In addition, two experimental studies confirm these effects for individuals producing media content. We discuss potential implications of this effect.

    Laugh Like You Mean It: Authenticity Modulates Acoustic, Physiological and Perceptual Properties of Laughter

    Several authors have recently presented evidence for perceptual and neural distinctions between genuine and acted expressions of emotion. Here, we describe how differences in authenticity affect the acoustic and perceptual properties of laughter. In an acoustic analysis, we contrasted spontaneous, authentic laughter with volitional, fake laughter, finding that spontaneous laughter was higher in pitch, longer in duration, and had different spectral characteristics from volitional laughter produced under full voluntary control. In a behavioral experiment, listeners perceived spontaneous and volitional laughter as distinct in arousal, valence, and authenticity. Multiple regression analyses further revealed that acoustic measures could significantly predict these affective and authenticity judgements, with the notable exception of authenticity ratings for spontaneous laughter. The combination of acoustic predictors differed according to the laughter type, with volitional laughter ratings uniquely predicted by harmonics-to-noise ratio (HNR). To better understand the role of HNR in terms of the physiological effects on vocal tract configuration as a function of authenticity during laughter production, we ran an additional experiment in which phonetically trained listeners rated each laugh for breathiness, nasality, and mouth opening. Volitional laughter was found to be significantly more nasal than spontaneous laughter, and the item-wise physiological ratings also significantly predicted the affective judgements obtained in the first experiment. Our findings suggest that, as an alternative to traditional acoustic measures, ratings of phonatory and articulatory features can be useful descriptors of the acoustic qualities of nonverbal emotional vocalizations, and of their perceptual implications.

    On the reciprocal interaction between believing and feeling: an adaptive agent modelling perspective

    An agent’s beliefs usually depend on informational or cognitive factors such as observation, received communication, or reasoning, but affective factors may also play a role. In this paper, by adopting neurological theories on the role of emotions and feelings, an agent model is introduced that incorporates the interaction between cognitive and affective factors in believing. The model describes how the strength of a belief may depend not only on information obtained, but also on the emotional responses to the belief. For feeling emotions, a recursive body loop between preparations for emotional responses and feelings is assumed. The model introduces a second feedback loop for the interaction between feeling and belief. The strength of the belief and of the feeling both result from the converging dynamic pattern modelled by the combination of the two loops. For some specific cases it is described, for example, how for certain personal characteristics an optimistic world view is generated in the agent’s beliefs, or, for other characteristics, a pessimistic world view. Moreover, the paper shows how such affective effects on beliefs can emerge and become stronger over time due to experiences obtained. It is shown how, based on Hebbian learning, a connection from feeling to belief can develop. As these connections affect the strengths of future beliefs, in this way an effect of judgment ‘by experience built up in the past’ or ‘by gut feeling’ can be obtained. Some example simulation results and a mathematical analysis of the equilibria are presented.
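    The two coupled loops and the Hebbian adaptation described above can be sketched numerically. This is a minimal illustration under stated assumptions — the simple linear update rules, all parameter values, and the saturating Hebbian rule with decay are choices made for the sketch, not the authors' exact model equations.

    ```python
    def simulate(steps=200, dt=0.1, eta=0.05, mu=0.01, info=0.6):
        """Toy simulation of belief-feeling dynamics with Hebbian learning.

        info : strength of the incoming (cognitive) information for the belief
        eta  : Hebbian learning rate; mu : decay ("forgetting") rate
        """
        belief = feeling = prep = 0.0   # activation levels, kept in [0, 1]
        w_feel_belief = 0.1             # adaptive connection: feeling -> belief

        for _ in range(steps):
            # Body loop: the belief triggers preparation of an emotional
            # response, which in turn generates the associated feeling.
            prep    += dt * (belief - prep)
            feeling += dt * (prep - feeling)

            # Second loop: belief strength combines incoming information
            # with the current feeling, weighted by the learned connection.
            target  = min(1.0, info + w_feel_belief * feeling)
            belief += dt * (target - belief)

            # Hebbian update: co-activation of feeling and belief strengthens
            # the connection (saturating at 1), while mu slowly decays it.
            w_feel_belief += dt * (eta * feeling * belief * (1 - w_feel_belief)
                                   - mu * w_feel_belief)

        return belief, feeling, w_feel_belief

    b, f, w = simulate()
    print(f"belief={b:.2f}, feeling={f:.2f}, learned weight={w:.2f}")
    ```

    Run repeatedly with the final weight fed back in as the starting weight, this kind of loop reproduces the qualitative effect the abstract describes: repeated co-activation of feeling and belief strengthens the feeling-to-belief connection, so later beliefs are increasingly coloured by "gut feeling".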

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.