
    Plug-in to fear: game biosensors and negative physiological responses to music

    The games industry is beginning to embark on an ambitious journey into the world of biometric gaming in search of more exciting and immersive gaming experiences. Whether or not biometric game technologies hold the key to unlocking the “ultimate gaming experience” hinges not only on technological advancement but also on the game industry’s understanding of physiological responses to stimuli of different kinds, and its ability to interpret physiological data in terms of indicative meaning. With reference to horror genre games and music in particular, this article reviews some of the scientific literature relating to specific physiological responses induced by “fearful” or “unpleasant” musical stimuli, and considers some of the challenges facing the games industry in its quest for the ultimate “plugged-in” experience.

    Predicting continuous conflict perception with Bayesian Gaussian processes

    Conflict is one of the most important phenomena of social life, but it is still largely neglected by the computing community. This work proposes an approach that detects common conversational social signals (loudness, overlapping speech, etc.) and predicts the conflict level perceived by human observers in continuous, non-categorical terms. The proposed regression approach is fully Bayesian and adopts Automatic Relevance Determination to identify the social signals that most influence the outcome of the prediction. The experiments are performed on the SSPNet Conflict Corpus, a publicly available collection of 1430 clips extracted from televised political debates (roughly 12 hours of material for 138 subjects in total). The results show that it is possible to achieve a correlation close to 0.8 between actual and predicted conflict perception.
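
    The abstract does not include an implementation; the snippet below is a minimal sketch of Gaussian process regression with Automatic Relevance Determination, using scikit-learn's type-II maximum-likelihood fitting as a stand-in for the paper's fully Bayesian treatment. The feature names and data are illustrative stand-ins, not the SSPNet corpus.

```python
# Sketch of GP regression with Automatic Relevance Determination (ARD):
# an anisotropic RBF kernel learns one length-scale per feature, so features
# the model can safely ignore end up with large length-scales.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
feature_names = ["loudness", "overlap_ratio", "turn_rate"]  # hypothetical features
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)  # toy conflict scores

# One length-scale per dimension (anisotropic RBF) implements ARD.
kernel = RBF(length_scale=np.ones(3)) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Short learned length-scales mark influential signals; long ones, irrelevant ones.
for name, ls in zip(feature_names, gp.kernel_.k1.length_scale):
    print(f"{name}: length-scale = {ls:.2f}")
```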

    How to Do Things Without Words: Infants, utterance-activity and distributed cognition

    Clark and Chalmers (1998) defend the hypothesis of an ‘Extended Mind’, maintaining that beliefs and other paradigmatic mental states can be implemented outside the central nervous system or body. Aspects of the problem of ‘language acquisition’ are considered in the light of the extended mind hypothesis. Rather than ‘language’ as typically understood, the object of study is something called ‘utterance-activity’, a term of art intended to refer to the full range of kinetic and prosodic features of the on-line behaviour of interacting humans. It is argued that utterance-activity is plausibly regarded as jointly controlled by the embodied activity of interacting people, and that it contributes to the control of their behaviour. By means of specific examples it is suggested that this complex joint control facilitates easier learning of at least some features of language. This in turn suggests a striking form of the extended mind, in which infants’ cognitive powers are augmented by those of the people with whom they interact.

    Evaluation of product sound design within the context of emotion design and emotional branding

    Thesis (Master)--Izmir Institute of Technology, Industrial Design, Izmir, 2005. Includes bibliographical references (leaves 111-122). Text in English; abstracts in Turkish and English. xi, 127 leaves.

    The main purpose of this thesis is to set out the relationships between the work of product designers and the perceptions of customers regarding the acceptability of product sounds. Product design that provides aesthetic appeal, pleasure, and satisfaction can greatly influence the success of a product. Sound, as a cognitive artifact, plays a significant role in the cognition of product interaction and in shaping a product's identity. This thesis reviews emotion theories and their application to sound design and sound quality modeling, the measurement of emotional responses to sound, and the relationship between psycho-acoustical sound descriptions and emotions. In addition, the effects of sound on emotionally significant brands are evaluated so as to examine marketing value. One of the main purposes of chapter 2 is to provide background on psychoacoustics, since product sound quality rests on an understanding of the underlying psychoacoustic phenomena. Perception, particularly sound perception and its elements, is described in chapter 2: starting with a description of the sound wave and how the ear works, sound perception and auditory sensation are then reviewed. In chapter 3, the product sound quality concept and its evaluation principles are reviewed, since understanding the coupling between acoustic perception and product design requires knowledge of the general principles of product sound quality. Chapter 4 consists of two main sections. The first examines how emotion acts as a delighter in product design, in order to better understand the customer and user experiences that affect pleasurability. In the second section, emotion is evaluated through sound design, and a qualitative evaluation is carried out to examine cognition and emotion in sound perception. Chapter 5 leads the reader through emotional branding: sounds that carry the brand's identity are evaluated there, and sound design is re-evaluated as a marketing strategy and examined through several examples. Keywords: product sound design, psychoacoustics, product sound quality, emotion design, emotional branding.

    Voice and speech perception in autism: a systematic review

    Autism spectrum disorders (ASD) are characterized by persistent impairments in social communication and interaction, and by restricted and repetitive behavior. In the original description of autism by Kanner (1943), the presence of emotional impairments was already emphasized (self-absorbed, emotionally cold, distanced, and retracted). However, little research has focused on the auditory perception of vocal emotional cues; audio-visual comprehension has most commonly been explored instead. Like faces, voices play an important role in the social interaction contexts in which individuals with ASD show impairments. The aim of the current systematic review was to integrate evidence from behavioral and neurobiological studies for a more comprehensive understanding of voice processing abnormalities in ASD. Among the different types of information that the human voice may provide, we hypothesize particular deficits in the processing of vocal affect information by individuals with ASD. The relationship between vocal stimuli impairments and disrupted Theory of Mind in autism is discussed. Moreover, because ASD are characterized by deficits in social reciprocity, the abnormal oxytocin system in individuals with ASD is further discussed as a possible biological marker for abnormal vocal affect information processing and social interaction skills in the ASD population.

    Examining Relationships Between Basic Emotion Perception and Musical Training in the Prosodic, Facial, and Lexical Channels of Communication and in Music

    Research has suggested that intensive musical training may result in transfer effects from musical to non-musical domains. There is considerable research on perceptual and cognitive transfer effects associated with music, but comparatively few studies have examined relationships between musical training and emotion processing. Preliminary findings, though equivocal, suggested that musical training is associated with enhanced perception of emotional prosody, consistent with a growing body of research demonstrating relationships between music and speech. In addition, few studies have directly examined the relationship between musical training and the perception of emotions expressed in music, and no studies have directly evaluated this relationship in the facial and lexical channels of emotion communication. In an effort to expand on prior findings, the current study characterized emotion perception differences between musicians and non-musicians in the prosodic, lexical, and facial channels of communication and in music. A total of 119 healthy adults (18-40 years old) completed the study; 58 were musicians and 61 were controls. Participants were screened for neurological and psychiatric illness. They completed emotion perception tasks from the New York Emotion Battery (Borod, Welkowitz, & Obler, 1992) and a music emotion perception task created for this project using stimuli developed by Eerola and Vuoskoski (2011). They also completed multiple non-emotional control measures, as well as neuropsychological and self-report measures, in order to control for any relevant participant group differences. Parametric and non-parametric statistical procedures were employed to evaluate group differences in accuracy on each of the control tasks. Parametric and non-parametric procedures were also used to evaluate whether musicians and non-musicians differed in their perception of basic emotions. There was evidence for differences in emotion perception between musicians and non-musicians: musicians were more accurate than non-musicians for the prosodic channel and for musical emotions, while there were no group differences for the lexical or facial channels of emotion communication. When error patterns were examined, musicians and non-musicians were found to make similar patterns of misidentifications, suggesting that the two groups were processing emotions similarly. Results are discussed in the context of theories of music and speech, emotion perception processing, and learning transfer. This work serves to clarify and strengthen prior research demonstrating relationships between music and speech. It also has implications for understanding emotion perception, as well as potential clinical implications, particularly for neurorehabilitation. Lastly, this work serves to guide future research on music and emotion processing.
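
    As a generic sketch of the kind of two-group comparison the abstract describes (a parametric test alongside its non-parametric counterpart on per-participant accuracy), with synthetic scores standing in for the study's actual data:

```python
# Parametric (Welch's t-test) and non-parametric (Mann-Whitney U) comparisons
# of emotion-recognition accuracy between two groups. Data are synthetic
# stand-ins; only the group sizes mirror the abstract (58 vs. 61).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
musicians = rng.normal(loc=0.82, scale=0.06, size=58)  # hypothetical accuracies
controls = rng.normal(loc=0.76, scale=0.07, size=61)

t, p_t = stats.ttest_ind(musicians, controls, equal_var=False)
u, p_u = stats.mannwhitneyu(musicians, controls, alternative="two-sided")
print(f"Welch t = {t:.2f} (p = {p_t:.3f}); Mann-Whitney U = {u:.0f} (p = {p_u:.3f})")
```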

    Cracking the social code of speech prosody using reverse correlation

    Human listeners excel at forming high-level social representations about each other, even from the briefest of utterances. In particular, pitch is widely recognized as the auditory dimension that conveys most of the information about a speaker's traits, emotional states, and attitudes. While past research has primarily looked at the influence of mean pitch, almost nothing is known about how intonation patterns, i.e., finely tuned pitch trajectories around the mean, may determine social judgments in speech. Here, we introduce an experimental paradigm that combines state-of-the-art voice transformation algorithms with psychophysical reverse correlation and show that two of the most important dimensions of social judgments, a speaker's perceived dominance and trustworthiness, are driven by robust and distinguishing pitch trajectories in short utterances like the word "Hello," which remained remarkably stable whether male or female listeners judged male or female speakers. These findings reveal a unique communicative adaptation that enables listeners to infer social traits regardless of speakers' physical characteristics, such as sex and mean pitch. By characterizing how any given individual's mental representations may differ from this generic code, the method introduced here opens avenues to explore dysprosody and social-cognitive deficits in disorders like autism spectrum disorder and schizophrenia. In addition, once derived experimentally, these prototypes can be applied to novel utterances, thus providing a principled way to modulate personality impressions in arbitrary speech signals.
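
    As a minimal sketch of the reverse-correlation logic described above, with a simulated listener and illustrative numbers standing in for the voice-transformation pipeline: random pitch perturbations are imposed on a base utterance, the listener picks the variant that sounds more "dominant", and averaging the chosen perturbations recovers the internal pitch-trajectory prototype.

```python
# Psychophysical reverse correlation, first-order estimate: the mean of the
# chosen noise samples is proportional to the listener's internal template.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_segments = 500, 6  # pitch sampled at 6 points along the utterance
true_kernel = np.array([0.0, 0.5, 1.0, 0.5, -0.5, -1.0])  # hypothetical template

chosen = np.zeros(n_segments)
for _ in range(n_trials):
    a, b = rng.normal(scale=1.0, size=(2, n_segments))  # two random contours (semitones)
    # Simulated listener: prefers the contour that better matches the template.
    pick = a if a @ true_kernel > b @ true_kernel else b
    chosen += pick

estimated_kernel = chosen / n_trials  # approximates the shape of true_kernel
print(np.round(estimated_kernel, 2))
```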

    Higher Right Hemisphere Gamma Band Lateralization and Suggestion of a Sensitive Period for Vocal Auditory Emotional Stimuli Recognition in Unilateral Cochlear Implant Children: An EEG Study

    In deaf children, great emphasis has been placed on language; however, the decoding and production of emotional cues are of pivotal importance for communication. Concerning the neurophysiological correlates of emotional processing, gamma band activity appears to be a useful tool for emotion classification and is related to the conscious elaboration of emotions. Starting from these considerations, the following questions were investigated: (i) whether the processing of emotional auditory stimuli differs between normal-hearing (NH) children and children using a cochlear implant (CI), given the non-physiological development of the auditory system in the latter group; (ii) whether the age at CI surgery influences emotion recognition capabilities; and (iii) in light of the right hemisphere hypothesis for emotional processing, whether the CI side influences the processing of emotional cues in unilateral CI (UCI) children. To address these questions, 9 UCI (9.47 ± 2.33 years old) and 10 NH (10.95 ± 2.11 years old) children were asked to recognize nonverbal vocalizations belonging to three emotional states: positive (achievement, amusement, contentment, relief), negative (anger, disgust, fear, sadness), and neutral (neutral, surprise). Results showed better recognition of emotional states in NH than in UCI children. The UCI group showed an increased gamma activity lateralization index (LI) (relatively higher right-hemisphere activity) in comparison to the NH group in response to emotional auditory cues. Moreover, LI gamma values were negatively correlated with the percentage of correct responses in emotion recognition. Such observations could be explained by a deficit in UCI children in engaging the left hemisphere for a more demanding emotional task, or alternatively by a higher conscious elaboration in UCI than in NH children. Additionally, for the UCI group, there was no difference in gamma activity between the CI side and the contralateral side, but gamma activity was higher in the right than in the left hemisphere; therefore, the CI side did not appear to influence the physiological hemispheric lateralization of emotional processing. Finally, a negative correlation was found between age at CI surgery and the percentage of correct responses in emotion recognition, suggesting a sensitive period during which CI surgery best supports the development of emotion recognition skills.
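
    As a sketch of the gamma lateralization index computation described above: band-pass the EEG in the gamma range, compute per-hemisphere power, and take LI = (right − left) / (right + left), positive for right-hemisphere dominance. The sampling rate, band edges, and channel groupings are assumptions, and the signals here are synthetic.

```python
# Gamma-band lateralization index (LI) from per-hemisphere EEG power.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0  # sampling rate (Hz), assumed
b, a = butter(4, [30.0, 45.0], btype="bandpass", fs=fs)  # gamma band, assumed edges

def gamma_power(eeg_channels):
    """Mean gamma-band power over an array of shape (n_channels, n_samples)."""
    filtered = filtfilt(b, a, eeg_channels, axis=-1)
    return float(np.mean(filtered ** 2))

rng = np.random.default_rng(3)
left = rng.normal(size=(4, 2500))         # synthetic left-hemisphere channels
right = 1.2 * rng.normal(size=(4, 2500))  # synthetic right-hemisphere channels

p_left, p_right = gamma_power(left), gamma_power(right)
li = (p_right - p_left) / (p_right + p_left)
print(f"gamma LI = {li:.2f}")  # > 0 indicates right lateralization
```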