Emotions play a fundamental role in human communication. Music and films in particular are capable of eliciting emotions which unfold and vary over time. However, in order to communicate emotions with sounds, a) subjects should consistently and reliably associate a sound with a certain emotion, independent of what happened before, and b) sounds should be perceived similarly by different subjects. We presented subjects with a series of sounds from the International Affective Digitized Sounds database which changed every 5 seconds. Listeners rated the elicited valence and arousal using the real-time measurement software EMuJoy. After initial training, they rated the same sound sequence twice on the first day and once on the following day. We also played the sounds of this sequence in reverse order to investigate context dependence and possible series effects. We found high intra-rater correlations of 0.79 (IQR: 0.13) for valence and 0.77 (IQR: 0.10) for arousal. We found no significant effect of the order in which the sounds were presented, and no series effects. Inter-rater correlations were still at about 0.60 (IQR: 0.23) for valence and 0.52 (IQR: 0.27) for arousal. Elicited emotions were generally more consistent for extreme values of valence and arousal. Thus, at least these sounds could be used to reliably communicate emotions. However, there may be other stimuli which require less interpretation and are therefore more suitable for fast and reliable communication of emotions.
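The abstract summarizes agreement as the median and IQR of pairwise correlations between rating time series. As a minimal sketch of how such a summary could be computed (on hypothetical data; the function name `rating_agreement` and the simulated ratings are our own, not from the study):

```python
import numpy as np

def rating_agreement(ratings):
    """Summarize agreement between raters' continuous rating curves.

    ratings: 2-D array of shape (n_raters, n_timepoints), e.g. valence
    sampled while sounds change every 5 seconds.  Returns the median and
    IQR of all pairwise Pearson correlations between raters, analogous
    to how the abstract reports inter-rater agreement.
    """
    r = np.corrcoef(ratings)            # rater-by-rater correlation matrix
    iu = np.triu_indices_from(r, k=1)   # upper triangle: each distinct pair once
    pairwise = r[iu]
    q1, med, q3 = np.percentile(pairwise, [25, 50, 75])
    return med, q3 - q1                 # median correlation and its IQR

# Hypothetical example: three raters tracking a shared trend with noise
rng = np.random.default_rng(0)
trend = np.sin(np.linspace(0, 4 * np.pi, 60))
raters = trend + 0.3 * rng.standard_normal((3, 60))
med, iqr = rating_agreement(raters)
```

The same computation applied within one rater's repeated sessions (instead of across raters) would yield the intra-rater figures reported above.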