
    Crowdsourcing Linked Data on listening experiences through reuse and enhancement of library data

    Research has approached the practice of musical reception in a multitude of ways, such as the analysis of professional critique, sales figures and psychological processes activated by the act of listening. Studies in the Humanities, on the other hand, have been hindered by the lack of structured evidence of actual experiences of listening as reported by the listeners themselves, a concern that has been voiced since the early Web era. It was, however, assumed that such evidence existed, albeit in pure textual form, but could not be leveraged until it was digitised and aggregated. The Listening Experience Database (LED) responds to this research need by providing a centralised hub for evidence of listening in the literature. Not only does LED support search and reuse across nearly 10,000 records, but it also provides machine-readable structured data of the knowledge around the contexts of listening. To take advantage of the mass of formal knowledge that already exists on the Web concerning these contexts, the entire framework adopts Linked Data principles and technologies. This also allows LED to directly reuse open data from the British Library for the source documentation that is already published. Reused data are re-published as open data with enhancements obtained by expanding over the model of the original data, such as the partitioning of published books and collections into individual stand-alone documents. The database was populated through crowdsourcing and seamlessly incorporates data reuse from the very early data entry phases. As the sources of the evidence often contain vague, fragmentary or uncertain information, facilities were put in place to generate structured data out of such fuzziness. Alongside elaborating on these functionalities, this article provides insights into the most recent features of the latest instalment of the dataset and portal, such as the interlinking with the MusicBrainz database, the relaxation of geographical input constraints through text mining, and the plotting of key locations in an interactive geographical browser.
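
    As a rough illustration of the Linked Data approach described above, the sketch below expresses a single listening-experience record as RDF in Python with rdflib and links it to an external MusicBrainz identifier. The namespace, property names and identifiers are illustrative assumptions, not the LED project's actual vocabulary.

```python
# A hypothetical listening-experience record expressed as RDF triples with rdflib.
# The "led" namespace and all property names below are illustrative placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

LED = Namespace("http://example.org/led/")               # placeholder vocabulary
MB_ARTIST = Namespace("https://musicbrainz.org/artist/")  # MusicBrainz artist URIs

g = Graph()
exp = URIRef("http://example.org/led/experience/42")      # placeholder record URI

g.add((exp, RDF.type, LED.ListeningExperience))
g.add((exp, LED.listener, Literal("Anonymous diarist")))
g.add((exp, LED.date, Literal("1893-05-01", datatype=XSD.date)))
g.add((exp, LED.sourceText, Literal("After dinner we heard the quartet played in the drawing room.")))
# Link to an external, already-published identifier (placeholder MusicBrainz ID).
g.add((exp, LED.performedWorkBy, MB_ARTIST["00000000-0000-0000-0000-000000000000"]))

print(g.serialize(format="turtle"))
```

    Serialising the graph as Turtle shows how such records could be re-published as open data and interlinked with existing datasets, in the spirit of the approach outlined in the abstract.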

    Inside the Loop: The Audio Functionality of Inside

    The manner in which soundscapes evolve and change during gameplay can have many implications for player experience. INSIDE (Playdead, in INSIDE, released on Microsoft Windows, PlayStation 4, Xbox One, Nintendo Switch and iOS, 2016) features a gameplay section in which rhythmic audio cues loop continuously both during gameplay and after player death. This paper uses this aspect of the soundtrack as a case study, examining the effects of looping sound effects and abstract musical cues on player immersion, ludic functionality, and episodic engagement. The concept of spectromorphology proposed by Smalley (Organised Sound 2(2):107–126, 1997) is used to analyse the way in which musical cues can retain ludic functionality and promote immersion in the absence of diegetic sound design. The “musical suture” (Kamp, in: Ludomusicology: approaches to video game music, Equinox, Sheffield, 2016) created by continuously looping audio during death and respawn is also examined with regard to immersing the player within an evolving soundscape.

    From Motion to Emotion : Accelerometer Data Predict Subjective Experience of Music

    Music is often described as emotional because it reflects expressive movements in audible form. Thus, a valid approach to measuring musical emotion could be to assess movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for the prediction of ratings on the Geneva Emotional Music Scale (GEMS-9). The quality of prediction for the different GEMS dimensions varied between experiments: tenderness (R² = 0.50 in the first experiment, R² = 0.39 in the second), nostalgia (0.42 vs. 0.30), wonder (0.25 vs. 0.34), sadness (0.24 vs. 0.35), peacefulness (0.20 vs. 0.35), joy (0.19 vs. 0.33) and transcendence (0.14 vs. 0.00). For others, such as power (0.42 vs. 0.49) and tension (0.28 vs. 0.27), the results were almost reproduced across experiments. Furthermore, we extracted two principal components from the GEMS ratings, one representing the arousal and the other the valence of the experienced feeling. Both qualities, arousal and valence, could be predicted from the acceleration data, indicating that they provide information on both the quantity and the quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings. On the other hand, they contribute to integrating findings from the field of embodied music cognition into music recommender systems.
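
    To make the general pipeline concrete, the sketch below (simulated data and illustrative feature choices, not the study's actual method) summarizes free-movement accelerometer recordings into simple descriptors, regresses each GEMS-9 dimension on them with cross-validated R², and extracts two principal components of the ratings.

```python
# Illustrative sketch: predicting GEMS-style ratings from accelerometer summaries.
# All data here are simulated; feature choices and model are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_samples = 120, 500

# Simulated accelerometer recordings: (trials, time samples, x/y/z axes)
acc = rng.normal(size=(n_trials, n_samples, 3))
# Simulated GEMS-9 ratings per trial
gems = rng.normal(size=(n_trials, 9))
gems_names = ["wonder", "transcendence", "tenderness", "nostalgia",
              "peacefulness", "power", "joy", "tension", "sadness"]

def movement_features(a):
    """Simple per-trial descriptors: overall energy, jerkiness, per-axis variance."""
    mag = np.linalg.norm(a, axis=-1)        # acceleration magnitude per sample
    jerk = np.diff(mag, axis=-1)            # rate of change of magnitude
    return np.column_stack([
        mag.mean(axis=1), mag.std(axis=1),
        np.abs(jerk).mean(axis=1),
        a.var(axis=1).mean(axis=1),
    ])

X = movement_features(acc)

# One regression model per GEMS dimension, reporting cross-validated R^2
for i, name in enumerate(gems_names):
    r2 = cross_val_score(Ridge(alpha=1.0), X, gems[:, i], cv=5, scoring="r2").mean()
    print(f"{name:14s} R^2 = {r2:+.2f}")

# Two principal components of the ratings, often interpreted as arousal and valence
pca = PCA(n_components=2).fit(gems)
print("explained variance ratios:", pca.explained_variance_ratio_)
```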

    Mapping a beautiful voice : theoretical considerations

    The prime purpose of this paper is to draw on a range of diverse literatures to clarify those elements that are perceived to constitute a ‘beautiful’ sung performance. The text rehearses key findings from existing literatures in order to determine the extent to which particular elements might appear the most salient for an individual listener and also ‘quantifiable’ (in the sense of being open to empirical study). The paper concludes with a theoretical framework for the elements that are likely to construct and shape our responses to particular sung performances.

    Reciprocal Modulation of Cognitive and Emotional Aspects in Pianistic Performances

    Background: High-level piano performance requires complex integration of perceptual, motor, cognitive and emotive skills. Observations in psychology and neuroscience studies have suggested reciprocal inhibitory modulation of cognition by emotion and of emotion by cognition. However, it is still unclear how cognitive states may influence pianistic performance. The aim of the present study is to verify the influence of cognitive and affective attention on piano performances. Methods and Findings: Nine pianists were instructed to play the same piece of music, first focusing only on the cognitive aspects of the musical structure (cognitive performances) and then paying attention solely to affective aspects (affective performances). Audio files from the pianistic performances were examined using a computational model that retrieves nine specific musical features (descriptors): loudness, articulation, brightness, harmonic complexity, event detection, key clarity, mode detection, pulse clarity and repetition. In addition, the number of errors made by the volunteers in the recording sessions was counted. Comments from the pianists about their thoughts during the performances were also evaluated. The analyses of the audio files through these musical descriptors indicated that the affective performances featured more agogic variation, more legato articulation and more piano (soft) phrasing, together with a lower perceived event density, when compared to the cognitive ones. Error analysis demonstrated that volunteers misplayed more left-hand notes in the cognitive performances than in the affective ones. Volunteers also played more wrong notes in affective than in cognitive performances. These results correspond to the volunteers' comments that in the affective performances the cognitive aspects of piano execution are inhibited, whereas in the cognitive performances the expressiveness is inhibited. Conclusions: Therefore, the present results indicate that attention to the emotional aspects of performance enhances expressiveness but constrains cognitive and motor skills in piano execution. In contrast, attention to the cognitive aspects may constrain the expressivity and automatism of piano performances.
    Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grants 08/54844-7 and 07/59826-4.
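
    As an illustration of the kind of audio descriptors listed above, the sketch below computes approximate analogues of loudness, brightness, event density and pulse clarity with librosa. The study itself used a different computational model; the file name and the proxy formulas here are assumptions.

```python
# Rough librosa-based analogues of a few descriptors named in the abstract.
# These are approximations only, not the model used in the study.
import numpy as np
import librosa

y, sr = librosa.load("performance.wav")   # hypothetical recording of one take

rms = librosa.feature.rms(y=y)[0]                             # loudness proxy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]   # brightness proxy

onset_env = librosa.onset.onset_strength(y=y, sr=sr)
onsets = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr)
event_density = len(onsets) / (len(y) / sr)                   # onsets per second

_, beats = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
pulse_clarity = onset_env[beats].mean() / (onset_env.mean() + 1e-9)  # crude proxy

print(f"mean RMS:        {rms.mean():.4f}")
print(f"mean brightness: {centroid.mean():.1f} Hz")
print(f"event density:   {event_density:.2f} onsets/s")
print(f"pulse clarity:   {pulse_clarity:.2f} (relative beat strength)")
```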

    From Sound to Significance: Exploring the Mechanisms Underlying Emotional Reactions to Music

    A common approach to studying emotional reactions to music is to attempt to obtain direct links between musical surface features such as tempo and a listener’s responses. However, such an analysis ultimately fails to explain why emotions are aroused in the listener. In this article we explore an alternative approach, which aims to account for musical emotions in terms of a set of psychological mechanisms that are activated by different types of information in a musical event. This approach was tested in 4 experiments that manipulated 4 mechanisms (brain stem reflex, contagion, episodic memory, musical expectancy) by selecting existing musical pieces that featured information relevant for each mechanism. The excerpts were played to 60 listeners, who were asked to rate their felt emotions on 15 scales. Skin conductance levels and facial expressions were measured, and listeners reported subjective impressions of relevance to specific mechanisms. Results indicated that the target mechanism conditions evoked emotions largely as predicted by a multimechanism framework and that mostly similar effects occurred across the experiments that included different pieces of music. We conclude that a satisfactory account of musical emotions requires consideration of how musical features and responses are mediated by a range of underlying mechanisms.

    Collaborative creativity: The Music Room

    In this paper, we reflect on our experience of designing, developing and evaluating interactive spaces for collaborative creativity. In particular, we are interested in designing spaces that allow everybody to compose and play original music. The Music Room is an interactive installation in which couples can compose original music by moving through the space. Following the metaphor of love, the music is automatically generated and modulated in terms of pleasantness and intensity, according to the proxemics cues extracted by a visual tracking algorithm. The Music Room was exhibited during the EU Researchers' Night in Trento, Italy.
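
    A minimal sketch of the kind of mapping the abstract describes, with made-up thresholds and parameter names: two proxemics cues from a visual tracker (interpersonal distance and movement speed) are turned into normalized pleasantness and intensity controls for a generative music engine. This is illustrative only, not the installation's actual algorithm.

```python
# Illustrative mapping from proxemics cues to musical control parameters.
# Thresholds, parameter names and the linear mapping are assumptions.
def proxemics_to_music(distance_m: float, speed_m_s: float,
                       max_distance: float = 5.0, max_speed: float = 2.0) -> dict:
    """Map tracked proxemics cues to normalized musical control parameters."""
    closeness = 1.0 - min(distance_m / max_distance, 1.0)
    activity = min(speed_m_s / max_speed, 1.0)
    return {
        "pleasantness": closeness,   # closer couples -> more consonant material
        "intensity": activity,       # faster movement -> louder, denser texture
    }

print(proxemics_to_music(distance_m=0.8, speed_m_s=1.2))
```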

    Locus of emotion influences psychophysiological reactions to music

    It is now widely accepted that the perception of emotional expression in music can be vastly different from the feelings evoked by it. However, less understood is how the locus of emotion affects the experience of music, that is, how the act of perceiving the emotion in the music compares with the act of assessing the emotion induced in the listener by the music. In the current study, we compared these two emotion loci based on the psychophysiological responses of 40 participants listening to 32 musical excerpts taken from movie soundtracks. Facial electromyography, skin conductance, respiration and heart rate were continuously measured while participants were required to assess either the emotion expressed by the music or the emotion they felt in response to it. Using linear mixed effects models, we found a higher mean response in the psychophysiological measures for the “perceived” task than for the “felt” task. This result suggests that the focus on one’s self distracts from the music, leading to weaker bodily reactions during the “felt” task. In contrast, paying attention to the expression of the music, and consequently to changes in timbre, loudness and harmonic progression, enhances bodily reactions. This study has methodological implications for emotion induction research using psychophysiology and for the conceptualization of emotion loci. Firstly, different tasks can elicit different psychophysiological responses to the same stimulus, and secondly, both tasks elicit bodily responses to music. The latter finding questions the possibility of a listener taking on a purely cognitive mode when evaluating emotion expression.
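
    The sketch below illustrates, with simulated data and assumed column names, the kind of linear mixed effects model mentioned above: a psychophysiological measure predicted by the task factor (felt vs. perceived) with a random intercept per participant. It is not the authors' exact model specification.

```python
# Illustrative linear mixed effects model: measure ~ task, random intercept per subject.
# Data are simulated; column names and effect size are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
participants = np.repeat(np.arange(40), 32)            # 40 listeners x 32 excerpts
task = np.tile(["felt", "perceived"], 40 * 16)          # hypothetical task labels
# Simulated skin conductance response, slightly higher in the "perceived" task
scr = rng.normal(0.5, 0.1, size=40 * 32) + (task == "perceived") * 0.05

df = pd.DataFrame({"participant": participants, "task": task, "scr": scr})

model = smf.mixedlm("scr ~ task", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```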

    Recognizing Emotions in a Foreign Language

    Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can be recognized from a speaker's voice regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language ("in-group advantage"). Our findings argue that the ability to understand vocally expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.