
    Gesture-sound causality from the audience’s perspective: investigating the aesthetic experience of performances with digital musical instruments.

    In contrast to their traditional, acoustic counterparts, digital musical instruments (DMIs) rarely feature a clear, causal relationship between the performer’s actions and the sounds produced. They often function simply as systems for controlling digital sound synthesis, triggering computer-generated audio. This study aims to shed light on how the level of perceived causality of DMI designs affects audience members’ aesthetic responses to new DMIs. In a preliminary survey, 49 concert attendees listed adjectives that described their experience of a number of DMI performances. In a subsequent experiment, 31 participants rated video clips of performances with DMIs with causal and acausal mapping designs, using the eight most popular adjectives from the preliminary survey. The experimental stimuli were presented in their original version and in a manipulated version with a reduced level of gesture-sound causality; the manipulated version was created by placing the audio track of one section of the recording over the video track of a different section. It was predicted that the causal DMIs would be rated more positively and that the manipulation would have a stronger effect on the ratings for the causal DMIs. Our results confirmed these hypotheses and indicate that a lack of perceptible causality has a negative impact on ratings of DMI performances; ratings of the acausal DMIs did not differ significantly between original and manipulated clips. We posit that this result arises from the greater understanding that clearer gesture-sound causality offers spectators. The implications of this result for DMI design and practice are discussed.
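
    A minimal sketch of the kind of comparison described above, assuming a paired design in which the same participants rate both the original and the manipulated clips: ratings are compared within each mapping type (causal vs. acausal). The data, column names, and paired t-tests are illustrative assumptions, not the study's actual analysis.

        # Hypothetical sketch: compare original vs. manipulated ratings within each
        # mapping type (causal / acausal), mirroring the contrast described above.
        # The numbers and column names are invented for illustration only.
        import pandas as pd
        from scipy import stats

        ratings = pd.DataFrame({
            "participant": [1, 2, 3, 4] * 4,
            "mapping":     ["causal"] * 8 + ["acausal"] * 8,
            "version":     (["original"] * 4 + ["manipulated"] * 4) * 2,
            "rating":      [6, 7, 6, 5,  4, 4, 3, 4,   5, 5, 4, 5,  5, 4, 5, 5],
        })

        for mapping, group in ratings.groupby("mapping"):
            orig = group[group["version"] == "original"].sort_values("participant")["rating"]
            manip = group[group["version"] == "manipulated"].sort_values("participant")["rating"]
            t, p = stats.ttest_rel(orig, manip)   # paired: same participants rate both versions
            print(f"{mapping}: t = {t:.2f}, p = {p:.3f}")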

    Towards automatic music recommendation for audio branding scenarios

    Within the MIR community, most prediction models of musical impact on listeners focus on mood or emotional effects (perceived or induced). The ABC_DJ project investigates the associative impact of music on listeners from the specific perspective of the music branding that surrounds us in our everyday lives. We present a general concept for applying automatic music recommendation within this domain. Creating a scientifically validated basic terminology for communicating brand attributes and human emotions in this field is the key challenge. As a first result, we introduce the Music Branding Expert Terminology (MBET), a comprehensive terminology of verbal attributes used in music branding, upon which a prediction model will be developed to facilitate automatic music recommendation in the context of music branding. (Funding: EC/H2020/688122/EU/Artist-to-Business-to-Business-to-Consumer Audio Branding System/ABC_DJ)
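
    As a rough illustration of how a terminology such as the MBET could drive recommendation, the sketch below scores tracks against a brand brief by comparing attribute profiles. The attribute names, track data, and cosine-similarity ranking are assumptions made for illustration; the abstract only announces that a prediction model will be built on top of the MBET.

        # Hypothetical sketch: rank tracks for a brand brief by cosine similarity
        # between attribute profiles. Attribute names and scores are invented.
        import numpy as np

        attributes = ["elegant", "energetic", "warm", "modern"]

        brand_brief = np.array([0.9, 0.2, 0.7, 0.6])          # desired attribute profile
        tracks = {
            "track_a": np.array([0.8, 0.3, 0.6, 0.7]),
            "track_b": np.array([0.1, 0.9, 0.2, 0.8]),
        }

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        ranking = sorted(tracks, key=lambda t: cosine(brand_brief, tracks[t]), reverse=True)
        print(ranking)  # best-matching track first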

    Sound, materiality and embodiment: challenges for the concept of 'musical expertise' in the age of digital mediatization

    This publication is freely accessible with the permission of the rights owner, due to an Alliance licence and a national licence funded by the DFG (German Research Foundation). Within academic music research, 'musical expertise' is often employed as a 'moderator variable' when conducting empirical studies on music listening. Prevalent conceptualizations typically conceive of it as a bundle of cognitive skills acquired through formal musical education. By implicitly drawing on the paradigm of the Western classical live concert, this ignores the fact that, for most people nowadays, the term 'music' refers to electro-acoustically generated sound waves rendered by audio or multimedia electronic devices. Hence, our article challenges the traditional musicologist's view by drawing on empirical findings from three more recent music-related research lines that explicitly include the question of media playback technologies. We conclude by suggesting a revised concept of musical expertise that extends the traditional dimensions and also incorporates expertise gained through ecological perception, material practice and embodied listening experiences in the everyday. Altogether, our contribution shall draw attention to growing convergences between musicology and media and communications research.

    Development and Evaluation of an Interface with Four-Finger Pitch Selection

    In this paper we present an interface for digital musical instruments that is primarily designed for playing monophonic melody synthesizers. The hand-held device allows pitch selection with four valve-like metal mechanics and three octave switches. Note events are triggered with a wooden excitation pad operated with the other hand. Another feature is the advanced aftertouch of the four mechanics and the pad, which enables expressive playing. In a user experiment, the controller is compared to a classic MIDI keyboard with regard to the time needed to respond to simple visual stimuli and the mean error rate produced in that task. The results show no significant difference in response time, but a higher error rate for the novel interface among untrained users. The outcome of this work is a list of necessary improvements, as well as a plan for further experiments.
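
    To give a concrete sense of how four valves and an octave switch could select a pitch, the sketch below maps a valve combination to a MIDI note that is then triggered by the excitation pad. The abstract does not describe the instrument's actual mapping; the binary-offset scheme and base note here are assumptions.

        # Hypothetical sketch: derive a MIDI note from a four-valve combination and an
        # octave switch, and trigger it from the excitation pad. The instrument's real
        # mapping is not described in the abstract; this scheme is an assumption.
        def valve_to_note(valves, octave_switch, base_note=48):
            """valves: tuple of four booleans; octave_switch: 0, 1 or 2."""
            # Treat the valves as a 4-bit number giving a semitone offset (0-15);
            # offsets above 11 simply spill into the next octave.
            offset = sum(bit << i for i, bit in enumerate(valves))
            return base_note + 12 * octave_switch + offset

        def on_pad_hit(valves, octave_switch, velocity):
            note = valve_to_note(valves, octave_switch)
            return ("note_on", note, velocity)   # e.g. forwarded to a MIDI output

        print(on_pad_hit((True, False, True, False), 1, 100))  # -> ('note_on', 65, 100)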

    A comparative analysis of phenotype expression in human osteoblasts from heterotopic ossification and normal bone

    Background and aims: Heterotopic ossification (HO) is a pathological bone formation process in which ectopic bone is formed in soft tissue. The formation of bone depends on the expression of the osteoblast phenotype. Earlier studies have shown conflicting results on the expression of phenotype markers of cells originating from HO and normal bone. The hypothesis of the present study is that cells from HO show an altered expression of osteoblast-specific phenotype markers compared to normal osteoblasts. The aims of the study were to further characterize the expression of osteoblast phenotype markers and to provide a comparison with other study results. Patients and methods: Using an in vitro technique, reverse transcription polymerase chain reaction (RT-PCR), real-time PCR and immunohistochemistry, we compared the phenotype gene expression (type I collagen, alkaline phosphatase, Cbfa-1, osteocalcin) of osteoblasts from resected HO and normal bone (iliac crest). Results: Cells from HO expressed the osteoblast phenotype (type I collagen, alkaline phosphatase) but were characterized by depleted osteocalcin expression. The expression of Cbfa-1 (osteocalcin transcription gene) showed large variation in our study. Preoperative radiotherapy had no effect on phenotype expression in cells from HO. Conclusion: Our results provide a characterization of cells originating from HO and support the thesis of an impaired osteoblast differentiation underlying the formation of HO. The transcription axis from Cbfa-1 to osteocalcin could be involved in the pathogenesis of HO.
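
    For readers unfamiliar with how relative expression is commonly derived from real-time PCR, the sketch below applies the standard 2^(-ddCt) calculation to invented Ct values. It is a generic illustration under that assumption, not the quantification procedure reported in this study.

        # Hypothetical sketch: relative gene expression via the standard 2^(-ddCt) method.
        # Ct values below are invented; the study's actual quantification may differ.
        def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
            d_ct_sample = ct_target_sample - ct_ref_sample      # normalise to reference gene
            d_ct_control = ct_target_control - ct_ref_control
            dd_ct = d_ct_sample - d_ct_control                  # compare HO cells to normal bone
            return 2 ** (-dd_ct)

        # Example: osteocalcin in HO-derived cells vs. iliac-crest osteoblasts,
        # normalised to a housekeeping gene. All values are made up.
        fold_change = relative_expression(28.0, 18.0, 24.5, 18.2)
        print(f"fold change: {fold_change:.2f}")   # < 1 would indicate depleted expression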

    From Motion to Emotion: Accelerometer Data Predict Subjective Experience of Music

    Music is often discussed as being emotional because it reflects expressive movements in audible form. Thus, a valid approach to measuring musical emotion could be to assess movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for the prediction of ratings on the Geneva Emotion Music Scales (GEMS-9). The quality of prediction for different GEMS dimensions varied between experiments (R² in the first and second experiment, respectively): tenderness (0.50, 0.39), nostalgia (0.42, 0.30), wonder (0.25, 0.34), sadness (0.24, 0.35), peacefulness (0.20, 0.35), joy (0.19, 0.33) and transcendence (0.14, 0.00). For others, such as power (0.42, 0.49) and tension (0.28, 0.27), the results could be almost reproduced. Furthermore, we extracted two principal components from the GEMS ratings, one representing arousal and the other valence of the experienced feeling. Both qualities, arousal and valence, could be predicted by acceleration data, indicating that the data provide information on the quantity and quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings. On the other hand, they contribute to integrating findings from the field of embodied music cognition into music recommender systems.
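
    A minimal sketch of the kind of pipeline implied above: summary features are computed from tri-axial acceleration recordings and regressed onto one GEMS dimension, with R² reported on held-out trials. The features, the synthetic data, and the plain linear model are assumptions; the paper does not specify its exact feature set or prediction model.

        # Hypothetical sketch: predict one GEMS-9 rating from accelerometer features.
        # Synthetic data and a plain linear model stand in for the paper's pipeline.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_trials, n_samples = 120, 500                       # listening trials x accel samples
        acc = rng.normal(size=(n_trials, n_samples, 3))      # fake tri-axial acceleration

        # Simple per-trial summary features: movement intensity, variability, jerkiness.
        magnitude = np.linalg.norm(acc, axis=2)
        features = np.column_stack([
            magnitude.mean(axis=1),                          # average movement intensity
            magnitude.std(axis=1),                           # variability of movement
            np.abs(np.diff(magnitude, axis=1)).mean(axis=1)  # mean absolute jerk
        ])

        # Fake "power" ratings loosely driven by movement intensity, plus noise.
        ratings = 2.0 * features[:, 0] + rng.normal(scale=0.5, size=n_trials)

        X_train, X_test, y_train, y_test = train_test_split(features, ratings, random_state=0)
        model = LinearRegression().fit(X_train, y_train)
        print(f"R^2 on held-out trials: {r2_score(y_test, model.predict(X_test)):.2f}")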

    What do you think this is? "Conceptual uncertainty" in geoscience interpretation

    Interpretations of seismic images are used to analyze sub-surface geology and form the basis for many exploration and extraction decisions, but the uncertainty that arises from human bias in seismic data interpretation has not previously been quantified. All geological data sets are spatially limited and have limited resolution. Geoscientists who interpret such data sets must, therefore, rely upon their previous experience and apply a limited set of geological concepts. We have documented the range of interpretations of a single data set, and in doing so have quantified the "conceptual uncertainty" inherent in seismic interpretation. In this experiment, 412 interpretations of a synthetic seismic image were analyzed. Only 21% of the participants interpreted the "correct" tectonic setting of the original model, and only 23% highlighted the three main fault strands in the image. These results illustrate that conceptual uncertainty exists, which in turn explains the large range of interpretations that can result from a single data set. We consider the role of prior knowledge in biasing individuals in their interpretation of the synthetic seismic section, and our results demonstrate that conceptual uncertainty has a critical influence on resource exploration and other areas of geoscience. Practices should be developed to minimize the effects of conceptual uncertainty, and it should be accounted for in risk analysis.
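
    To make the reported agreement figures concrete, the short sketch below turns a count of matching interpretations into a proportion with a Wilson 95% confidence interval. The count is back-calculated from the percentage above, and the choice of interval is an illustrative assumption, not part of the study's analysis.

        # Hypothetical sketch: express inter-interpreter agreement as a proportion with a
        # Wilson 95% confidence interval. The count is back-calculated from the reported
        # figure (21% of 412 interpretations matching the 'correct' tectonic setting).
        from math import sqrt

        def wilson_interval(successes, n, z=1.96):
            p = successes / n
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return centre - half, centre + half

        n = 412
        correct_setting = round(0.21 * n)          # approx. 87 interpretations
        low, high = wilson_interval(correct_setting, n)
        print(f"{correct_setting}/{n} = {correct_setting / n:.2%}, 95% CI [{low:.2%}, {high:.2%}]")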

    Induced Empathy Moderates Emotional Responses to Expressive Qualities in Music

    Recent research has explored the role of empathy in the context of music listening. Through an “empathy priming paradigm”, situational empathy has been shown to act as a causal mechanism in inducing emotion; however, the way empathy was primed had low ecological validity. We therefore conducted an online experiment to explore the extent to which information about a composer’s expressive intentions when writing a piece of music would significantly affect the degree to which participants reportedly empathise with the composer and, in turn, influence emotional responses to expressive music. A total of 229 participants were randomly assigned to three groups. The experimental group read short texts describing the emotions felt by the composer during the process of composition. To control for the effect of text regardless of its content, one control group read texts describing the characteristics of the music they were to hear, and a second control group was not given any textual information. Participants listened to 30-second excerpts of four pieces of music, selected to express emotions from the four quadrants of the circumplex theory of emotion. Having heard each excerpt, participants rated the valence and arousal they experienced and completed a measure of situational empathy. Results show that situational empathy in response to music is significantly associated with trait empathy. Compared with those in the control conditions, participants in the experimental group responded with significantly higher levels of situational empathy. Receiving this text also significantly moderated the effect of the expressiveness of the stimuli on induced emotion, indicating that it induced empathy. We conclude that empathy can be induced during music listening through the provision of information about the specific emotions of a person relating to the music. These findings contribute to an understanding of the psychological mechanisms that underlie emotional responses to music.
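
    A moderation effect of the kind described above is commonly tested as a group × expressiveness interaction; the sketch below shows one such test on invented data. The column names, synthetic ratings, and the choice of an ordinary least-squares model are assumptions rather than the authors' actual analysis.

        # Hypothetical sketch: test whether group (composer text vs. control) moderates
        # the effect of stimulus expressiveness on induced valence. Data are invented.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 240
        df = pd.DataFrame({
            "group": rng.choice(["composer_text", "control"], size=n),
            "expressiveness": rng.uniform(-1, 1, size=n),     # expressed valence of stimulus
        })
        # Simulate a stronger expressiveness effect for the composer-text group.
        slope = np.where(df["group"] == "composer_text", 0.8, 0.4)
        df["valence"] = slope * df["expressiveness"] + rng.normal(scale=0.3, size=n)

        model = smf.ols("valence ~ expressiveness * C(group)", data=df).fit()
        # The interaction coefficient tests the moderation described in the abstract.
        print(model.summary().tables[1])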