
    Current Challenges and Visions in Music Recommender Systems Research

    Music recommender systems (MRS) have experienced a boom in recent years, thanks to the emergence and success of online streaming services, which nowadays make almost all of the world's music available at the user's fingertips. While today's MRS considerably help users find interesting music in these huge catalogs, MRS research still faces substantial challenges. In particular, when it comes to building, incorporating, and evaluating recommendation strategies that integrate information beyond simple user–item interactions or content-based descriptors and dig deep into the very essence of listener needs, preferences, and intentions, MRS research becomes a major endeavor and related publications remain quite sparse. The purpose of this trends-and-survey article is twofold. First, we identify and shed light on what we believe are the most pressing challenges MRS research is facing, from both academic and industry perspectives. We review the state of the art towards solving these challenges and discuss its limitations. Second, we detail possible future directions and visions we contemplate for the further evolution of the field. The article should therefore serve two purposes: giving the interested reader an overview of current challenges in MRS research and providing guidance for young researchers by identifying interesting, yet under-researched, directions in the field.
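    For context on the "simple user–item interactions" baseline the authors contrast with richer listener modelling, below is a minimal, hypothetical sketch of a matrix-factorization recommender trained on a toy interaction matrix. None of the data, dimensions, or hyperparameters come from the article; they are illustrative assumptions only.

```python
# Minimal sketch (not from the article): a plain user-item interaction
# recommender via matrix factorization, the kind of baseline the survey
# argues MRS research needs to move beyond.
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit-feedback matrix: rows = users, columns = tracks,
# entries = listen counts (all values are made up for illustration).
R = np.array([
    [5, 3, 0, 0],
    [4, 0, 0, 1],
    [0, 0, 4, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                            # latent dimensionality (assumed)
P = rng.normal(scale=0.1, size=(n_users, k))     # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))     # item factors
lr, reg = 0.01, 0.05

# Stochastic gradient descent on observed (non-zero) interactions only.
for _ in range(2000):
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

# Recommend the highest-scoring unheard track for user 0.
scores = P[0] @ Q.T
scores[R[0] > 0] = -np.inf          # mask tracks the user already knows
print("recommended track index:", int(np.argmax(scores)))
```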

    Automatic estimation of harmonic tension by distributed representation of chords

    The buildup and release of a sense of tension is one of the most essential aspects of the process of listening to music. A veridical computational model of perceived musical tension would be an important ingredient for many music informatics applications. This paper presents a new approach to modelling harmonic tension based on a distributed representation of chords. The starting hypothesis is that harmonic tension as perceived by human listeners is related, among other things, to the expectedness of harmonic units (chords) in their local harmonic context. We train a word2vec-type neural network to learn a vector space that captures contextual similarity and expectedness, and define a quantitative measure of harmonic tension on top of it. To assess the veridicality of the model, we compare its outputs on a number of well-defined chord classes and cadential contexts to results from pertinent empirical studies in music psychology. Statistical analysis shows that the model's predictions conform very well with empirical evidence obtained from human listeners. Comment: 12 pages, 4 figures. To appear in Proceedings of the 13th International Symposium on Computer Music Multidisciplinary Research (CMMR), Porto, Portugal.
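    As an illustration of the general technique described above (not the authors' implementation), the sketch below trains a word2vec model on toy chord-symbol sequences with gensim and scores a chord's "tension" as its cosine distance from the averaged vectors of its local context, a rough proxy for low expectedness. The corpus, hyperparameters, and the exact tension formula are assumptions.

```python
# Hedged sketch: word2vec-style chord embeddings plus an illustrative
# "unexpectedness" score; not the paper's actual model or data.
import numpy as np
from gensim.models import Word2Vec

# Toy corpus of chord-symbol "sentences"; a real corpus would contain
# many harmonically annotated pieces.
progressions = [
    ["C", "F", "G", "C"],
    ["C", "Am", "F", "G", "C"],
    ["C", "F", "G7", "C"],
    ["Am", "Dm", "G", "C"],
] * 50

model = Word2Vec(progressions, vector_size=16, window=2,
                 min_count=1, sg=1, epochs=50, seed=1)

def tension(context, chord):
    """Cosine distance between a chord and its averaged context vectors
    (an illustrative proxy for low expectedness / high tension)."""
    ctx = np.mean([model.wv[c] for c in context], axis=0)
    v = model.wv[chord]
    cos = ctx @ v / (np.linalg.norm(ctx) * np.linalg.norm(v))
    return 1.0 - cos

# A chord that rarely follows this context should score as more tense.
print(tension(["C", "F"], "G"))
print(tension(["C", "F"], "Am"))
```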

    Action-based effects on music perception

    The classical, disembodied approach to music cognition conceptualizes action and perception as separate, peripheral processes. In contrast, embodied accounts of music cognition emphasize the central role of the close coupling of action and perception. It is well established that perception spurs action tendencies. We present a theoretical framework that captures the ways in which the human motor system and its actions can reciprocally influence the perception of music. The cornerstone of this framework is the common coding theory, which postulates a representational overlap in the brain between the planning, the execution, and the perception of movement. The integration of action and perception in so-called internal models is explained as a result of associative learning processes. Characteristic of internal models is that they allow intended or perceived sensory states to be translated into corresponding motor commands (inverse modeling) and, conversely, the sensory outcomes of planned actions to be predicted (forward modeling). Embodied accounts typically refer to inverse modeling to explain action effects on music perception (Leman, 2007). We extend this account by pinpointing forward modeling as an alternative mechanism by which action can modulate perception. We provide an extensive overview of recent empirical evidence in support of this idea. Additionally, we demonstrate that motor dysfunctions can cause perceptual disabilities, supporting the main idea of the paper that the human motor system plays a functional role in auditory perception. The finding that music perception is shaped by the human motor system and its actions suggests that the musical mind is highly embodied. However, we advocate for a more radical approach to embodied (music) cognition, in the sense that it needs to be considered a dynamical process in which aspects of action, perception, introspection, and social interaction are of crucial importance.
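    As a purely illustrative aid (not a model from the paper), the toy sketch below pairs a linear forward model, which predicts the sensory outcome of a motor command, with its pseudoinverse as an inverse model, which recovers a motor command from an intended sensory state. The mapping and its dimensions are arbitrary assumptions chosen only to mirror the forward/inverse pairing described above.

```python
# Toy linear internal-model pair (illustrative assumption, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))   # forward mapping: 2 motor dims -> 3 sensory dims

def forward_model(motor_command):
    """Predict the sensory outcome of a planned action (forward modeling)."""
    return A @ motor_command

def inverse_model(intended_sensation):
    """Recover a motor command expected to yield the intended sensory state
    (inverse modeling), here via the pseudoinverse of the forward mapping."""
    return np.linalg.pinv(A) @ intended_sensation

m = np.array([0.5, -0.2])      # a planned movement
s_pred = forward_model(m)      # predicted sensory consequence
m_hat = inverse_model(s_pred)  # inverse model recovers the plan
print(np.allclose(m, m_hat))   # True for this linear toy case
```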

    Single chords convey distinct emotional qualities to both naïve and expert listeners.

    Previous research on music and emotions has been able to pinpoint many structural features conveying emotions. Empirical research on the emotional qualities of vertical harmony, however, has been rare. The main studies in harmony and emotions usually concern the horizontal aspects of harmony, ignoring the emotional qualities of chords as such. An empirical experiment was conducted in which participants (N = 269) evaluated pre-chosen chords on a 9-item scale of given emotional dimensions. Fourteen different chords (major, minor, diminished, and augmented triads, and dominant, major, and minor seventh chords with inversions) were played with two distinct timbres (piano and strings). The results suggest significant differences in emotion perception across chords. These were consistent with notions about musical conventions, while providing novel data on how seventh chords affect emotion perception. The inversions and timbre also contributed to the evaluations. Moreover, certain chords played on the strings scored moderately high on the dimension of ‘nostalgia/longing,’ which is usually held to be a musical emotion arising only from extra-musical connotations and conditioning, not intrinsically from the structural features of the music. The role of background variables in the results was largely negligible, suggesting the capacity of vertical harmony to convey distinct emotional qualities to both naïve and expert listeners.

    Tempo and intensity of pre-task music modulate neural activity during reactive task performance

    Research has shown not only that young athletes purposively use music to manage their emotional state (Bishop, Karageorghis, & Loizou, 2007), but also that brief periods of music listening may facilitate their subsequent reactive performance (Bishop, Karageorghis, & Kinrade, 2009). We report an fMRI study in which young athletes lay in an MRI scanner and listened to a popular music track immediately prior to performing a three-choice reaction time task; intensity and tempo were modified such that six excerpts (2 intensities × 3 tempi) were created. Neural activity was measured throughout. Faster tempi and higher intensity collectively yielded activation, during reactive performance, in structures integral to visual perception (inferior temporal gyrus), allocation of attention (cuneus, inferior parietal lobule, supramarginal gyrus), and motor control (putamen). The implications for music listening as a pre-competition strategy in sport are discussed.

    Beyond pitch/duration scoring: Towards a system dynamics model of electroacoustic music

    Based on a hierarchy of discrete pitches and metrically sub-divisible durations, Western tonal art music is usually modelled through printed music scores. Scoring acoustic musical events beyond this paradigm has resulted in non-standard graphs in two dimensions. New digitally generated ‘soundscape’ forms are often not conceived or understandable within traditional musical paradigms or notation models, and often explore attributes of music, such as spatial processing, that fall outside two-dimensional graphic scoring. To date, there is no commonly accepted model that approximates the structural dynamics of electroacoustic music while providing a conceptual framework as independent of the music as standard music notation is. Building on recent work in spectro-morphology as a way of explaining sound shapes, a system dynamics model is proposed by mapping a dynamic taxonomy for structural listening as an aid to composition. This approach captures formal, but not semiotic, discourse.

    Gender differences in the temporal voice areas

    There is not only evidence for behavioral differences in voice perception between female and male listeners, but also recent suggestions of differences in neural correlates between genders. The fMRI functional voice localizer (comprising a univariate analysis contrasting stimulation with vocal versus non-vocal sounds) is known to give robust estimates of the temporal voice areas (TVAs). However, there is growing interest in employing multivariate analysis approaches to fMRI data (e.g., multivariate pattern analysis, MVPA). The aim of the current study was to localize voice-related areas in both female and male listeners and to investigate whether brain maps differ depending on the gender of the listener. After a univariate analysis, a random effects analysis was performed on female (n = 149) and male (n = 123) listeners and contrasts between them were computed. In addition, MVPA with a whole-brain searchlight approach was implemented, and the classification maps were entered into a second-level, permutation-based random effects model using statistical non-parametric mapping (SnPM; Nichols & Holmes, 2002). Gender differences were found only in the MVPA. The identified regions were located in the middle part of the middle temporal gyrus (bilateral) and the middle superior temporal gyrus (right hemisphere). Our results suggest differences in classifier performance between genders in response to the voice localizer, with higher classification accuracy from local BOLD signal patterns in several temporal-lobe regions in female listeners.
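    To make the MVPA idea concrete, the sketch below is a simplified, hypothetical stand-in for a single searchlight sphere: it classifies synthetic vocal versus non-vocal trial patterns and assesses accuracy with a permutation test. It does not reproduce the study's pipeline, data, or SnPM group-level analysis; all sizes and effect strengths are assumptions.

```python
# Illustrative sketch only: classify vocal vs. non-vocal trials from a
# local patch of voxel responses and test accuracy by permutation,
# mimicking one searchlight sphere of an MVPA analysis. Data are synthetic.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, permutation_test_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 50          # assumed sizes for the toy example
y = np.repeat([0, 1], n_trials // 2)  # 0 = non-vocal, 1 = vocal
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1, :5] += 0.8                  # weak "voice-selective" signal

clf = LinearSVC(dual=False)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, cv=cv, n_permutations=500, random_state=0)

print(f"accuracy = {score:.2f}, permutation p = {p_value:.3f}")
# A full searchlight would repeat this for a sphere centred on every voxel
# and enter the resulting accuracy maps into a group-level analysis.
```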