Methodological considerations concerning manual annotation of musical audio in function of algorithm development
In research on musical audio-mining, annotated music databases are needed to support the development of computational tools that extract from the musical audio stream the kind of high-level content that users can deal with in Music Information Retrieval (MIR) contexts. The notion of musical content, and therefore the notion of annotation, is ill-defined, however, in both the syntactic and the semantic sense. As a consequence, annotation has been approached from a variety of perspectives (though mainly linguistic-symbolic ones), and a general methodology is lacking. This paper is a step towards the definition of a general framework for manual annotation of musical audio in function of a computational approach to musical audio-mining that is based on algorithms that learn from annotated data.
Revealing spatio-spectral electroencephalographic dynamics of musical mode and tempo perception by independent component analysis.
Background: Music conveys emotion by manipulating musical structures, particularly musical mode and tempo. The neural correlates of musical mode and tempo perception revealed by electroencephalography (EEG) have not been adequately addressed in the literature.
Method: This study used independent component analysis (ICA) to systematically assess spatio-spectral EEG dynamics associated with changes in musical mode and tempo.
Results: Empirical results showed that major-mode music augmented delta-band activity over the right sensorimotor cortex, suppressed theta activity over the superior parietal cortex, and moderately suppressed beta activity over the medial frontal cortex, compared to minor-mode music, whereas fast-tempo music engaged significant alpha suppression over the right sensorimotor cortex.
Conclusion: The resultant EEG brain sources were comparable with those of previous studies obtained by other neuroimaging modalities, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). In conjunction with advanced dry and mobile EEG technology, these results might facilitate the translation from laboratory-oriented research to real-life applications for music therapy, training, and entertainment in naturalistic environments.
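The ICA decomposition described above can be illustrated with a minimal sketch: simulated multichannel "EEG" is built by linearly mixing two oscillatory sources, and scikit-learn's FastICA recovers the independent components. All signal parameters (sampling rate, source frequencies, channel count) are illustrative assumptions, not taken from the study.

```python
# Minimal sketch of ICA source separation on simulated EEG-like data.
# Assumed, illustrative parameters: 8 channels, 250 Hz, 10 Hz and 4 Hz sources.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / 250)  # 4 s at 250 Hz -> 1000 samples

# Two independent sources: an alpha-like sinusoid and a theta-like square wave
sources = np.c_[np.sin(2 * np.pi * 10 * t), np.sign(np.sin(2 * np.pi * 4 * t))]
mixing = rng.normal(size=(8, 2))               # 8 "scalp channels", random mixing
eeg = sources @ mixing.T + 0.05 * rng.normal(size=(len(t), 8))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(eeg)             # estimated independent components
print(recovered.shape)                         # (1000, 2)
```

In a real EEG study the components would then be localized and their band-power changes (delta, theta, alpha, beta) compared across conditions, as in the abstract above.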
Predictive information in Gaussian processes with application to music analysis
This is the author's accepted manuscript of this article. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-40020-9 (Lecture Notes in Computer Science). We describe an information-theoretic approach to the analysis of sequential data, which emphasises the predictive aspects of perception and the dynamic process of forming and modifying expectations about an unfolding stream of data, characterising these using a set of process information measures. After reviewing the theoretical foundations and the definition of the predictive information rate, we describe how this can be computed for Gaussian processes, including how the approach can be adapted to non-stationary processes, using an online Bayesian spectral estimation method to compute the Bayesian surprise. We finish with a sample analysis of a recording of Steve Reich's Drumming.
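The predictive measures referred to above can be made concrete in the simplest Gaussian case. For a stationary Gaussian AR(1) process x[t] = a·x[t-1] + e[t], successive samples have correlation a, so the mutual information between one sample and the next is I = -½·ln(1 - a²) nats. This toy quantity is only a stand-in for the paper's more general process information measures; the function name below is a hypothetical illustration.

```python
# Toy illustration: information carried by the past about the next sample
# of a Gaussian AR(1) process (an assumed, simplified stand-in for the
# predictive information measures discussed in the abstract).
import math

def ar1_predictive_information(a: float) -> float:
    """I(x[t]; x[t+1]) in nats for AR(1) with coefficient |a| < 1.

    Successive samples have correlation a, and for jointly Gaussian
    variables with correlation r, I = -0.5 * ln(1 - r**2).
    """
    assert abs(a) < 1, "process must be stationary"
    return -0.5 * math.log(1.0 - a ** 2)

for a in (0.1, 0.5, 0.9):
    print(f"a = {a}: {ar1_predictive_information(a):.3f} nats")
```

As the coefficient approaches 1 the process becomes more predictable from its past and the measure grows without bound, matching the intuition that strongly correlated streams carry more predictive information.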
Transient Analysis for Music and Moving Images: Consideration for Television Advertising
In audiovisual composition, coupling montage moving images with music is common practice, yet how such coupling shapes an audioviewer's interpretation of the composition remains discursive and unquantified. A methodology for evaluating audiovisual multimodal interactivity is proposed, developing an analysis procedure via the study of modality-interdependent transient structures, explained as forming the foundation of perception via the concept of the Basic Exposure response to the stimulus. The research has implications for the analysis of all audiovisual media, with practical implications in television advertising as a discrete typology of target-driven audiovisual presentation. Examples from contemporary advertising are used to explore typical transient interaction patterns, and the consequences are discussed from the practical viewpoint of the audiovisual composer.
Spring School on Language, Music, and Cognition: Organizing Events in Time
The interdisciplinary spring school “Language, music, and cognition: Organizing events in time” was held from February 26 to March 2, 2018 at the Institute of Musicology of the University of Cologne. Language, speech, and music as events in time were explored from different perspectives including evolutionary biology, social cognition, developmental psychology, cognitive neuroscience of speech, language, and communication, as well as computational and biological approaches to language and music. There were 10 lectures, 4 workshops, and 1 student poster session.
Overall, the spring school investigated language and music as neurocognitive systems and focused on a mechanistic approach exploring the neural substrates underlying musical, linguistic, social, and emotional processes and behaviors. In particular, researchers approached questions concerning cognitive processes, computational procedures, and neural mechanisms underlying the temporal organization of language and music, mainly from two perspectives: one was concerned with syntax or structural representations of language and music as neurocognitive systems (i.e., an intrapersonal perspective), while the other emphasized social interaction and emotions in their communicative function (i.e., an interpersonal perspective). The spring school not only acted as a platform for knowledge transfer and exchange but also generated a number of important research questions as challenges for future investigations.
Relationships between musical features and music-evoked emotions and memories
Objectives
The socioemotional health benefits of music have been recognized for a long time. In particular, the ability of music to evoke emotions has led researchers to examine the relationships between emotions and specific properties of music. Emotional intensity is also known to be linked to more efficient consolidation and recall of autobiographical memories. Music and autobiographical memories are known to be largely processed by the same neural system, especially in the medial prefrontal cortex. However, the relationship between musical properties and music-evoked autobiographical memories (MEAMs) has not been studied before. The first research question of this study was whether certain acoustic (musical) features can explain the autobiographical salience of a song. The second research question was to determine whether that relationship is mediated by subjective emotions evoked by the song, especially the intensity of the evoked emotions.
Methods
Participants (n = 113, 86 females) were healthy older adults aged between 60 and 86 years (M = 70.72, SD = 5.39). Participants listened to 70 song excerpts during the experiment and rated them on valence, arousal, emotional intensity, familiarity, and autobiographical memories evoked by the song. The musical features of the songs were extracted using music information retrieval (MIR) software, followed by principal component analysis. The relationship between musical features and listeners' ratings was assessed using regression analyses.
Main results and conclusions
Lower pulse strength, brightness, and fluctuation in low-middle frequencies were the best predictors of higher autobiographical salience, familiarity, and emotional responses evoked by the songs. The intensity of emotions and, to a lesser extent, pleasantness had a mediating effect on the relationship between musical features and autobiographical salience. These results add to the still scarce knowledge about MEAMs in the context of specific musical features.
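The feature-reduction and regression pipeline described in the Methods can be sketched as follows. The abstract does not name the MIR software or the exact model specification, so synthetic feature data stand in for real extracted features, and all dimensions and coefficients are illustrative assumptions.

```python
# Sketch of the analysis pipeline: features per song -> PCA -> regression
# of listener ratings on the principal components. Synthetic data only;
# feature count, component count, and weights are assumed for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_songs, n_features = 70, 12                    # 70 excerpts, as in the study
features = rng.normal(size=(n_songs, n_features))  # e.g. pulse strength, brightness, ...

pca = PCA(n_components=3)
components = pca.fit_transform(features)        # principal components per song

# Simulated ratings (e.g. autobiographical salience) driven by the components
ratings = components @ np.array([0.6, -0.3, 0.1]) + 0.1 * rng.normal(size=n_songs)
model = LinearRegression().fit(components, ratings)
print(model.coef_, model.score(components, ratings))
```

In the actual study the mediation question (whether emotional intensity carries the effect of features on salience) would additionally require a mediation model rather than a single regression; this sketch covers only the direct feature-to-rating step.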