38 research outputs found

    Diffusion map for clustering fMRI spatial maps extracted by independent component analysis

    Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). A dataset contains n spatial maps of p voxels each, and the number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This generally works well, although such a similarity matrix can inherently explain only a certain amount of the total variance contained in high-dimensional data where n is relatively small but p is large. For such a high-dimensional space, it is reasonable to perform dimensionality reduction before clustering. In this research, we used the recently developed diffusion map for dimensionality reduction in conjunction with spectral clustering. The diffusion-map-based clustering worked as well as the more traditional methods and produced more compact clusters when needed. Comment: 6 pages, 8 figures. Copyright (c) 2013 IEEE. Published at the 2013 IEEE International Workshop on Machine Learning for Signal Processing.
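    A minimal sketch of the approach this abstract describes, assuming synthetic data: build a diffusion-map embedding of the n spatial maps and cluster in the reduced space. The kernel-width heuristic, component count, and cluster count below are illustrative assumptions, not the paper's actual settings.

```python
# Diffusion-map embedding of n spatial maps (rows) with p voxels (columns),
# followed by spectral clustering in the reduced space. All parameter
# values here are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import SpectralClustering

def diffusion_map(X, n_components=5, t=1, eps=None):
    # Pairwise squared Euclidean distances between spatial maps.
    D2 = squareform(pdist(X, metric="sqeuclidean"))
    if eps is None:
        eps = np.median(D2)  # common heuristic for the kernel width
    K = np.exp(-D2 / eps)    # Gaussian affinity matrix
    P = K / K.sum(axis=1, keepdims=True)  # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial constant eigenvector; scale by eigenvalues^t.
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 10000))  # n = 40 maps, p = 10,000 voxels
embedding = diffusion_map(X)
labels = SpectralClustering(n_clusters=4, random_state=0).fit_predict(embedding)
```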

    A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics

    Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
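    The spectral centroid invoked above as a correlate of perceptual brightness is simple to compute; the sketch below uses librosa as one possible tool, a placeholder choice rather than the toolchain the study itself reports.

```python
# Illustrative only: mean spectral centroid ("brightness") of an excerpt.
# The file name is a placeholder; librosa is an assumed tool.
import numpy as np
import librosa

y, sr = librosa.load("excerpt.wav", sr=None)          # hypothetical file
cent = librosa.feature.spectral_centroid(y=y, sr=sr)  # one value per frame
print(f"mean spectral centroid: {cent.mean():.1f} Hz")
```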

    Acoustic, neural, and perceptual correlates of polyphonic timbre

    Effect of Enculturation on the Semantic and Acoustic Correlates of Polyphonic Timbre

    Polyphonic timbre perception was investigated in a cross-cultural context wherein Indian and Western nonmusicians rated short Indian and Western popular music excerpts (1.5 s, n = 200) on eight bipolar scales. Intrinsic dimensionality estimation revealed a higher number of perceptual dimensions in the timbre space for music from one’s own culture. Factor analyses of Indian and Western participants’ ratings resulted in highly similar factor solutions. The acoustic features that predicted the perceptual dimensions were similar across the two participant groups. Furthermore, both the perceptual dimensions and their acoustic correlates matched closely with the results of a previous study performed with Western musicians as participants. Regression analyses yielded relatively well-performing models for the perceptual dimensions, and the models displayed relatively high cross-validation performance. The findings suggest the presence of universal patterns in polyphonic timbre perception while demonstrating an increase in the dimensionality of timbre space as a result of enculturation.
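    A hedged sketch of the analysis pipeline this abstract outlines, with synthetic stand-ins for the data: factor-analyze the bipolar-scale ratings into a small number of perceptual dimensions, then score cross-validated linear models that predict each dimension from acoustic descriptors. The descriptor count (12) is an assumption for illustration.

```python
# Factor analysis of ratings + cross-validated regression from acoustic
# features. All arrays below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
ratings = rng.standard_normal((200, 8))    # 200 excerpts x 8 bipolar scales
features = rng.standard_normal((200, 12))  # 12 acoustic descriptors (assumed)

dims = FactorAnalysis(n_components=3).fit_transform(ratings)

for k in range(dims.shape[1]):
    r2 = cross_val_score(LinearRegression(), features, dims[:, k],
                         cv=5, scoring="r2")
    print(f"dimension {k + 1}: mean cross-validated R^2 = {r2.mean():.2f}")
```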

    In search of perceptual and acoustical correlates of polyphonic timbre

    Polyphonic timbre refers to the overall timbre mixture of a music signal, or in simple terms, the 'global sound' of a piece of music. It has proven to be an important element in computational categorization by genre, style, mood, and emotion, but its perceptual constituents have been less investigated. The aim of this study was to determine the most salient features of polyphonic timbre perception by investigating the descriptive auditory qualities of music and mapping acoustic features onto these descriptors. Descriptors of monophonic timbre taken from previous literature were used as a starting point. Based on three pilot studies, eight scales were chosen for the main experiment. Short musical excerpts from Indian popular music were rated on these scales. Relatively high agreement between the participants’ ratings was observed. A factor analysis of the scales suggested three perceptual dimensions. Acoustic descriptors were computationally extracted from each stimulus using signal processing and correlated with the perceptual dimensions. The present findings imply that there may be regularities and patterns in the way people perceive polyphonic timbre. Furthermore, most of the descriptors can be predicted relatively well from the acoustic features of the music. Finally, the results suggest that spectrotemporal modulations are the most relevant in the perception of polyphonic timbre.
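    The descriptor-to-dimension mapping described here can be illustrated by simple correlation; the descriptor names and all values in the sketch below are assumptions for illustration only.

```python
# Correlating computationally extracted descriptors with perceptual
# dimensions (factor scores). Data and names are synthetic stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_excerpts = 100
names = ["spectral flux", "sub-band flux", "zero-crossing rate"]  # assumed
acoustic = rng.standard_normal((n_excerpts, len(names)))  # descriptor values
perceptual = rng.standard_normal((n_excerpts, 3))         # factor scores

for j, name in enumerate(names):
    for k in range(perceptual.shape[1]):
        r, p = pearsonr(acoustic[:, j], perceptual[:, k])
        print(f"{name} vs dimension {k + 1}: r = {r:+.2f}, p = {p:.3f}")
```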

    Timbre and affect dimensions: Evidence from affect and similarity ratings and acoustic correlates of isolated instrument sounds

    Considerable effort has been made towards understanding how acoustic and structural features contribute to emotional expression in music, but relatively little attention has been paid to the role of timbre in this process. Our aim was to investigate the role of timbre in the perception of affect dimensions in isolated musical sounds, by way of three behavioral experiments. In Experiment 1, participants evaluated perceived affects of 110 instrument sounds that were equal in duration, pitch, and dynamics, using a three-dimensional affect model (valence, energy arousal, and tension arousal) along with ratings of preference and emotional intensity. In Experiment 2, an emotional dissimilarity task was applied to a subset of the instrument sounds used in Experiment 1 to better reveal the underlying affect structure. In Experiment 3, the perceived affect dimensions as well as preference and intensity of a new set of 105 instrument sounds were rated by participants. These sounds were also uniform in pitch, duration, and playback dynamics but contained systematic manipulations in the dynamics of sound production, articulation, and ratio of high-frequency to low-frequency energy. The affect dimensions for all the experiments were then explained in terms of the three kinds of acoustic features extracted: spectral (e.g., ratio of high-frequency to low-frequency energy), temporal (e.g., attack slope), and spectrotemporal (e.g., spectral flux). High agreement among the participants’ ratings across the experiments suggested that even isolated instrument sounds contain cues that indicate affective expression, and these are recognized as such by the listeners. A dominant portion (50-57%) of the two dimensions of affect (valence and energy arousal) could be predicted by linear combinations of a few acoustic features such as ratio of high-frequency to low-frequency energy, attack slope, and spectral regularity. Links between these features and those observed in the vocal expression of affects and other sound phenomena are discussed.
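    As an illustration of the kind of linear model reported above, the sketch below fits rated valence from three stand-in predictors; all values are synthetic placeholders, not the study's data.

```python
# Predicting a rated affect dimension (valence here) from a few acoustic
# features via a linear combination. Feature columns are stand-ins for the
# predictors named in the abstract; data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_sounds = 110
# Columns: high/low-frequency energy ratio, attack slope, spectral
# regularity (illustrative assumptions).
X = rng.standard_normal((n_sounds, 3))
valence = X @ np.array([0.5, 0.4, -0.3]) + 0.3 * rng.standard_normal(n_sounds)

model = LinearRegression().fit(X, valence)
print("R^2:", round(model.score(X, valence), 2))  # proportion explained
print("weights:", np.round(model.coef_, 2))
```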