    Toward an ecological conception of timbre

    This paper is part of a series on which we have worked over the last six months; specifically, it investigates the notion of timbre through the ecological perspective proposed by James Gibson in his theory of direct perception. First, we discuss the traditional approach to timbre, mainly as developed in acoustics and psychoacoustics. We then propose a new conception of timbre grounded in the concepts of the ecological approach. The ecological approach to perception proposed by Gibson (1966, 1979) presupposes a level of analysis of perceptual stimuli that includes, but is broader than, the usual physical aspect. Gibson suggests focusing on the relationship between the perceiver and its environment. At the core of this approach is the notion of affordances: invariant combinations of properties at the ecological level, taken with reference to the anatomy and action systems of a species or individual, and also with reference to its biological and social needs. Objects and events are understood as related to a perceiving organism by means of structured information, thus affording possibilities of action by the organism. Event perception aims at identifying properties of events that specify changes in the environment relevant to the organism. The perception of form is understood as a special instance of event perception, in which the identity of an object depends on the nature of the events in which it is involved and on what remains invariant over time. From this perspective, perception is not in any sense created by the brain but is part of the world in which information can be found. Consequently, the ecological approach represents a form of direct realism that opposes the indirect realism underlying predominant approaches to perception borrowed from psychoacoustics and computational modeling.

    It's not what you play, it's how you play it: timbre affects perception of emotion in music.

    Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies, each representing one of four emotions (happiness, sadness, fear, or anger), were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18-30 years, 24 older adults aged 58-75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model, we found a significant interaction between instrument and emotion judgement, with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment, using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.
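    The core question above is whether the distribution of emotion judgements depends on the instrument. The study itself used a generalized linear mixed model; as a much simpler stand-in, the association between instrument and judged emotion can be illustrated with a chi-square test of independence on a contingency table. All counts below are invented for illustration and do not reproduce the study's data.

    ```python
    # Simplified illustration: does judged emotion depend on instrument?
    # (The paper used a GLMM; a chi-square test of independence is a
    # coarser stand-in on a fabricated 4x4 count table.)
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: instruments (synthesizer, piano, violin, trumpet)
    # Columns: judged emotion (happiness, sadness, fear, anger)
    counts = np.array([
        [30, 20, 25, 25],
        [35, 30, 15, 20],
        [20, 35, 25, 20],
        [40, 15, 20, 25],
    ])

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
    ```

    A small p-value here would indicate that emotion judgements and instrument are not independent, which is the pattern (not the method) the interaction effect in the study describes.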

    Multiple-F0 estimation of piano sounds exploiting spectral structure and temporal evolution

    This paper proposes a system for multiple fundamental frequency estimation of piano sounds using pitch candidate selection rules which employ spectral structure and temporal evolution. As a time-frequency representation, the Resonator Time-Frequency Image of the input signal is employed, a noise suppression model is used, and a spectral whitening procedure is performed. In addition, a spectral flux-based onset detector is employed in order to select the steady-state region of the produced sound. In the multiple-F0 estimation stage, tuning and inharmonicity parameters are extracted and a pitch salience function is proposed. Pitch presence tests are performed utilizing information from the spectral structure of pitch candidates, aiming to suppress errors occurring at multiples and sub-multiples of the true pitches. A novel feature for the estimation of harmonically related pitches is proposed, based on the common amplitude modulation assumption. Experiments are performed on the MAPS database using 8784 piano samples of classical, jazz, and random chords with polyphony levels between 1 and 6. The proposed system is computationally inexpensive, being able to perform multiple-F0 estimation experiments in real time. Experimental results indicate that the proposed system outperforms state-of-the-art approaches for the aforementioned task in a statistically significant manner. Index Terms: multiple-F0 estimation, resonator time-frequency image, common amplitude modulation
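    The spectral flux-based onset detection mentioned above can be sketched as follows: spectral flux sums the positive frame-to-frame increases in magnitude spectra, so it peaks where new energy appears (a note onset). This is a generic sketch; the paper's actual detector, window, hop, and thresholding choices are not specified here, and the parameters below are arbitrary.

    ```python
    # Minimal spectral-flux onset detector (generic sketch, not the
    # paper's exact implementation). Flux peaks mark likely onsets,
    # after which a steady-state region can be selected.
    import numpy as np

    def spectral_flux(signal, frame_len=1024, hop=512):
        """Half-wave-rectified frame-to-frame magnitude-spectrum difference."""
        window = np.hanning(frame_len)
        n_frames = 1 + (len(signal) - frame_len) // hop
        prev_mag = np.zeros(frame_len // 2 + 1)
        flux = np.zeros(n_frames)
        for i in range(n_frames):
            frame = signal[i * hop : i * hop + frame_len] * window
            mag = np.abs(np.fft.rfft(frame))
            diff = mag - prev_mag
            flux[i] = np.sum(np.maximum(diff, 0.0))  # keep only energy increases
            prev_mag = mag
        return flux

    # Toy example: silence followed by a 440 Hz tone; flux peaks near the onset.
    sr = 16000
    sig = np.concatenate([np.zeros(sr // 2),
                          0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)])
    flux = spectral_flux(sig)
    onset_frame = int(np.argmax(flux))
    print("onset near sample", onset_frame * 512)
    ```

    Half-wave rectification is what makes the detector respond to energy appearing rather than disappearing, which is why it suits onset (not offset) detection.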

    Investigating the effect of long-term musical experience on the auditory processing skills of young Maltese adults

    Learning and practising a musical instrument has recently been thought to 'train' the brain into processing sound in a more refined manner. As a result, musicians with consistent exposure to musical practice have been suspected to have superior auditory processing skills. This study aimed to investigate this phenomenon within the Maltese context by testing two cohorts of young Maltese adults. Participants in the musician cohort had experienced consistent musical training throughout their lifetime, while those in the non-musician cohort had no history of musical training. A total of 24 Maltese speakers (14 musicians and 10 non-musicians) aged between 19 and 31 years were tested for Frequency Discrimination (FD), Duration Discrimination (DD), Temporal Resolution (TR), and speech-in-noise recognition. The main outcomes of each cohort were compared and analysed statistically. Compared to the non-musician cohort, the musicians performed slightly better throughout testing, but a statistically significant advantage was, surprisingly, present only in the FD test. Although musicians displayed a degree of superiority on the other tests, differences in mean scores were not statistically significant. The results of this investigation are to a degree consistent with previous research, in that the effect of long-term musical experience on the trained cohort manifested itself as a slight superiority in performance on auditory processing tasks. However, this difference in scoring was not prominent enough to be statistically significant.
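    The between-group comparison described above (14 musicians vs. 10 non-musicians on each test) is commonly run as an independent-samples t-test per outcome measure. The sketch below uses fabricated placeholder scores, so it illustrates only the analysis, not the study's result; the study's actual statistical procedure is not specified in the abstract.

    ```python
    # Sketch of a per-test between-group comparison: independent-samples
    # t-test on frequency-discrimination (FD) scores. All scores are
    # fabricated placeholders, not the study's data.
    from scipy.stats import ttest_ind

    musicians     = [92, 88, 95, 90, 87, 93, 91, 89, 94, 90, 88, 92, 91, 93]  # n = 14
    non_musicians = [84, 80, 86, 82, 79, 85, 83, 81, 84, 82]                  # n = 10

    t_stat, p_value = ttest_ind(musicians, non_musicians)
    print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
    ```

    With four outcome measures tested, a correction for multiple comparisons (e.g. Bonferroni) would typically be applied before declaring any single test significant.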

    Multimodal music information processing and retrieval: survey and future challenges

    Towards improving performance on various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the application it addresses. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.
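    One of the information fusion approaches such surveys typically distinguish is late (decision-level) fusion, where each modality's classifier produces a class-probability vector and the vectors are combined. The sketch below shows the simplest unweighted variant; the modality names and probabilities are invented for illustration and are not taken from the paper.

    ```python
    # Minimal late (decision-level) fusion sketch: average per-modality
    # class probabilities and pick the argmax. Values are invented.
    import numpy as np

    # Hypothetical genre probabilities for one track from two modalities.
    audio_probs  = np.array([0.6, 0.3, 0.1])   # e.g. from an audio model
    lyrics_probs = np.array([0.2, 0.7, 0.1])   # e.g. from a lyrics model

    fused = (audio_probs + lyrics_probs) / 2    # unweighted late fusion
    print("fused probabilities:", fused)
    print("predicted class:", int(np.argmax(fused)))
    ```

    The contrast is with early fusion, where per-modality features are concatenated before a single model is trained; late fusion lets each modality keep its own model at the cost of ignoring cross-modal feature interactions.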