24 research outputs found

    Compression-based Modelling of Musical Similarity Perception

    Similarity is an important concept in music cognition research since the similarity between (parts of) musical pieces determines perception of stylistic categories and structural relationships between parts of musical works. The purpose of the present research is to develop and test models of musical similarity perception inspired by a transformational approach which conceives of similarity between two perceptual objects in terms of the complexity of the cognitive operations required to transform the representation of the first object into that of the second, a process which has been formulated in information-theoretic terms. Specifically, computational simulations are developed based on compression distance in which a probabilistic model is trained on one piece of music and then used to predict, or compress, the notes in a second piece. The more predictable the second piece according to the model, the more efficiently it can be encoded and the greater the similarity between the two pieces. The present research extends an existing information-theoretic model of auditory expectation (IDyOM) to compute compression distances varying in symmetry and normalisation using high-level symbolic features representing aspects of pitch and rhythmic structure. Comparing these compression distances with listeners’ similarity ratings between pairs of melodies collected in three experiments demonstrates that the compression-based model provides a good fit to the data and allows the identification of representations, model parameters and compression-based metrics that best account for musical similarity perception. The compression-based model also shows comparable performance to the best-performing algorithms on the MIREX 2005 melodic similarity task.
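    As a rough illustration of the compression-distance idea, the sketch below substitutes a general-purpose compressor (zlib) for the probabilistic IDyOM model described above, and encodes melodies as hypothetical pitch-interval strings. It computes the normalised compression distance NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), which falls as one piece becomes more predictable given the other; the melodies and encoding are invented for illustration only.

    ```python
    import zlib

    def c(data: bytes) -> int:
        """Compressed length in bytes (zlib, maximum compression level)."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalised compression distance: lower values mean greater similarity."""
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Melodies encoded as hypothetical pitch-interval strings.
    motif = b"+2+2-4+2+2+1-5"
    melody_a = motif * 6                      # a motif repeated six times
    melody_b = motif * 5 + b"+2+2-4+2+2+1-4"  # same motif, last note varied
    melody_c = b"-7+3+9-1-2+6+4" * 6          # unrelated interval material

    # The shared motif lets melody_b be encoded cheaply given melody_a.
    assert ncd(melody_a, melody_b) < ncd(melody_a, melody_c)
    ```

    The abstract's distances "varying in symmetry and normalisation" correspond to different choices of numerator and denominator in formulas of this family; the version above is the symmetric, normalised variant.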

    The German Music@Home: Validation of a questionnaire measuring at home musical exposure and interaction of young children.

    The present study introduces the German version of the Music@Home questionnaire, originally developed in the UK, which systematically evaluates musical engagement in the home environment of young children. Two versions are available, an Infant version for children aged three to 23 months and a Preschool version for children aged two to five and a half years. For the present study, the original Music@Home questionnaire was translated from English into German and 656 caregivers completed the questionnaire online. A confirmatory factor analysis showed moderate to high fit indices for both versions, confirming the factor structure of the original questionnaire. The reliability coefficients for the subscales (Parental beliefs, Child engagement with music, Parent initiation of singing, and Parent initiation of music-making for the Infant version; Parental beliefs, Child engagement with music, Parent initiation of music behavior, and Breadth of musical exposure for the Preschool version) ranged from moderate to high. Furthermore, the test-retest analysis (N = 392) revealed high correlations for the general factor and all subscales, confirming their reliability. Additionally, we included language questionnaires for children of two and three years of age. Results showed that higher scores on the Music@Home questionnaire were moderately associated with better language skills in two-year-olds (N = 118). In sum, the study presents the validated German Music@Home questionnaire, which shows good psychometric properties. The two versions of the questionnaire are available for use in order to assess home musical engagement of young children, which could be of interest in many areas of developmental research.

    The Timbre Perception Test (TPT): A new interactive musical assessment tool to measure timbre perception ability

    To date, tests that measure individual differences in the ability to perceive musical timbre are scarce in the published literature. The lack of such a tool limits research on how timbre, a primary attribute of sound, is perceived and processed among individuals. The current paper describes the development of the Timbre Perception Test (TPT), in which participants use a slider to reproduce heard auditory stimuli that vary along three important dimensions of timbre: envelope, spectral flux, and spectral centroid. With a sample of 95 participants, the TPT was calibrated and validated against measures of related abilities and examined for its reliability. The results indicate that a short version (8 minutes) of the TPT has good explanatory support from a factor analysis model, acceptable internal reliability (α = .69, ωt = .70), good test–retest reliability (r = .79) and substantial correlations with self-reported general musical sophistication (ρ = .63) and pitch discrimination (ρ = .56), as well as somewhat lower correlations with duration discrimination (ρ = .27) and musical instrument discrimination abilities (ρ = .33). Overall, the TPT represents a robust tool to measure an individual’s timbre perception ability. Furthermore, the use of sliders to perform a reproduction task has been shown to be an effective approach in threshold testing. The current version of the TPT is openly available for research purposes.

    The Musicality of Non-Musicians: An Index for Assessing Musical Sophistication in the General Population

    Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of ‘musical sophistication’, which can be used to describe the multi-faceted nature of musical expertise. Next, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI), to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Third, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication. Results are discussed in terms of theoretical accounts of implicit and statistical music learning and with regard to social conditions of sophisticated musical engagement.

    The encoding of individual identity in dolphin signature whistles: how much information is needed?

    Bottlenose dolphins (Tursiops truncatus) produce many vocalisations, including whistles that are unique to the individual producing them. Such “signature whistles” play a role in individual recognition and maintaining group integrity. Previous work has shown that humans can successfully group the spectrographic representations of signature whistles according to the individual dolphins that produced them. However, attempts at using mathematical algorithms to perform a similar task have been less successful. A greater understanding of the encoding of identity information in signature whistles is important for assessing similarity of whistles and thus social influences on the development of these learned calls. We re-examined 400 signature whistles from 20 individual dolphins used in a previous study, and tested the performance of new mathematical algorithms. We compared the measure used in the original study (correlation matrix of evenly sampled frequency measurements) to one used in several previous studies (similarity matrix of time-warped whistles), and to a new algorithm based on the Parsons code, used in music retrieval databases. The Parsons code records the direction of frequency change at each time step, and is effective at capturing human perception of music. We analysed similarity matrices from each of these three techniques, as well as a random control, by unsupervised clustering using three separate techniques: k-means clustering, hierarchical clustering, and an adaptive resonance theory neural network. For each of the three clustering techniques, a seven-level Parsons algorithm provided better clustering than the correlation and dynamic time warping algorithms, and was closer to the near-perfect visual categorisations of human judges. Thus, the Parsons code captures much of the individual identity information present in signature whistles, and may prove useful in studies requiring quantification of whistle similarity.
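    The standard three-symbol Parsons code described above can be sketched as follows; the study's seven-level variant additionally grades the magnitude of each frequency step, and the whistle contour values below are hypothetical.

    ```python
    def parsons(freqs):
        """Three-symbol Parsons code: U = up, D = down, R = repeat.
        Records the direction of frequency change at each time step."""
        code = []
        for prev, cur in zip(freqs, freqs[1:]):
            if cur > prev:
                code.append("U")
            elif cur < prev:
                code.append("D")
            else:
                code.append("R")
        return "".join(code)

    # Frequency samples (Hz) from a hypothetical whistle contour.
    contour = [800, 950, 950, 1200, 1100, 900, 900]
    print(parsons(contour))  # → "URUDDR"
    ```

    Because only contour direction is kept, two whistles with different absolute frequencies but the same shape receive identical codes, which is what makes the representation useful for comparing individually distinctive contours.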

    The Perception of Accents in Pop Music Melodies

    We examine several theoretical and empirical approaches to melodic accent perception and propose a heuristic classification system of formalized accent rules. To evaluate the validity of the accent rules, a listening experiment was carried out. Twenty-nine participants rated every note of 15 pop music melodies, presented as audio excerpts and as monophonic MIDI renditions, for its perceived accent strength on a rating scale. The ratings were compared to accent predictions from 38 formalized, mainly binary accent rules. Two statistical procedures (logistic regression and regression trees) were subsequently used in a data mining approach to determine a model consisting of an optimally weighted combination of a smaller rule subset to predict the accent ratings of the participants. Model evaluation on a set of unseen melodies indicates a very good predictive performance of both statistical models for the participants' ratings obtained for the MIDI renditions. The two models derived for the audio data perform less well but still at an acceptable level. An analysis of the model components shows that Gestalt rules covering several different aspects of a monophonic melody are of importance for human accent perception. Among the aspects covered by both models are pitch interval structure, pitch contour, note duration, metrical position, as well as the position of a note within a phrase. In contrast, both audio models incorporate mainly rules relating to metre and syncopations. Potential applications of the presented accent models in automatic music analysis as well as options for future research following this computational approach are discussed.
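    The weighted combination of binary accent rules can be illustrated with a minimal logistic-regression-style sketch; the three rules, their weights, and the bias below are hypothetical placeholders, not the rule subset or fitted coefficients from the study.

    ```python
    import math

    def accent_probability(rule_values, weights, bias):
        """Logistic combination of binary accent-rule outputs for one note.
        Each rule fires (1) or not (0); the weighted sum is squashed to (0, 1)."""
        z = bias + sum(w * v for w, v in zip(weights, rule_values))
        return 1.0 / (1.0 + math.exp(-z))

    # Three hypothetical rules: long duration, phrase boundary, metrical downbeat.
    weights = [1.2, 0.8, 1.5]
    bias = -2.0

    # A note on a downbeat with a long duration is predicted as more accented
    # than a note for which no rule fires.
    accented = accent_probability([1, 0, 1], weights, bias)
    unaccented = accent_probability([0, 0, 0], weights, bias)
    assert accented > unaccented
    ```

    Fitting such weights against the participants' ratings, and pruning rules that contribute little, is the role the data mining procedures play in the study.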

    Melodic contour and mid-level global features applied to the analysis of flamenco cantes

    This work focuses on the topic of melodic characterization and similarity in a specific musical repertoire: a cappella flamenco singing, more specifically the debla and martinete styles. We propose the combination of manual and automatic description. First, we use a state-of-the-art automatic transcription method to account for general melodic similarity from music recordings. Second, we define a specific set of representative mid-level melodic features, which are manually labelled by flamenco experts. Both approaches are then contrasted and combined into a global similarity measure. This similarity measure is assessed by inspecting the clusters obtained through phylogenetic algorithms and by relating similarity to categorization in terms of style. Finally, we discuss the advantage of combining automatic and expert annotations as well as the need to include repertoire-specific descriptions for meaningful melodic characterization in traditional music collections. This article has been funded by the Andalusian Government under a Proyecto de Excelencia research project with reference number P12-TIC-1362.