
    Music information retrieval: conceptual framework, annotation and user behaviour

    Understanding music is a process both based on and influenced by the knowledge and experience of the listener. Although content-based music retrieval has been given increasing attention in recent years, much of the research still focuses on bottom-up retrieval techniques. In order to make a music information retrieval system appealing and useful to the user, more effort should be spent on constructing systems that both operate directly on the encoding of the physical energy of music and are flexible with respect to users’ experiences. This thesis is based on a user-centred approach, taking into account the mutual relationship between music as an acoustic phenomenon and as an expressive phenomenon. The issues it addresses are: the lack of a conceptual framework, the shortage of annotated musical audio databases, the lack of understanding of the behaviour of system users, and the shortage of user-dependent knowledge with respect to high-level features of music. In the theoretical part of this thesis, a conceptual framework for content-based music information retrieval is defined. The proposed conceptual framework, the first of its kind, is conceived as a coordinating structure between the automatic description of low-level music content and the description of high-level content by the system users. A general framework for the manual annotation of musical audio is outlined as well. A new methodology for the manual annotation of musical audio is introduced and tested in case studies. The results from these studies show that manually annotated music files can be of great help in the development of accurate analysis tools for music information retrieval. Empirical investigation is the foundation on which the aforementioned theoretical framework is built. Two elaborate studies involving different experimental issues are presented. In the first study, elements of signification related to spontaneous user behaviour are clarified. In the second study, a global profile of music information retrieval system users is given and their description of high-level content is discussed. This study uncovered relationships between the users’ demographic background and their perception of expressive and structural features of music. Such a multi-level approach is exceptional in that it included a large sample of the population of real users of interactive music systems. Tests have shown that the findings of this study are representative of the targeted population. Finally, the multi-purpose material provided by the theoretical background and the results from the empirical investigations are put into practice in three music information retrieval applications: a prototype of a user interface based on a taxonomy, an annotated database of experimental findings, and a prototype semantic user recommender system. Results are presented and discussed for all methods used. They show that knowledge about users, if reliably generated, can significantly improve the quality of music content analysis. This thesis demonstrates that an informed knowledge of human approaches to music information retrieval provides valuable insights, which may be of particular assistance in the development of user-friendly, content-based access to digital music collections.

    Methodological considerations concerning manual annotation of musical audio in function of algorithm development

    In research on musical audio-mining, annotated music databases are needed which allow the development of computational tools that extract from the musical audio stream the kind of high-level content that users can deal with in Music Information Retrieval (MIR) contexts. The notion of musical content, and therefore the notion of annotation, is ill-defined, however, both in the syntactic and semantic sense. As a consequence, annotation has been approached from a variety of perspectives (but mainly linguistic-symbolic oriented), and a general methodology is lacking. This paper is a step towards the definition of a general framework for manual annotation of musical audio in function of a computational approach to musical audio-mining that is based on algorithms that learn from annotated data.
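
    The annotation format itself is left open by the paper; purely as an illustration, a minimal time-stamped annotation record of the kind that learning algorithms typically consume might look as follows (the field names are assumptions for this sketch, not the paper's schema).

        from dataclasses import dataclass

        @dataclass
        class Annotation:
            start_s: float   # segment start time in seconds
            end_s: float     # segment end time in seconds
            layer: str       # annotation layer, e.g. "structure" or "emotion"
            label: str       # the high-level content assigned by the annotator

        # A toy annotation of one audio file, on two layers.
        example = [
            Annotation(0.0, 12.4, "structure", "intro"),
            Annotation(12.4, 45.0, "structure", "verse"),
            Annotation(0.0, 45.0, "emotion", "calm"),
        ]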

    Towards the Use of Similarity Distances to Music Genre Classification: a Comparative Study

    Music genre classification is a challenging research area, with open questions remaining about the classification approach, the representation of music pieces, distances between and within genres, and so on. In this paper an investigation on the classification of generated music pieces is performed. The underlying idea is that, after grouping closely related known pieces into sets (or clusters) and automatically generating a new song that is somehow "inspired" by each set, the new song should be more likely to be classified as belonging to the set that inspired it, using the same distance that was used to separate the clusters. Different representations of music pieces and distances among pieces are used; the results obtained are promising and indicate the appropriateness of the approach, even in an area as subjective as music genre classification. This work was supported by the IT900-16 Research Team of the Basque Government.
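
    The paper's exact representations and distance measures are not reproduced here; the sketch below (Python) only illustrates the general idea of assigning a generated piece to the nearest of several clusters of known pieces, assuming each piece is already summarised as a feature vector. The vectors and the Euclidean distance are placeholders.

        # Hypothetical sketch: known pieces are assumed already grouped into clusters
        # (represented here directly by their centroids); a newly generated piece is
        # assigned to the cluster whose centroid is closest under Euclidean distance.
        import numpy as np

        def nearest_cluster(piece_vec, centroids):
            """Index of the closest centroid under Euclidean distance."""
            dists = np.linalg.norm(centroids - piece_vec, axis=1)
            return int(np.argmin(dists))

        # Toy example: three clusters of known pieces, 8-dimensional features.
        rng = np.random.default_rng(0)
        centroids = rng.normal(size=(3, 8))
        generated = centroids[1] + 0.1 * rng.normal(size=8)  # piece "inspired" by cluster 1

        assert nearest_cluster(generated, centroids) == 1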

    Music Information Retrieval for Irish Traditional Music: Automatic Analysis of Harmonic, Rhythmic, and Melodic Features for Efficient Key-Invariant Tune Recognition

    Music making and listening practices increasingly rely on technology, and, as a consequence, techniques developed in music information retrieval (MIR) research are more readily available to end users, in particular via online tools and smartphone apps. However, the majority of MIR research focuses on Western pop and classical music, and thus does not address specificities of other musical idioms. Irish traditional music (ITM) is popular across the globe, with regular sessions organised on all continents. ITM is a distinctive musical idiom, particularly in terms of heterophony and modality, and these characteristics can constitute challenges for existing MIR algorithms. The benefits of developing MIR methods specifically tailored to ITM are evidenced by Tunepal, a query-by-playing tool that has become popular among ITM practitioners since its release in 2009. As of today, Tunepal is the state of the art for tune recognition in ITM. The research in this thesis addresses existing limitations of Tunepal. The main goal is to find solutions to add key-invariance to the tune recognition system, an important feature that is currently missing in Tunepal. Techniques from digital signal processing and machine learning are used and adapted to the specificities of ITM to extract harmonic and temporal features, respectively with improvements on existing key detection methods and a novel method for rhythm classification. These features are then used to develop a key-invariant tune recognition system that is computationally efficient while maintaining retrieval accuracy at a level comparable to that of the existing system.
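
    As a rough illustration of what key-invariance can mean for symbolic tune matching (an assumption made for this sketch, not necessarily the method developed in the thesis), comparing interval sequences rather than absolute pitches makes a query match transposed versions of the same tune.

        # Hedged sketch: represent a tune by its successive pitch intervals so that
        # transpositions become identical, then compare queries with edit distance.
        def intervals(pitches):
            """Successive intervals of a pitch sequence (e.g. MIDI note numbers)."""
            return [b - a for a, b in zip(pitches, pitches[1:])]

        def edit_distance(a, b):
            """Plain Levenshtein distance between two interval sequences."""
            prev = list(range(len(b) + 1))
            for i, x in enumerate(a, 1):
                curr = [i]
                for j, y in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,              # deletion
                                    curr[j - 1] + 1,          # insertion
                                    prev[j - 1] + (x != y)))  # substitution
                prev = curr
            return prev[-1]

        query = [62, 64, 66, 67, 69]                 # D E F# G A
        same_tune_transposed = [64, 66, 68, 69, 71]  # the same contour a tone higher
        print(edit_distance(intervals(query), intervals(same_tune_transposed)))  # 0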

    Proceedings of the 6th International Workshop on Folk Music Analysis, 15-17 June, 2016

    The Folk Music Analysis Workshop brings together computational music analysis and ethnomusicology. Both symbolic and audio representations of music are considered, with a broad range of scientific approaches being applied (signal processing, graph theory, deep learning). The workshop features talks from international researchers in areas such as Indian classical music, Iranian singing, Ottoman-Turkish Makam music scores, Flamenco singing, Irish traditional music, Georgian traditional music and Dutch folk songs. Invited guest speakers were Anja Volk (Utrecht University) and Peter Browne (Technological University Dublin).

    Evaluation and combination of pitch estimation methods for melody extraction in symphonic classical music

    The extraction of pitch information is arguably one of the most important tasks in automatic music description systems. However, previous research and evaluation datasets dealing with pitch estimation focused on relatively limited kinds of musical data. This work aims to broaden this scope by addressing symphonic western classical music recordings, focusing on pitch estimation for melody extraction. This material is characterised by a high number of overlapping sources, and by the fact that the melody may be played by different instrumental sections, often alternating within an excerpt. We evaluate the performance of eleven state-of-the-art pitch salience functions, multipitch estimation and melody extraction algorithms when determining the sequence of pitches corresponding to the main melody in a varied set of pieces. An important contribution of the present study is the proposed evaluation framework, including the annotation methodology, the generated dataset and the evaluation metrics. The results show that the assumptions made by certain methods hold better than others when dealing with this type of music signal, leading to better performance. Additionally, we propose a simple method for combining the output of several algorithms, with promising results.
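
    The combination method itself is only summarised above; the sketch below shows one plausible frame-wise strategy (an assumption, not the paper's algorithm): keep the median of the per-algorithm pitch estimates in frames where enough estimates agree within a tolerance in cents, and mark the frame unvoiced otherwise.

        import numpy as np

        def combine_frames(estimates_hz, tolerance_cents=50, min_agree=2):
            """estimates_hz: (n_algorithms, n_frames) array of f0 in Hz; 0 = unvoiced."""
            combined = np.zeros(estimates_hz.shape[1])
            for t in range(estimates_hz.shape[1]):
                voiced = estimates_hz[estimates_hz[:, t] > 0, t]
                if voiced.size == 0:
                    continue                                     # no algorithm found a pitch
                median = np.median(voiced)
                cents = 1200 * np.abs(np.log2(voiced / median))  # deviation from the median
                agreeing = voiced[cents <= tolerance_cents]
                if agreeing.size >= min_agree:
                    combined[t] = np.median(agreeing)
            return combined

        # Three algorithms, four frames; the third frame contains an octave error.
        est = np.array([[220.0, 0.0, 440.0, 330.0],
                        [221.0, 0.0, 441.0, 329.0],
                        [220.5, 0.0, 220.0, 331.0]])
        print(combine_frames(est))   # approx. [220.5, 0.0, 440.5, 330.0]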

    From heuristics-based to data-driven audio melody extraction

    The identification of the melody from a music recording is a relatively easy task for humans, but very challenging for computational systems. This task is known as "audio melody extraction", more formally defined as the automatic estimation of the pitch sequence of the melody directly from the audio signal of a polyphonic music recording. This thesis investigates the benefits of exploiting knowledge automatically derived from data for audio melody extraction, by combining digital signal processing and machine learning methods. We extend the scope of melody extraction research by working with a varied dataset and multiple definitions of melody. We first present an overview of the state of the art, and perform an evaluation focused on a novel symphonic music dataset. We then propose melody extraction methods based on a source-filter model and pitch contour characterisation, and evaluate them on a wide range of music genres. Finally, we explore novel timbre, tonal and spatial features for contour characterisation, and propose a method for estimating multiple melodic lines. The combination of supervised and unsupervised approaches leads to advancements in melody extraction and shows a promising path for future research and applications.
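
    As a loose illustration of data-driven pitch contour characterisation (the features, toy data and classifier below are assumptions, not the thesis's actual setup), each extracted contour can be summarised by a few descriptors and scored by a trained model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Each row summarises one pitch contour:
        # [duration in seconds, mean salience, pitch standard deviation in semitones]
        contours = np.array([[1.2, 0.9, 1.5],   # long and salient: plausibly melody
                             [0.1, 0.2, 0.1],   # short and weak: plausibly accompaniment
                             [0.8, 0.7, 2.0],
                             [0.2, 0.3, 0.2]])
        is_melody = np.array([1, 0, 1, 0])      # toy annotations

        clf = LogisticRegression().fit(contours, is_melody)
        new_contour = np.array([[1.0, 0.8, 1.0]])
        print(clf.predict_proba(new_contour)[0, 1])  # probability that the contour is melody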

    Effort in gestural interactions with imaginary objects in Hindustani Dhrupad vocal music

    Physical effort has often been regarded as a key factor of expressivity in music performance. Nevertheless, systematic experimental approaches to the subject have been rare. In North Indian classical (Hindustani) vocal music, singers often engage with melodic ideas during improvisation by manipulating intangible, imaginary objects with their hands, such as by stretching, pulling, pushing or throwing them. This observation suggests that some patterns of change in acoustic features allude to interactions that real objects, through their physical properties, can afford. The present study explores the relationships between movement and sound by accounting for the physical effort that such interactions require in the Dhrupad genre of Hindustani vocal improvisation. The work follows a mixed methodological approach, combining qualitative and quantitative methods to analyse interviews, audio-visual material and movement data. Findings indicate that, despite the flexibility in the way a Dhrupad vocalist might use his or her hands while singing, there is a certain degree of consistency with which performers associate effort levels with melody and types of gestural interactions with imaginary objects. However, different schemes of cross-modal associations are revealed for the vocalists analysed, which depend on the pitch-space organisation of each particular melodic mode (rāga), the mechanical requirements of voice production, the macro-structure of the ālāp improvisation and morphological cross-domain analogies. Results further suggest that a good part of the variance in both physical effort and gesture type can be explained by a small set of sound and movement features. Based on the findings, I argue that gesturing in Dhrupad singing is guided by the know-how humans have in interacting with and exerting effort on real objects in the environment, the movement–sound relationships transmitted from teacher to student in the context of oral music training, and the mechanical demands of vocalisation.
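
    Purely to illustrate what "variance explained by a small set of features" means (the descriptors and data below are invented for the example, not taken from the study), a plain regression sketch follows.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 40
        # Hypothetical per-phrase descriptors: pitch height (semitones above the tonic),
        # loudness (dB) and hand speed (m/s).
        X = np.column_stack([rng.uniform(0, 24, n),
                             rng.uniform(50, 90, n),
                             rng.uniform(0.0, 2.0, n)])
        # Synthetic effort ratings, constructed only for the sake of the example.
        effort = 0.1 * X[:, 0] + 0.05 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.5, n)

        model = LinearRegression().fit(X, effort)
        print(model.score(X, effort))   # R^2: share of variance explained by the features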