
    The GTZAN dataset: Its contents, its faults, their effects on evaluation, and its future use

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN, but to use it with consideration of its contents. (Comment: 29 pages, 7 figures, 6 tables, 128 references.)
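
    The paper's lesson, using GTZAN with consideration of its faults, can be operationalized by excluding flagged excerpts at evaluation time. Below is a minimal Python sketch of such fault-aware scoring; the file identifiers are hypothetical placeholders, not entries from the paper's actual fault catalog.

```python
# Fault-aware evaluation sketch. KNOWN_REPLICAS / KNOWN_MISLABELED are
# hypothetical placeholders; the paper's catalog lists the real excerpts.
from sklearn.metrics import accuracy_score

KNOWN_REPLICAS = {"reggae.00086.wav", "reggae.00087.wav"}   # placeholders
KNOWN_MISLABELED = {"metal.00058.wav"}                      # placeholder

def fault_filtered_accuracy(file_ids, y_true, y_pred):
    """Accuracy over the excerpts not flagged as faulty."""
    faulty = KNOWN_REPLICAS | KNOWN_MISLABELED
    kept = [i for i, f in enumerate(file_ids) if f not in faulty]
    return accuracy_score([y_true[i] for i in kept],
                          [y_pred[i] for i in kept])
```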

    Playing Technique Recognition by Joint Time–Frequency Scattering

    Playing techniques are important expressive elements in music signals. In this paper, we propose a recognition system based on the joint time–frequency scattering transform (jTFST) for pitch evolution-based playing techniques (PETs), a group of playing techniques with monotonic pitch changes over time. The jTFST represents spectro-temporal patterns in the time–frequency domain, capturing discriminative information of PETs. As a case study, we analyse three commonly used PETs of the Chinese bamboo flute: acciaccatura, portamento, and glissando, and encode their characteristics using the jTFST. To verify the proposed approach, we create a new dataset, the CBF-petsDB, containing PETs played in isolation as well as in the context of whole pieces performed and annotated by professional players. Feeding the jTFST to a machine learning classifier, we obtain F-measures of 71% for acciaccatura, 59% for portamento, and 83% for glissando detection, and provide explanatory visualisations of scattering coefficients for each technique.
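
    As an illustration of the final stage described above, the sketch below trains a generic classifier on precomputed feature vectors and reports per-class F-measures with scikit-learn. The random arrays stand in for real jTFST coefficients, whose computation is not shown.

```python
# Classification stage only: the random arrays below stand in for
# precomputed jTFST feature vectors (one per analysis frame).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 128))      # stand-in jTFST coefficients
y = rng.integers(0, 4, size=600)     # 0 = none, 1-3 = the three PETs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

# One F-measure per class, as the abstract reports per technique.
print(f1_score(y_te, clf.predict(X_te), average=None))
```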

    Proceedings of the 6th International Workshop on Folk Music Analysis, 15–17 June 2016

    The Folk Music Analysis Workshop brings together computational music analysis and ethnomusicology. Both symbolic and audio representations of music are considered, with a broad range of scientific approaches being applied (signal processing, graph theory, deep learning). The workshop features talks from international researchers in areas such as Indian classical music, Iranian singing, Ottoman-Turkish Makam music scores, Flamenco singing, Irish traditional music, Georgian traditional music, and Dutch folk songs. Invited guest speakers were Anja Volk (Utrecht University) and Peter Browne (Technological University Dublin).

    Deep Learning Techniques for Music Generation -- A Survey

    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective: What musical content is to be generated (e.g., melody, polyphony, accompaniment, counterpoint)? For what destination and use: to be performed by humans (a musical score) or by a machine (an audio file)?
    - Representation: What concepts are to be manipulated (e.g., waveform, spectrogram, note, chord, meter, beat)? What format is to be used (e.g., MIDI, piano roll, text)? How will the representation be encoded (e.g., scalar, one-hot, many-hot)?
    - Architecture: What type(s) of deep neural network are to be used (e.g., feedforward network, recurrent network, autoencoder, generative adversarial network)?
    - Challenge: What are the limitations and open challenges (e.g., variability, interactivity, creativity)?
    - Strategy: How do we model and control the process of generation (e.g., single-step feedforward, iterative feedforward, sampling, input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge, and strategy. The last section includes some discussion and some prospects. (Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201)
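
    To make the Representation dimension concrete, here is a minimal Python sketch of two encodings named in the survey, a one-hot encoding of a melody and a binary piano roll. The note values are an arbitrary example.

```python
# Two representations named in the survey: one-hot melody encoding and a
# binary piano roll. Note values are an arbitrary C-major fragment.
import numpy as np

melody = [60, 62, 64, 65, 67]   # MIDI note numbers
n_pitches = 128

# One-hot: a 128-dimensional indicator vector per time step.
one_hot = np.zeros((len(melody), n_pitches), dtype=np.int8)
one_hot[np.arange(len(melody)), melody] = 1

# Piano roll: pitch on one axis, time on the other.
piano_roll = one_hot.T
print(one_hot.shape, piano_roll.shape)   # (5, 128) (128, 5)
```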

    Measuring musics: Notes on modes, motifs, and melodies

    This dissertation develops computational methods to measure properties of musical traditions, with the aim of comparing them. It analyzes sheet music from a range of traditions, to which end two corpora of Western plainchant are introduced (the Cantus Corpus and the GregoBase Corpus). These corpora are used to confirm the melodic arch hypothesis, explore regularity in antiphon-differentia connections, and compose artificial chant using a recurrent neural language model. The central chant study proposes a distributional approach to mode classification that can determine mode fairly accurately even when all pitch information has been discarded, although this seems to work best when the chants are segmented into ‘natural units’ corresponding to textual units such as syllables and words. Breaking music down into smaller units, or motifs, is the second theme of the dissertation. It is shown how rhythmic motifs can be used to effectively visualize rhythmic data, from music and animal vocalizations, in a rhythm triangle, an idea that is also extended to melodic data. The third theme concerns the shapes of melodies. The dissertation introduces Cosine Contours, a continuous representation for melodic contour, motivated by the observation that the principal components of melodic datasets approximate cosines. A second study on contour finds no evidence that contours cluster into distinct types, suggesting that, contrary to several previous studies, contour should indeed be considered a continuous phenomenon. The dissertation ends with a case study that applies a formal analysis to the ‘formal’ music of Arvo Pärt, reconstructing almost the entire score of ‘Summa’ using formal procedures.
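
    The idea behind Cosine Contours can be illustrated with a discrete cosine transform: projecting a pitch sequence onto cosines and keeping only the leading coefficients yields a coarse, continuous contour description. The sketch below uses SciPy; the normalisation and sampling choices are assumptions, not the dissertation's exact procedure.

```python
# Cosine-basis contour sketch: DCT the pitch sequence, keep the first few
# coefficients, and reconstruct a smoothed contour from them alone.
import numpy as np
from scipy.fft import dct, idct

pitches = np.array([60, 62, 64, 65, 67, 65, 64, 62, 60], dtype=float)

coeffs = dct(pitches, norm="ortho")   # projection onto cosines
keep = 3                              # coarse contour: leading coefficients
truncated = np.zeros_like(coeffs)
truncated[:keep] = coeffs[:keep]

smoothed = idct(truncated, norm="ortho")
print(np.round(smoothed, 1))          # arch-like approximation of the melody
```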

    Adaptive Time–Frequency Scattering for Periodic Modulation Recognition in Music Signals

    Vibratos, tremolos, trills, and flutter-tongue are techniques frequently found in vocal and instrumental music. A common feature of these techniques is the periodic modulation in the time–frequency domain. We propose a representation based on time–frequency scattering to model the inter-class variability for fine discrimination of these periodic modulations. Time–frequency scattering is an instance of the scattering transform, an approach for building invariant, stable, and informative signal representations. The proposed representation is calculated around the wavelet subband of maximal acoustic energy, rather than over all the wavelet bands. To demonstrate the feasibility of this approach, we build a system that computes the representation as input to a machine learning classifier. Whereas previously published datasets for playing technique analysis focus primarily on techniques recorded in isolation, for ecological validity, we create a new dataset to evaluate the system. The dataset, named CBF-periDB, contains full-length expert performances on the Chinese bamboo flute that have been thoroughly annotated by the players themselves. We report F-measures of 99% for flutter-tongue, 82% for trill, 69% for vibrato, and 51% for tremolo detection, and provide explanatory visualisations of scattering coefficients for each of these techniques.
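
    The subband-selection step can be sketched as follows: decompose the signal with a wavelet-like filter bank and keep the band of maximal energy. In the Python sketch below, a constant-Q transform from librosa stands in for the paper's wavelet filter bank, and the scattering computation itself is omitted.

```python
# Subband selection: pick the band of maximal energy from a constant-Q
# decomposition (a stand-in for the paper's wavelet filter bank).
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))   # bundled demo recording
C = np.abs(librosa.cqt(y, sr=sr))             # shape: (n_bins, n_frames)

band_energy = (C ** 2).sum(axis=1)            # energy per subband
k_max = int(np.argmax(band_energy))

# Scattering would then be computed around band k_max, not all bands.
print("max-energy subband:", k_max)
```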

    CICHMKG: a large-scale and comprehensive Chinese intangible cultural heritage multimodal knowledge graph

    Intangible Cultural Heritage (ICH) bears witness to human creativity and wisdom across long histories, and is composed of a variety of immaterial manifestations. The rapid development of digital technologies has accelerated the recording of ICH, generating a large volume of heterogeneous but fragmented data. To address this, existing studies mainly adopt knowledge graphs (KGs), which provide rich knowledge representation. However, most KGs are text-based and text-derived, and cannot supply related images or support downstream multimodal tasks; this also makes it harder for the public to form a visual impression of ICH and comprehend it fully, especially without prior ICH knowledge. Hence, taking the Chinese national-level ICH list as an example, we propose to construct a large-scale and comprehensive Multimodal Knowledge Graph (CICHMKG) combining text and image entities from multiple data sources, and we give a practical construction framework. Additionally, to select representative images for ICH entities, we propose a method composed of a denoising algorithm (CNIFA) and a series of criteria, utilizing global and local visual features of images and textual features of captions. Extensive empirical experiments demonstrate its effectiveness. Lastly, we construct the CICHMKG, consisting of 1,774,005 triples, and visualize it to facilitate interaction and help the public engage deeply with ICH.
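
    A minimal sketch of the kind of multimodal triples CICHMKG stores, an ICH entity linked to textual attributes and a representative image, is given below using rdflib. The namespace, property names, and entity are illustrative assumptions, not taken from the paper.

```python
# Illustrative multimodal triples: an ICH entity with textual attributes
# and a representative image. All names/URIs here are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef

ICH = Namespace("http://example.org/cichmkg/")
g = Graph()

kunqu = ICH["KunquOpera"]
g.add((kunqu, ICH["category"], Literal("Traditional Drama")))
g.add((kunqu, ICH["description"], Literal("One of the oldest extant forms of Chinese opera.")))
g.add((kunqu, ICH["representativeImage"], URIRef("http://example.org/img/kunqu_001.jpg")))

print(len(g), "triples")   # -> 3 triples
```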

    Computational modeling of improvisation in Turkish folk music using Variable-Length Markov Models

    The thesis describes a new database of uzun havas, a non-metered structured improvisation form in Turkish folk music, and a system that uses Variable-Length Markov Models (VLMMs) to predict the melody in the uzun hava form. The database consists of 77 songs encompassing 10,849 notes, and it is used to train multiple viewpoints, where each event in a musical sequence is represented by parallel descriptors such as Durations and Notes. The thesis also introduces pitch-related viewpoints specifically aimed at modeling the unique melodic properties of makam music. The predictive performance of the system is quantitatively evaluated by an entropy-based scheme. In the experiments, results from pitch-related viewpoints mapping the 12-tone scale of Western classical theory and the 17-tone scale of Turkish folk music are compared. It is shown that VLMMs are highly predictive of the note progressions in transcriptions of uzun havas. This suggests that VLMMs may be applied to makam-based and non-metered musical forms, in addition to Western musical styles. To the best of our knowledge, this work presents the first symbolic, machine-readable database and the first application of computational modeling in Turkish folk music. (M.S. thesis. Committee Chair: Parag Chordia; Committee Members: Gil Weinberg, Jason Freeman.)
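
    The core prediction mechanism of a VLMM can be sketched in a few lines: count continuations for contexts of every length up to a maximum, then predict by backing off from the longest matching suffix. The toy note sequence below is illustrative; the thesis's multiple viewpoints and 17-tone pitch space are not modeled.

```python
# Variable-length Markov prediction by longest-suffix matching: count
# continuations for contexts up to max_order, then back off from the
# longest observed context. Toy data; viewpoints are not modeled.
from collections import Counter, defaultdict

def train(seq, max_order=3):
    counts = defaultdict(Counter)
    for i in range(len(seq)):
        for k in range(1, max_order + 1):
            if i - k >= 0:
                counts[tuple(seq[i - k:i])][seq[i]] += 1
    return counts

def predict(counts, context, max_order=3):
    for k in range(min(max_order, len(context)), 0, -1):
        ctx = tuple(context[-k:])
        if ctx in counts:                      # longest matching suffix wins
            return counts[ctx].most_common(1)[0][0]
    return None

notes = ["D", "E", "F", "G", "F", "E", "D", "E", "F", "G"]
model = train(notes)
print(predict(model, ["E", "F"]))   # -> 'G'
```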