5 research outputs found

    The GTZAN dataset: Its contents, its faults, their effects on evaluation, and its future use

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN, but to use it with consideration of its contents. Comment: 29 pages, 7 figures, 6 tables, 128 references
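
    A minimal sketch (not from the article) of how an evaluation script might account for GTZAN's catalogued faults by scoring only unflagged excerpts. The flagged identifiers and the helper name below are hypothetical placeholders; the actual catalog of repetitions, mislabelings, and distortions is the one provided in the article.

    ```python
    # Hypothetical helper: score predictions only on GTZAN excerpts that are
    # not flagged in a fault catalog. FLAGGED holds placeholder identifiers;
    # it is not the article's actual fault list.
    from sklearn.metrics import accuracy_score

    FLAGGED = {"reggae.00086", "hiphop.00045"}  # placeholder values only

    def filtered_accuracy(file_ids, y_true, y_pred):
        """Compute accuracy over the excerpts not flagged as faulty."""
        keep = [i for i, f in enumerate(file_ids) if f not in FLAGGED]
        return accuracy_score([y_true[i] for i in keep],
                              [y_pred[i] for i in keep])
    ```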

    Recognizing Patterns of Music Signals to Songs Classification Using Modified AIS-Based Classifier

    Human capabilities of recognizing different types of music and grouping them into genre categories are so remarkable that music experts can perform such classification using their hearing and logical judgment alone. For decades, the scientific community has been working to automate this human process of recognizing the genre of songs. These efforts typically imitate the human approach by considering every essential component of a song, from the artist's voice and the melody through to the types of instruments used. As a result, various approaches and mechanisms have been introduced and developed to automate the classification process. The results of these studies have so far been remarkable, yet can still be improved. The aim of this research is to investigate the Artificial Immune System (AIS) domain, focusing on a modified AIS-based classifier, in particular its censoring and monitoring modules, to solve this problem. The stages of music recognition are emphasized, with the feature extraction, feature selection, and feature classification processes explained. A comparison of performance between the proposed classifier and the WEKA application is discussed.
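
    A minimal sketch of the three stages named above (feature extraction, feature selection, and feature classification), assuming librosa and scikit-learn. The k-nearest-neighbour classifier is a generic stand-in, not the modified AIS-based classifier proposed in the paper, and the feature choices and parameters are illustrative.

    ```python
    import numpy as np
    import librosa
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neighbors import KNeighborsClassifier

    def extract_features(path):
        """Feature extraction: mean MFCCs over a 30-second excerpt."""
        y, sr = librosa.load(path, duration=30.0)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    def train(paths, genres, k_features=10):
        """Feature selection followed by feature classification."""
        X = np.vstack([extract_features(p) for p in paths])
        selector = SelectKBest(f_classif, k=k_features).fit(X, genres)
        clf = KNeighborsClassifier(n_neighbors=5)
        clf.fit(selector.transform(X), genres)
        return selector, clf
    ```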

    A Bio-Inspired Music Genre Classification Framework using Modified AIS-Based Classifier

    For decades, the scientific community has been working to automate the human process of recognizing different types of music from their constituent elements, for example the instruments used. These efforts imitate the human approach by considering every essential component of a song, from the artist's voice and the melody through to the types of instruments used. Various approaches and mechanisms have since been introduced and developed to automate the classification process. The results of these studies have so far been remarkable, yet can still be improved. The aim of this research is to investigate the Artificial Immune System (AIS) domain, focusing on a modified AIS-based classifier, in particular its censoring and monitoring modules, to solve this problem. The stages of music recognition are emphasized, with the feature extraction, feature selection, and feature classification processes explained. A comparison of performance between the proposed classifier and the WEKA application is discussed, with classification accuracies increased by roughly 20 to 30 percent in this study.
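
    A generic negative-selection sketch of the censoring and monitoring idea referred to above: censoring generates detectors that match no "self" (training) sample, and monitoring flags any vector that a detector matches. This is a textbook-style illustration under assumed parameters, not the modified AIS-based classifier described in the paper.

    ```python
    import numpy as np

    def censor(self_samples, n_detectors=100, threshold=1.0, seed=0):
        """Censoring: keep only candidate detectors matching no self sample."""
        rng = np.random.default_rng(seed)
        low, high = self_samples.min(axis=0), self_samples.max(axis=0)
        detectors = []
        while len(detectors) < n_detectors:
            candidate = rng.uniform(low, high)
            if np.linalg.norm(self_samples - candidate, axis=1).min() > threshold:
                detectors.append(candidate)
        return np.array(detectors)

    def monitor(detectors, sample, threshold=1.0):
        """Monitoring: a sample matched by any detector is flagged non-self."""
        return bool((np.linalg.norm(detectors - sample, axis=1) < threshold).any())
    ```

    In a genre-classification setting, one detector set per class could be maintained and monitoring results combined across classes, though the paper's modified classifier defines its own censoring and monitoring behaviour.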

    A Survey of Evaluation in Music Genre Recognition


    On Efficient Music Genre Classification
