
    A Comprehensive Review on Audio based Musical Instrument Recognition: Human-Machine Interaction towards Industry 4.0

    Over the last two decades, the application of machine technology has shifted from industrial to residential use. Advances in hardware and software have pushed machine technology toward its most demanding application, human-machine interaction, a form of multimodal communication. Multimodal communication refers to the integration of various modalities of information such as speech, image, music, gesture, and facial expression. Music is a non-verbal form of communication that humans often use to express their state of mind. Music Information Retrieval (MIR) has therefore become a booming field of research, attracting interest from the academic community, the music industry, and a vast population of multimedia users. The central problem in MIR is accessing and retrieving a specific type of music from extensive music data, and the most fundamental sub-problem is music classification. The essential MIR tasks are artist identification, genre classification, mood classification, music annotation, and instrument recognition. Among these, instrument recognition is a vital sub-task for several reasons, including music information retrieval, sound source separation, and automatic music transcription. In recent years, many researchers have reported different machine learning techniques for musical instrument recognition and shown several of them to perform well. This article provides a systematic, comprehensive review of the advanced machine learning techniques used for musical instrument recognition. We stress the audio feature descriptors and common choices of classifier used for musical instrument recognition. The review emphasizes recent developments in music classification techniques and discusses several associated open research problems.
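    As a minimal illustration (not code from the review itself), two of the low-level audio feature descriptors that surveys of this kind commonly cover, spectral centroid and zero-crossing rate, can be computed on a synthetic test tone in a few lines of NumPy; the sampling rate and tone are arbitrary stand-ins:

```python
import numpy as np

# Hypothetical test signal: a one-second 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)

def spectral_centroid(frame, sr):
    # Magnitude-weighted mean frequency of the frame's spectrum.
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return float((freqs * spec).sum() / spec.sum())

def zero_crossing_rate(frame):
    # Fraction of adjacent sample pairs whose sign differs.
    return float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))

centroid = spectral_centroid(x[:1024], sr)
zcr = zero_crossing_rate(x[:1024])
print(f"spectral centroid ~ {centroid:.1f} Hz, ZCR = {zcr:.3f}")
```

    For a pure tone the centroid sits near the tone's frequency and the zero-crossing rate near twice the frequency divided by the sampling rate; real instrument recordings yield richer descriptor profiles that classifiers can exploit.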

    A computational approach for identifying Assamese folk music instruments

    The classification of musical instruments using computational techniques is a very challenging task. Developments in signal-processing and data-mining techniques have made it feasible to analyse the many characteristics of musical signals that are essential for resolving classification problems in music. In this work, 12 popular Assamese folk music instruments were selected for identification. Twelve musicians played the instruments, audio samples were recorded, different instantaneous features were extracted, and an effort was made to identify the instruments using three popular classification techniques: Decision Tree Classifier (DTC), Support Vector Machine (SVM), and Linear Discriminant Analysis (LDA). A performance-based comparison was made among the three classifiers. The proposed feature sets enabled the DTC, SVM, and LDA models to achieve average accuracies of 86.9%, 90%, and 92.2%, respectively. The study thus offers a valid comparison of the three fitted models at identifying instrumental sounds.
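    The classifier comparison described above can be sketched with scikit-learn; the features below are synthetic stand-ins (the actual Assamese instrument recordings and extracted descriptors are not available here), so the resulting accuracies only illustrate the workflow, not the paper's numbers:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical data: 12 instrument classes, 40 samples each,
# 20-dimensional feature vectors standing in for instantaneous descriptors.
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(40, 20)) for c in range(12)])
y = np.repeat(np.arange(12), 40)

for name, clf in [("DTC", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf")),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")
```

    Cross-validated accuracy gives the kind of performance-based comparison the abstract refers to, with each classifier evaluated on identical folds.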

    Optimal feature selection and machine learning for high-level audio classification: a random forests approach

    Content-related information, metadata, and semantics can be extracted from the soundtracks of multimedia files. Speech recognition, music information retrieval, and environmental sound detection have matured to the point where a final text-mining step can obtain semantics for an audio scene. An efficient speech, music, and environmental sound classification system, which correctly identifies these three types of audio signal and feeds them into dedicated recognisers, is a critical pre-processing stage for such a content analysis system, and its performance and computational efficiency depend predominantly on the selected features. This thesis presents a detailed study to identify suitable classification features and associate a suitable machine learning technique with the intended classification task. In particular, a systematic feature selection procedure is developed that employs the random forests classifier to rank the features by importance and reduce the dimensionality of the feature space accordingly. This technique avoids the trial-and-error approach used by many researchers. The implemented feature selection produces results tied to the individual classification task, unlike the commonly used statistical-distance criteria, which do not consider the intended task; this makes it more suitable for supervised learning with a specific purpose. A final collective decision-making stage combines the outputs of multiple class detectors to produce a single classification result for each input frame. The performance of the proposed feature selection technique has been compared with the reduced feature space extracted using the techniques of the MPEG-7 standard. The results show a significant improvement in classification accuracy while the feature space is simplified and the computational overhead reduced.
    The proposed feature selection and machine learning technique enables the use of only 30 of the 47 features without degrading classification accuracy, and accuracy dropped by only 1.7% when just 10 features were used. Validation also shows good performance, and the final collective decision-making stage improved the classification result even with only a small number of selected features. The work represents a successful attempt to determine audio feature importance and to classify audio content into speech, music, and environmental sound using a selected feature subset, achieving a high degree of accuracy by utilising random forests for both feature-importance ranking and audio content classification.
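    The core mechanism, ranking features by random forest importance and keeping only the top-ranked subset, can be sketched as follows. The data are synthetic stand-ins for the thesis's 47 audio features and three classes, with only the first five features made informative by construction:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical data: 600 frames, 47 candidate features, 3 classes
# (speech / music / environmental sound); most features carry no signal.
X = rng.normal(size=(600, 47))
y = rng.integers(0, 3, size=600)
X[:, :5] += y[:, None]  # make the first five features class-dependent

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]  # most important first
top10 = ranking[:10]
print("top-10 feature indices:", sorted(int(i) for i in top10))
```

    Because the importance scores come from the same supervised model used for classification, the selected subset is tied to the intended task rather than to a task-agnostic statistical distance, which is the distinction the abstract draws.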

    Acoustic Based Sketch Recognition

    Sketch recognition is an active research field whose goal is to automatically recognize hand-drawn diagrams by computer. The technology enables people to interact freely with digital devices such as tablet PCs, Wacom tablets, and multi-touch screens. These devices are easy to use and have become very popular on the market; however, they are still quite costly and take time to integrate into existing systems. For example, handwriting recognition systems, while gaining in accuracy and capability, still rely on users sketching on tablet PCs. As computers get smaller and smart-phones become more common, our vision is to allow people to sketch with ordinary pencil and paper and to use a simple microphone, such as the one in their smart-phone, to interpret their writing. Since the only device needed is a single simple microphone, the scope of our work is not limited to common mobile devices; it can also be integrated into many other small devices, such as a ring. In this thesis, we thoroughly investigate this new area, which we call acoustic based sketch recognition, and evaluate its possibilities as a new interaction technique. We focus specifically on building a recognition engine for acoustic sketch recognition. We first propose a dynamic time warping algorithm for recognizing isolated sketch sounds using MFCCs (Mel-Frequency Cepstral Coefficients). After analyzing its performance limitations, we propose improved dynamic time warping algorithms that work on a hybrid basis, using both MFCCs and four global features: skewness, kurtosis, curviness, and peak location. The proposed approaches provide robustness at a decreased computational cost. Finally, we evaluate our algorithms on acoustic data collected by participants using a device's built-in microphone.
    Using our improved algorithm, we achieved 90% accuracy on a 10-digit gesture set, 87% on the 26 English characters, and over 95% on a set of seven commonly used gestures.
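    The dynamic time warping at the heart of this recognizer is a standard algorithm; a minimal NumPy sketch (not the thesis's optimized hybrid variant) that aligns two variable-length feature sequences, such as per-frame MFCC vectors, looks like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two feature sequences
    (rows = frames, e.g. MFCC vectors), with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Toy 1-D "feature" sequences standing in for MFCC frames.
x = np.array([[0.0], [1.0], [2.0], [1.0]])
print(dtw_distance(x, x))        # identical sequences align at zero cost
print(dtw_distance(x, x[::-1]))  # a reversed sequence costs more
```

    Classification then amounts to assigning a query sound the label of the template with the smallest DTW distance; the hybrid variants in the thesis add global shape features to prune and refine that nearest-template search.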

    A General Framework for Visualization of Sound Collections in Musical Interfaces

    While audio data play an increasingly central role in computer-based music production, interaction with large sound collections in most available music creation and production environments is still very often limited to scrolling long lists of file names. This paper describes a general framework for devising interactive applications based on content-based visualization of sound collections. The proposed framework allows a modular combination of different techniques for sound segmentation, analysis, and dimensionality reduction, using the reduced feature space for interactive applications. We analyze several prototypes presented in the literature and describe their limitations. We propose a more general framework that can be used flexibly to devise music creation interfaces. The proposed approach includes several novel contributions with respect to previously used pipelines, such as unsupervised feature learning, content-based sound icons, and control of the output-space layout. We present an implementation of the framework in the SuperCollider computer music language, along with three example prototypes demonstrating its use for data-driven music interfaces. Our results demonstrate the potential of unsupervised machine learning and visualization for creative applications in computer music.
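    The dimensionality-reduction stage of such a pipeline can be sketched simply; this is an illustrative reduction with PCA on made-up per-segment feature vectors (the paper's framework supports other techniques, and its implementation is in SuperCollider, not Python):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Hypothetical per-segment feature vectors for 200 sounds in a collection
# (e.g. averaged spectral descriptors or learned embeddings).
feats = rng.normal(size=(200, 32))

# Reduce to 2-D coordinates for an interactive map of the collection.
coords = PCA(n_components=2).fit_transform(feats)

# Normalise into [0, 1] so the layout maps directly onto screen space.
coords = (coords - coords.min(0)) / (coords.max(0) - coords.min(0))
print(coords.shape)
```

    Each sound then gets a screen position derived from its content, so perceptually similar sounds cluster together instead of being scattered through an alphabetical file list.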

    Computer-Based Data Processing and Management for Blackfoot Phonetics and Phonology

    More than half of the world's 6000 languages have never been adequately described. We propose to create a database system to automatically capture and manage sound clips of interest in Blackfoot (an endangered language spoken in Alberta, Canada, and Montana) for phonetic and phonological analysis. Taking Blackfoot speech as input, the system generates a list of audio clips containing a given sequence of sounds or certain accent patterns, based on research interests. Existing computational linguistic techniques, such as information processing and artificial intelligence, are extended to tackle issues specific to Blackfoot linguistics, and database techniques are adopted to support better data management and linguistic queries. This project is innovative because the application of technology to Native American phonetics and phonology is underdeveloped. It provides a digital framework to document and analyze endangered languages and can also benefit research on other languages.

    Classification of audio signals with emphasis on the segmentation of singing within music signals based on harmonic analysis

    Master's dissertation, Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Electrical Engineering. The research area known as audio signal classification seeks to automatically identify audio classes (speech, music, noise, singing, among others). The first objective of this work is to present the state of the art in this research area and to discuss its standard block-diagram structure, with special attention to the feature extraction stage. The work then makes a scientific contribution of its own, concentrating on the specific problem of segmenting singing within music signals. The proposed approach is based on the difference between the harmonic content of singing and that of musical instruments, observed through visual analysis of the spectrogram. The results obtained are compared with those of another technique proposed in the literature, using the same database. Even under a stricter performance measure, the accuracy obtained lies in the same range as the reference technique, around 80%. As an advantage, the approach proposed here has lower computational complexity. Additionally, it makes it possible to discriminate the different types of error involved in the segmentation process, suggesting alternatives to reduce them when possible. Finally, building on the proposed algorithm, a first experiment is carried out with the goal of separating the singing from the musical instruments within a music signal. The subjective results obtained indicate that the proposed separation process performs satisfactorily.
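    One harmonic cue that distinguishes singing from steady instrument tones is frequency modulation (vibrato), which shows up as frame-to-frame movement of the harmonic peaks in a spectrogram. The toy sketch below (an illustration, not the dissertation's algorithm) synthesizes a steady tone and a vibrato tone and compares the variability of their frame-wise spectral peaks:

```python
import numpy as np

sr = 8000
t = np.arange(2 * sr) / sr  # two seconds of signal

# Synthetic stand-ins: a steady 220 Hz "instrument" tone, and a "sung"
# tone with roughly +/-30 Hz of vibrato at 5 Hz.
steady = np.sin(2 * np.pi * 220 * t)
vibrato = np.sin(2 * np.pi * 220 * t + 6.0 * np.sin(2 * np.pi * 5 * t))

def framewise_peak_freq(x, sr, frame=1024, hop=256):
    # Frequency of the strongest spectral bin in each analysis frame.
    w = np.hanning(frame)
    peaks = []
    for i in range(0, len(x) - frame, hop):
        spec = np.abs(np.fft.rfft(x[i:i + frame] * w))
        peaks.append(np.argmax(spec) * sr / frame)
    return np.array(peaks)

std_steady = framewise_peak_freq(steady, sr).std()
std_vibrato = framewise_peak_freq(vibrato, sr).std()
print(f"steady tone peak-frequency std : {std_steady:.2f} Hz")
print(f"vibrato tone peak-frequency std: {std_vibrato:.2f} Hz")
```

    The steady tone's peak stays in one bin while the vibrato tone's peak wanders, so a simple variability threshold on harmonic trajectories already separates the two classes in this idealized setting.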

    Structure Learning in Audio
