
    Novel Techniques for Classification of Musical Instruments

    Musical instrument classification provides a framework for developing and evaluating features for any type of content-based analysis of musical signals. In our approach, the signal is subjected to wavelet decomposition with a suitably chosen wavelet; for decomposition we use the Wavelet Packet Transform (WPT). After decomposition, the resulting sub-band signals can be analyzed, with particular bands representing particular characteristics of the musical signal. A feature set is formed from these wavelet coefficients, and the instrument is then classified with a suitable machine learning algorithm. This paper addresses the problem of classifying musical instruments: we propose a new classification method in which the wavelet representation captures both local and global information by computing wavelet coefficients at different frequency sub-bands with different resolutions. Using the WPT together with advanced machine learning techniques, the accuracy of musical instrument classification is significantly improved.
    Keywords: Musical instrument classification, WPT, Feature Extraction Techniques, Machine learning techniques
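    As a concrete illustration of the sub-band pipeline sketched above, the following is a minimal sketch using PyWavelets: each signal is decomposed with the WPT, the log-energy of every terminal sub-band becomes one feature, and a classifier is trained on the resulting vectors. The wavelet (db4), decomposition level, and SVM classifier are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: WPT sub-band log-energy features + SVM classifier.
# Wavelet, level, and classifier are assumptions for illustration.
import numpy as np
import pywt
from sklearn.svm import SVC

def wpt_subband_energies(signal, wavelet="db4", level=3):
    """Decompose a 1-D signal with the Wavelet Packet Transform and
    return the log-energy of each terminal sub-band as a feature."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")   # 2**level sub-bands
    return np.array([np.log(np.sum(n.data ** 2) + 1e-12) for n in nodes])

# Hypothetical usage: `signals` is a list of mono excerpts, `labels`
# the instrument class of each excerpt.
# X = np.vstack([wpt_subband_energies(s) for s in signals])
# clf = SVC(kernel="rbf").fit(X, labels)
```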

    Convolutional Methods for Music Analysis


    A Survey of Evaluation in Music Genre Recognition


    A survey on artificial intelligence-based acoustic source identification

    The concept of Acoustic Source Identification (ASI), which refers to the process of identifying noise sources, has attracted increasing attention in recent years. ASI technology can be used for surveillance, monitoring, and maintenance applications in a wide range of sectors, such as defence, manufacturing, healthcare, and agriculture. Acoustic signature analysis and pattern recognition remain the core technologies for noise source identification. Manual identification of acoustic signatures, however, has become increasingly challenging as dataset sizes grow. As a result, the use of Artificial Intelligence (AI) techniques for identifying noise sources has become increasingly relevant and useful. In this paper, we provide a comprehensive review of AI-based acoustic source identification techniques. We analyze the strengths and weaknesses of AI-based ASI processes and the associated methods proposed in the literature. Additionally, we present a detailed survey of ASI applications in machinery, underwater settings, environment/event source recognition, healthcare, and other fields. We also highlight relevant research directions.

    Pre-trained Deep Neural Network using Sparse Autoencoders and Scattering Wavelet Transform for Musical Genre Recognition

    The research described in this paper combines Deep Neural Networks (DNN) with novel audio features extracted using the Scattering Wavelet Transform (SWT) for classifying musical genres. The SWT uses a sequence of wavelet transforms to compute modulation spectrum coefficients of multiple orders, which has already been shown to be promising for this task. The DNN in this work has layers pre-trained using Sparse Autoencoders (SAE). Data obtained from the Creative Commons website jamendo.com is used to augment the well-known GTZAN database, a standard benchmark for this task. The final classifier is evaluated with 10-fold cross-validation and achieves results similar to other state-of-the-art approaches.
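    The outline below is a rough sketch of that pipeline: scattering coefficients (computed here with kymatio, an assumption) serve as input features, and the first DNN layer is pre-trained as a sparse autoencoder before supervised fine-tuning. Layer sizes and the L1 sparsity weight are illustrative, not the paper's exact setup.

```python
# Sketch: scattering features + sparse-autoencoder pre-training.
# kymatio, layer sizes, and the L1 penalty are assumptions.
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D

T = 2 ** 16                                    # excerpt length in samples
scattering = Scattering1D(J=8, shape=T, Q=8)   # 1st/2nd-order coefficients

class SparseAE(nn.Module):
    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.dec = nn.Linear(hidden, in_dim)
    def forward(self, x):
        h = torch.sigmoid(self.enc(x))         # sparse hidden code
        return self.dec(h), h

def pretrain(ae, X, epochs=50, l1=1e-4, lr=1e-3):
    """Reconstruct X while penalising hidden activations (sparsity)."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, h = ae(X)
        loss = nn.functional.mse_loss(recon, X) + l1 * h.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()

# Hypothetical usage: `batch` is a (N, T) tensor of audio excerpts.
# feats = scattering(batch).flatten(1)         # scattering feature vectors
# ae = SparseAE(feats.shape[1]); pretrain(ae, feats)
# ae.enc would then initialise the first layer of the genre classifier.
```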

    A hybrid feature pool-based emotional stress state detection algorithm using EEG signals

    Human stress analysis using electroencephalogram (EEG) signals requires a detailed, domain-specific pool of information to develop an effective machine learning model. In this study, a multi-domain hybrid feature pool is designed to capture most of the important information in the signal. The hybrid feature pool contains features from two types of analysis: (a) statistical parametric analysis in the time domain, and (b) wavelet-based, bandwidth-specific feature analysis in the time-frequency domain. A wrapper-based feature selector, Boruta, is then applied to rank all relevant features in the pool rather than keeping only the non-redundant ones. Finally, the k-nearest neighbor (k-NN) algorithm performs the final classification. The proposed model yields an overall accuracy of 73.38% on the full dataset considered. To validate its performance and highlight the necessity of designing a hybrid feature pool, the model was compared against non-linear dimensionality reduction techniques and against variants without feature ranking.
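    A condensed sketch of those stages appears below: a hybrid feature vector combining time-domain statistics with wavelet sub-band statistics, Boruta ranking all relevant features, and k-NN for the final decision. The wavelet, the particular statistics, and k are illustrative assumptions rather than the study's exact choices.

```python
# Sketch: hybrid feature pool -> Boruta ranking -> k-NN classifier.
# Wavelet, statistics, and k are assumptions for illustration.
import numpy as np
import pywt
from scipy import stats
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def hybrid_features(epoch, wavelet="db4", level=4):
    """One feature vector per EEG epoch, from both analysis domains."""
    time_feats = [epoch.mean(), epoch.std(),
                  stats.skew(epoch), stats.kurtosis(epoch)]
    coeffs = pywt.wavedec(epoch, wavelet, level=level)  # band-wise coefficients
    band_feats = [f(c) for c in coeffs for f in (np.mean, np.std)]
    return np.array(time_feats + band_feats)

# Hypothetical usage with `epochs` of shape (n_trials, n_samples), labels `y`:
# X = np.vstack([hybrid_features(e) for e in epochs])
# selector = BorutaPy(RandomForestClassifier(n_jobs=-1), n_estimators="auto")
# selector.fit(X, y)                          # ranks all relevant features
# knn = KNeighborsClassifier(n_neighbors=5).fit(selector.transform(X), y)
```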

    Simplified inverse filter tracked affective acoustic signals classification incorporating deep convolutional neural networks

    Facial expressions; verbal and behavioral cues, such as limb movements; and physiological features are vital channels for affective human interaction. Over the past decades, researchers have given machines the ability to recognize affective communication through these modalities. In addition to facial expressions, changes in sound level, strength, weakness, and turbulence also convey affect. Extracting affective feature parameters from acoustic signals has been widely applied in customer service, education, and medicine. In this research, an improved AlexNet-based deep convolutional neural network (A-DCNN) is presented for acoustic signal recognition. First, the signals are preprocessed with simplified inverse filter tracking (SIFT), and the short-time Fourier transform (STFT), Mel-frequency cepstral coefficients (MFCC), and waveform-based segmentation are used to create the input for the deep neural network (DNN), a preprocessing approach widely applied for most neural networks. Second, acoustic signals are taken from the public Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS); with standard acoustic preprocessing tools, the basic features of these sound signals are computed and extracted. The proposed DNN based on the improved AlexNet achieves 95.88% accuracy in classifying eight affective classes of acoustic signals. Compared with linear classifiers such as decision tables (DT) and Bayesian inference (BI), and with other deep neural networks such as AlexNet+SVM and the recurrent convolutional neural network (R-CNN), the proposed method achieves high accuracy (A), sensitivity (S1), positive predictive value (PP), and F1-score (F1). Affective recognition and classification of acoustic signals can potentially be applied in industrial product design by measuring consumers' affective responses to products: collecting relevant affective sound data helps gauge a product's popularity, improve its design, and increase market responsiveness.
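    To make the front end concrete, here is a much-reduced sketch: MFCC "images" computed with librosa feed a small AlexNet-style CNN with eight outputs, one per affective class. librosa, the layer sizes, and the fixed input shape are assumptions for illustration; this is not the authors' A-DCNN.

```python
# Sketch: MFCC front end + small AlexNet-style CNN (8 affective classes).
# librosa, layer sizes, and input shape are assumptions.
import librosa
import torch
import torch.nn as nn

def mfcc_image(path, sr=22050, n_mfcc=40, frames=128):
    """Load a clip and return a fixed-size MFCC 'image' for the CNN."""
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)   # pad/crop time axis
    return torch.tensor(m, dtype=torch.float32).unsqueeze(0)  # (1, n_mfcc, frames)

class MiniAlexNet(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 10 * 32, n_classes)  # for 40x128 input
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Hypothetical usage:
# logits = MiniAlexNet()(mfcc_image("clip.wav").unsqueeze(0))  # (1, 8)
```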