
    Real-time Audio Classification based on Mixture Models

    This is the poster related to the conference paper "A mixture model-based real-time audio sources classification method", Baelde et al., ICASSP 2017 (hal-01420677v2).

    Real-time Audio Classification Using a Histogram Mixture Model (Classification de signaux audio en temps-réel par un modèle de mélanges d'histogrammes)

    Audio recognition consists in assigning a label to an unknown audio signal. It relies on audio descriptors and machine learning algorithms. However, in a real-time context with heterogeneous sounds, current models lack the performance needed to classify sounds reliably. This article presents a novel method based on a mixture of histograms modeling audio spectra. Recognition consists in computing the probability of each group and aggregating these probabilities over time. A model-reduction step also allows the algorithm to run in real time. This method outperforms current state-of-the-art algorithms, achieving an accuracy of 96.7% on a database of 50 classes using only 0.5 s of audio data.
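The per-group probability computation and temporal aggregation described above can be illustrated with a minimal sketch. This is not the authors' implementation: the multinomial-style histogram scoring, the summed-log-likelihood aggregation, and all names are assumptions.

```python
import numpy as np

def classify_stream(frames, class_histograms, eps=1e-12):
    """Hypothetical sketch: score spectral frames against per-class
    histogram models, then aggregate over the decision window (MAP rule).

    frames           : (T, B) array of normalized magnitude spectra
    class_histograms : (K, B) array, each row a normalized histogram
    """
    # Per-frame log-likelihood of each class under a multinomial-style
    # histogram model: sum_b frame[b] * log(hist[k, b])
    log_hist = np.log(class_histograms + eps)   # (K, B)
    frame_scores = frames @ log_hist.T          # (T, K)
    # Temporal aggregation: sum log-likelihoods over all frames
    total = frame_scores.sum(axis=0)            # (K,)
    return int(np.argmax(total))
```

The argmax over temporally summed log-likelihoods is one standard way to turn per-frame scores into a single decision over a 0.5 s window.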

    Real-Time Monophonic and Polyphonic Audio Classification from Power Spectra

    This work addresses the recurring challenge of real-time monophonic and polyphonic audio source classification. The whole normalized power spectrum (NPS) is used directly in the proposed process, avoiding complex and error-prone traditional feature extraction. The NPS is also a natural candidate for polyphonic events thanks to its additivity in such cases. The classification task is performed through a nonparametric, kernel-based generative model of the power spectrum. The advantage of this model is twofold: it is almost hypothesis-free, and it yields the maximum a posteriori classification rule for online signals in a straightforward way. Moreover, it uses the monophonic dataset to build the polyphonic one. Then, to reach the real-time target, the complexity of the method can be tuned with a standard hierarchical-clustering preprocessing of the prototypes, giving a particularly efficient trade-off between computation time and classification accuracy. The proposed method, called RARE (Real-time Audio Recognition Engine), shows encouraging results in both monophonic and polyphonic classification tasks on benchmark and in-house datasets, including the targeted real-time situation. In particular, the method offers several advantages over state-of-the-art approaches: reduced training time, no feature extraction, control over the computation-accuracy trade-off, and no training on already-mixed sounds for polyphonic classification.
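The two key ideas here, a kernel-based class score over spectra and polyphonic templates built from monophonic ones via spectral additivity, could look roughly like the following sketch. The Gaussian kernel, the bandwidth, and all function names are assumptions, not the RARE implementation.

```python
import numpy as np

def kernel_score(x, prototypes, h=0.1):
    """Nonparametric kernel-based likelihood of spectrum x under a class
    represented by its prototype spectra (hypothetical Gaussian kernel)."""
    d2 = ((prototypes - x) ** 2).sum(axis=1)
    return float(np.mean(np.exp(-d2 / (2 * h ** 2))))

def polyphonic_prototypes(mono_a, mono_b):
    """Exploit additivity of power spectra: build two-source prototypes
    from monophonic ones, then renormalize, so no training on already
    mixed sounds is needed."""
    mixed = mono_a[:, None, :] + mono_b[None, :, :]   # all pairwise sums
    mixed = mixed.reshape(-1, mono_a.shape[1])
    return mixed / mixed.sum(axis=1, keepdims=True)
```

A two-source class would then be scored with `kernel_score` against its synthesized prototypes, exactly as a monophonic class is scored against recorded ones.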

    A mixture model-based real-time audio sources classification method

    Recent research in machine learning focuses on audio source identification in complex environments. These approaches rely on extracting features from audio signals and use machine learning techniques to model the sound classes. However, such techniques are often not optimized for real-time implementation or for multi-source conditions. We propose a new real-time single-source audio classification method based on a dictionary of sound models (which can be extended to a multi-source setting). The sound spectra are modeled with mixture models and form a dictionary. Classification is based on a comparison with all elements of the dictionary by computing likelihoods, and the best match is returned as the result. We found that this technique outperforms classic methods within a temporal horizon of 0.5 s per decision (achieving a 6% error rate on a database of 50 classes). Future work will focus on multi-source classification and on reducing the computational load.
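Reducing the computational load of such a dictionary-matching scheme typically means shrinking the dictionary itself. The sketch below is a hedged stand-in for that reduction step: a simple greedy agglomerative merge of the closest prototype pair, not the hierarchical clustering actually used in the papers above.

```python
import numpy as np

def reduce_dictionary(prototypes, n_clusters):
    """Shrink a dictionary of spectral prototypes by repeatedly merging
    the closest pair of centroids (size-weighted mean), keeping
    n_clusters representatives. Hypothetical sketch, not the authors'
    hierarchical-clustering preprocessing."""
    clusters = [p.astype(float) for p in prototypes]
    sizes = [1] * len(clusters)
    while len(clusters) > n_clusters:
        # find the closest pair of current centroids
        best, bi, bj = np.inf, 0, 1
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = np.sum((clusters[i] - clusters[j]) ** 2)
                if d < best:
                    best, bi, bj = d, i, j
        # merge j into i with a size-weighted mean
        total = sizes[bi] + sizes[bj]
        clusters[bi] = (sizes[bi] * clusters[bi]
                        + sizes[bj] * clusters[bj]) / total
        sizes[bi] = total
        del clusters[bj], sizes[bj]
    return np.stack(clusters)
```

Likelihoods are then computed against the reduced set only, trading a little accuracy for a dictionary-size-proportional speedup at decision time.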

    Multi-Sensory Interaction for Blind and Visually Impaired People

    This book conveys the visual elements of artwork to the visually impaired through various sensory channels, opening a new perspective for appreciating visual art. In addition, a technique for expressing a color code by integrating patterns, temperatures, scents, music, and vibrations is explored, and future research topics are presented. A holistic experience using multi-sensory interaction conveys the meaning and content of a work to people with visual impairment through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them appreciate artwork at a deeper level than hearing or touch alone can achieve. The development of such art-appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. These new aids also expand opportunities for the non-visually-impaired, as well as the visually impaired, to enjoy works of art and, through continuous efforts to enhance accessibility, break down the boundaries between the disabled and the non-disabled in the field of culture and the arts. In addition, the developed multi-sensory expression and delivery tools can be used as educational tools to increase product and artwork accessibility and usability through multi-modal interaction. Training with the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind's eye.
