22 research outputs found

    Histogram Equalization-Based Features for Speech, Music, and Song Discrimination

    In this letter, we present a new class of segment-based features for speech, music, and song discrimination. These features, called PHEQ (Polynomial-Fit Histogram Equalization), are derived from the nonlinear relationship between the short-term feature distributions computed at segment level and a reference distribution. Results show that PHEQ features outperform short-term features such as Mel Frequency Cepstrum Coefficients (MFCC) and conventional segment-based ones such as MFCC mean and variance. Furthermore, the combination of short-term and PHEQ features significantly improves the performance of the whole system.
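
The PHEQ idea can be sketched as follows: for one feature dimension within a segment, map the sorted feature values to the quantiles of a reference Gaussian and keep the coefficients of a polynomial fit to that mapping as segment-level features. This is a minimal illustration of the general technique only; the function name, polynomial degree, and choice of reference distribution are assumptions, not details from the letter.

```python
import numpy as np
from statistics import NormalDist

def pheq_features(values, degree=2):
    """Illustrative PHEQ sketch: fit a polynomial mapping one dimension of
    short-term feature values within a segment to the quantiles of a
    reference N(0, 1) distribution; the coefficients act as segment features."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    # mid-rank empirical CDF positions, kept strictly inside (0, 1)
    probs = (np.arange(1, n + 1) - 0.5) / n
    ref = np.array([NormalDist().inv_cdf(p) for p in probs])
    return np.polyfit(x, ref, degree)  # highest-order coefficient first
```

For data already distributed like the reference, the fitted mapping is close to the identity, so the coefficients directly reflect how far a segment's feature distribution deviates from the reference.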

    Acoustic event detection: SVM-based system and evaluation setup in CLEAR’07

    In this paper, the Acoustic Event Detection (AED) system developed at the UPC is described, and its results in the CLEAR evaluations carried out in March 2007 are reported. The system uses a set of features composed of frequency-filtered band energies and perceptual features, and it is based on SVM classifiers and multi-microphone decision fusion. The current evaluation setup and, in particular, the two new metrics used in this evaluation are also presented.
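
Frequency-filtered band energies of the kind mentioned above can be approximated as log band energies filtered along the frequency axis with a short difference filter. The band layout, filter shape, and function name below are assumptions chosen for illustration, not the exact UPC front end.

```python
import numpy as np

def freq_filtered_energies(frame, n_bands=16):
    """Sketch of frequency-filtered (FF) band energies: log band energies of
    one frame, convolved across bands with a {1, 0, -1} difference filter
    (edge bands are handled by replication). Layout is illustrative only."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    band_e = np.array([spec[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])
    log_e = np.log(band_e + 1e-10)
    padded = np.concatenate(([log_e[0]], log_e, [log_e[-1]]))
    return padded[2:] - padded[:-2]  # difference across neighbouring bands
```

Per-frame vectors like this would then be fed, typically with temporal derivatives appended, to the SVM classifiers.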

    Fuzzy integral based information fusion for classification of highly confusable non-speech sounds

    Acoustic event classification may help to describe acoustic scenes and contribute to improving the robustness of speech technologies. In this work, fusion of different information sources with the fuzzy integral (FI), and the associated fuzzy measure (FM), is applied to the problem of classifying a small set of highly confusable human non-speech sounds. As the FI is a meaningful formalism for combining classifier outputs that can capture interactions among the various sources of information, it shows significantly better performance in our experiments than any single classifier entering the FI fusion module. In fact, the FI decision-level fusion approach shows results comparable to the high-performing SVM feature-level fusion, so it seems to be a good choice when feature-level fusion is not an option. We have also observed that the importance and the degree of interaction among the various feature types given by the FM can be used for feature selection, and give valuable insight into the problem.
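
The fuzzy integral fusion step can be sketched with the Choquet integral: the per-classifier confidence scores for one class are sorted, and successive score increments are weighted by a fuzzy measure defined on subsets of classifiers. The measure below is hypothetical and hand-set; in practice the FM would be learned from data, which is where the interaction information mentioned above comes from.

```python
import numpy as np

def choquet_integral(scores, fuzzy_measure):
    """Fuse per-classifier confidence scores for one class with the Choquet
    fuzzy integral. `fuzzy_measure` maps a frozenset of classifier indices
    to its importance in [0, 1]; here it is a hypothetical, hand-set table."""
    order = np.argsort(scores)            # classifier indices, ascending score
    h = np.asarray(scores, dtype=float)[order]
    total, prev = 0.0, 0.0
    for i in range(len(order)):
        subset = frozenset(order[i:])     # classifiers scoring >= h[i]
        total += (h[i] - prev) * fuzzy_measure[subset]
        prev = h[i]
    return total
```

With an additive measure the Choquet integral reduces to a weighted average; non-additive measures are what let it model redundancy or synergy between classifiers.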

    Classification of acoustic events using SVM-based clustering schemes

    Acoustic events produced in controlled environments may carry information useful for perceptually aware interfaces. In this paper we focus on the problem of classifying 16 types of meeting-room acoustic events. First of all, we have defined the events and gathered a sound database. Then, several classifiers based on support vector machines (SVM) are developed using confusion-matrix-based clustering schemes to deal with the multi-class problem. Also, several sets of acoustic features are defined and used in the classification tests. In the experiments, the developed SVM-based classifiers are compared with a previously reported binary tree scheme and with their corresponding Gaussian mixture model (GMM) classifiers. The best results are obtained with a tree SVM-based classifier that may use a different feature set at each node. With it, a 31.5% relative average error reduction is obtained with respect to the best result from a conventional binary tree scheme.
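
A confusion-matrix-based clustering scheme of the kind described can start by ranking class pairs by their mutual confusion; heavily confused pairs are candidates to share a subtree with its own SVM and, possibly, its own feature set. The helper below is an illustrative sketch of that first step, not the paper's algorithm.

```python
import numpy as np

def most_confused_pairs(conf, k=3):
    """Rank class pairs by symmetric confusion mass from a confusion matrix
    (rows: true class, columns: predicted class). Such pairs are candidates
    for a dedicated node in a tree of SVM classifiers. Sketch only."""
    sym = conf + conf.T                   # symmetrize the off-diagonal mass
    np.fill_diagonal(sym, 0)              # ignore correct classifications
    iu = np.triu_indices(len(conf), k=1)  # each unordered pair once
    order = np.argsort(sym[iu])[::-1][:k]
    return [(int(iu[0][o]), int(iu[1][o]), int(sym[iu][o])) for o in order]
```

Applying this repeatedly (merging the top pair, re-evaluating) yields an agglomerative grouping that can define the tree structure.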

    Computer-Based Data Processing and Management for Blackfoot Phonetics and Phonology

    More than half of the world's 6000 languages have never been adequately described. We propose to create a database system to automatically capture and manage sound clips of interest in Blackfoot (an endangered language spoken in Alberta, Canada, and Montana) for phonetic and phonological analysis. Taking Blackfoot speech as input, the system generates a list of audio clips containing a given sequence of sounds or certain accent patterns, based on research interests. Existing computational techniques from information processing and artificial intelligence are extended to tackle issues specific to Blackfoot linguistics, and database techniques are adopted to support better data management and linguistic queries. This project is innovative because the application of technology to Native American phonetics and phonology is underdeveloped. It provides a digital framework for documenting and analyzing endangered languages and can also benefit research on other languages.
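
The kind of linguistic query the proposed system supports, retrieving clips whose transcription contains a given sound sequence, might look like this toy sketch; the record schema and function names are hypothetical, not the project's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """Hypothetical annotated audio clip record (illustrative schema only)."""
    path: str
    phones: list = field(default_factory=list)  # transcription symbols

def find_clips(clips, pattern):
    """Return clips whose phone transcription contains `pattern` as a
    contiguous subsequence; a toy stand-in for linguistic query support."""
    n = len(pattern)
    return [c for c in clips
            if any(c.phones[i:i + n] == pattern
                   for i in range(len(c.phones) - n + 1))]
```

A real system would back this with database indexes over transcriptions and timestamps rather than a linear scan.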

    Sound Representation and Classification Benchmark for Domestic Robots

    We address the problem of sound representation and classification and present the results of a comparative study in the context of a domestic robotic scenario. A dataset of sounds was recorded in realistic conditions (background noise, presence of several sound sources, reverberation, etc.) using the humanoid robot NAO. An extended benchmark is carried out to test a variety of representations combined with several classifiers. We provide results obtained with the annotated dataset, and we assess the methods quantitatively on the basis of their classification scores, computation times, and memory requirements. The annotated dataset is publicly available at https://team.inria.fr/perception/nard/
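
A benchmark of this sort, scoring several classifiers on the same annotated test set while recording computation time, can be sketched as below; the names are illustrative, and the actual benchmark also tracks memory requirements, which this toy loop omits.

```python
import time

def benchmark(classifiers, features, labels):
    """Toy benchmark loop: report per-classifier accuracy and prediction
    time on a fixed annotated test set. `classifiers` maps a name to a
    callable taking one feature vector and returning a predicted label."""
    results = {}
    for name, predict in classifiers.items():
        t0 = time.perf_counter()
        preds = [predict(x) for x in features]
        elapsed = time.perf_counter() - t0
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        results[name] = {"accuracy": acc, "seconds": elapsed}
    return results
```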

    Investigation into the Perceptually Informed Data for Environmental Sound Recognition

    Environmental sound is a rich source of information that can be used to infer context. With the rise of ubiquitous computing, the demand for environmental sound recognition is growing rapidly. This research aims to recognize environmental sounds using perceptually informed data. The initial study concentrates on understanding the current state-of-the-art techniques in environmental sound recognition; these approaches are then evaluated through a critical review of the literature. The study extracts three sets of features: Mel Frequency Cepstral Coefficients, Mel-spectrograms, and sound texture statistics. Two kinds of machine learning algorithms are paired with appropriate sound features, and the resulting models are compared with a low-level baseline model. The study also compares each model's performance with that of human listeners. The sound texture statistics model performs best, achieving 45.1% classification accuracy with a support vector machine using a radial basis function kernel. A Mel-spectrogram model based on a convolutional neural network also performed satisfactorily, exceeding the benchmark results.
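
The Mel-spectrogram front end compared in the study can be sketched with a standard triangular mel filterbank applied to frame power spectra; the parameters below are generic defaults, not those used in the thesis.

```python
import numpy as np

def mel_filterbank(sr=16000, n_fft=512, n_mels=20):
    """Triangular mel filterbank using the common 2595*log10(1 + f/700)
    mel scale; a generic sketch of a Mel-spectrogram front end."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for j in range(lo, c):           # rising slope of triangle i
            fb[i - 1, j] = (j - lo) / max(c - lo, 1)
        for j in range(c, hi):           # falling slope of triangle i
            fb[i - 1, j] = (hi - j) / max(hi - c, 1)
    return fb

def mel_spectrogram(frames, fb):
    """Log mel energies for an array of frames shaped (n_frames, n_fft)."""
    power = np.abs(np.fft.rfft(frames, axis=-1)) ** 2
    return np.log(power @ fb.T + 1e-10)
```

Taking a DCT of each row of the log mel energies would yield MFCCs, the first of the three feature sets compared.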