2 research outputs found

    Random forest classification based acoustic event detection utilizing contextual-information and bottleneck features

    The wide variety of event categories and event boundaries has limited the success of acoustic event detection systems. To deal with this, we propose to exploit long contextual information together with low-dimensional, discriminant global and category-specific bottleneck features. By concatenating several adjacent frames, the contextual information makes it easier to cope with acoustic signals of long duration. The global and category-specific bottleneck features encode prior knowledge of the event category and boundary, which is well matched to the event detection task. Evaluations on the UPC-TALP and ITC-IRST databases of highly variable acoustic events demonstrate the effectiveness of the proposed approaches, which achieve absolute error rate improvements of 5.30% and 4.44%, respectively, over the state-of-the-art technique.
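    A minimal sketch of the kind of pipeline summarized above, assuming frame-level bottleneck features are already available: adjacent frames are concatenated to add contextual information, and a random forest is trained on the stacked vectors. The feature dimension, context width, class labels, and toy data are placeholders, not the paper's actual configuration.

```python
# Illustrative sketch only: context stacking plus a random forest classifier,
# loosely following the pipeline described in the abstract above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stack_context(frames: np.ndarray, context: int = 5) -> np.ndarray:
    """Concatenate each frame with its +/- `context` neighbours.

    frames: (n_frames, n_features) matrix of per-frame features
            (e.g. bottleneck features from a pretrained network).
    Returns: (n_frames, n_features * (2*context + 1)) matrix.
    """
    n, _ = frames.shape
    padded = np.pad(frames, ((context, context), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + n] for i in range(2 * context + 1)])

# Toy data standing in for bottleneck features of two event classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))          # 200 frames, 40-dim features
y = np.repeat([0, 1], 100)              # two acoustic event categories

X_ctx = stack_context(X, context=5)     # long contextual representation
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_ctx, y)
print(clf.predict(X_ctx[:3]))
```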

    Real-Time Monophonic and Polyphonic Audio Classification from Power Spectra

    This work addresses the recurring challenge of real-time monophonic and polyphonic audio source classification. The whole normalized power spectrum (NPS) is used directly in the proposed process, avoiding complex and error-prone hand-crafted feature extraction. The NPS is also a natural candidate for polyphonic events thanks to its additive property in such cases. The classification task is performed through nonparametric kernel-based generative modeling of the power spectrum. The advantage of this model is twofold: it is almost hypothesis-free, and it yields a straightforward maximum a posteriori classification rule for online signals. Moreover, it uses the monophonic dataset to build the polyphonic one. To reach the real-time target, the complexity of the method can be tuned through a standard hierarchical clustering preprocessing of the prototypes, giving a particularly efficient trade-off between computation time and classification accuracy. The proposed method, called RARE (Real-time Audio Recognition Engine), shows encouraging results in both monophonic and polyphonic classification tasks on benchmark and in-house datasets, including the targeted real-time setting. In particular, it offers several advantages over state-of-the-art methods: reduced training time, no feature extraction, the ability to control the computation/accuracy trade-off, and no training on already-mixed sounds for polyphonic classification.
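    A rough, hypothetical sketch of the modeling idea behind RARE as summarized above, not the authors' implementation: a Gaussian-kernel generative model over normalized power spectra with a maximum a posteriori decision rule, and hierarchical clustering of the prototypes as the computation/accuracy knob. The kernel bandwidth, spectrum length, class priors, and toy data are all invented for illustration.

```python
# Minimal sketch under stated assumptions; not the RARE implementation.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def normalize_power_spectrum(ps: np.ndarray) -> np.ndarray:
    """Normalize a power spectrum so it sums to one (NPS)."""
    return ps / ps.sum()

def kernel_log_likelihood(x, prototypes, bandwidth=0.05):
    """Log of a Gaussian-kernel density estimate built from class prototypes."""
    sq = ((prototypes - x) ** 2).sum(axis=1)
    log_k = -sq / (2.0 * bandwidth ** 2)
    return np.logaddexp.reduce(log_k) - np.log(len(prototypes))

def reduce_prototypes(prototypes, n_clusters=10):
    """Hierarchical clustering preprocessing: keep one centroid per cluster."""
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(prototypes)
    return np.vstack([prototypes[labels == k].mean(axis=0)
                      for k in range(n_clusters)])

def map_classify(x, class_prototypes, priors):
    """Maximum a posteriori rule over the per-class kernel models."""
    scores = {c: np.log(priors[c]) + kernel_log_likelihood(x, p)
              for c, p in class_prototypes.items()}
    return max(scores, key=scores.get)

# Toy monophonic training spectra for two sources (placeholders).
# Per the additive property noted in the abstract, polyphonic prototypes could
# in principle be built by summing and renormalizing monophonic NPS prototypes.
rng = np.random.default_rng(1)
protos = {c: np.array([normalize_power_spectrum(rng.random(64) + c)
                       for _ in range(50)]) for c in (0, 1)}
protos = {c: reduce_prototypes(p, n_clusters=8) for c, p in protos.items()}
x = normalize_power_spectrum(rng.random(64))
print(map_classify(x, protos, priors={0: 0.5, 1: 0.5}))
```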