
    A target guided subband filter for acoustic event detection in noisy environments using wavelet packets

    This paper deals with acoustic event detection (AED) for events such as screams, gunshots, and explosions in noisy environments. The main aim is to improve detection performance under adverse conditions with a very low signal-to-noise ratio (SNR). A novel filtering method combined with an energy detector is presented. The wavelet packet transform (WPT) is first used to obtain a time-frequency representation of the acoustic signals. The proposed filter in the wavelet packet domain then uses a priori knowledge of the target event and an estimate of the noise features to selectively suppress the background noise. It is, in effect, a content-aware band-pass filter that automatically passes the frequency bands that are more significant in the target than in the noise. Theoretical analysis shows that the proposed filtering method is capable of enhancing the target content while suppressing the background noise for signals with a low SNR. A condition for increasing the probability of correct detection is also obtained. Experiments have been carried out on a large dataset of acoustic events contaminated by different types of environmental noise and by white noise at varying SNRs. Results show that the proposed method is more robust and better adapted to noise than ordinary energy detectors, and that it can work even with an SNR as low as -15 dB. A practical system for real-time processing and multi-target detection is also proposed.
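    As a rough illustration of the subband-selection idea, the sketch below builds a wavelet packet decomposition with PyWavelets, keeps only the terminal subbands in which a target template carries more energy than a noise estimate, and reconstructs the filtered signal for a simple energy detector. The band-selection rule, the db4 wavelet, the decomposition depth, and the detector threshold are illustrative assumptions, not the paper's exact filter design.

```python
# Minimal sketch of a target-guided wavelet-packet subband filter.
# Requires PyWavelets (pip install PyWavelets). The keep-rule below
# (pass a band when the target template out-energizes the noise
# estimate) is an illustrative stand-in for the paper's filter.
import numpy as np
import pywt

def subband_energies(x, wavelet="db4", level=4):
    """Energy of each terminal subband, ordered by frequency."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(n.data ** 2)
                     for n in wp.get_level(level, order="freq")])

def target_guided_filter(x, target_template, noise_estimate,
                         wavelet="db4", level=4):
    """Zero out the subbands where the noise dominates the target."""
    keep = (subband_energies(target_template, wavelet, level)
            > subband_energies(noise_estimate, wavelet, level))
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    for node, k in zip(wp.get_level(level, order="freq"), keep):
        if not k:
            node.data = np.zeros_like(node.data)  # suppress this band
    return wp.reconstruct(update=False)

def energy_detector(y, frame_len=1024, threshold=1e-3):
    """Flag frames of the filtered signal whose mean energy is high."""
    n = len(y) // frame_len * frame_len
    frames = np.asarray(y[:n]).reshape(-1, frame_len)
    return (frames ** 2).mean(axis=1) > threshold
```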

    A joint separation-classification model for sound event detection of weakly labelled data

    Source separation (SS) aims to separate individual sources from an audio recording, and sound event detection (SED) aims to detect sound events in an audio recording. We propose a joint separation-classification (JSC) model trained only on weakly labelled audio data, that is, data for which only the tags of a recording are known but the times of the events are not. First, we propose a separation mapping from the time-frequency (T-F) representation of an audio clip to the T-F segmentation masks of the audio events. Second, a classification mapping is built from each T-F segmentation mask to the presence probability of the corresponding audio event. In the source separation stage, the sources of the audio events and the times of the sound events can be obtained from the T-F segmentation masks. The proposed method achieves an equal error rate (EER) of 0.14 in SED, outperforming a deep neural network baseline at 0.29. A source separation SDR of 8.08 dB is obtained by using global weighted rank pooling (GWRP) as the probability mapping, outperforming the global max pooling (GMP) based mapping, which gives an SDR of 0.03 dB. The source code of our work is published.
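    The pooling step that maps a T-F mask to a clip-level presence probability can be sketched in a few lines. Global weighted rank pooling sorts the mask values in descending order and averages them under geometrically decaying weights; the decay factor r interpolates between max pooling (r -> 0) and average pooling (r = 1), and the value used below is only an assumption.

```python
# Sketch of global weighted rank pooling (GWRP), the probability
# mapping reported to outperform global max pooling (GMP) here.
import numpy as np

def gwrp(mask, r=0.998):
    """Pool a T-F segmentation mask into one presence probability."""
    p = np.sort(np.asarray(mask).ravel())[::-1]   # descending values
    w = r ** np.arange(p.size)                    # geometric rank weights
    return float(np.dot(w, p) / w.sum())

def gmp(mask):
    """Global max pooling baseline for comparison."""
    return float(np.max(mask))
```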

    A Hardware Based Audio Event Detection System

    Audio event detection and analysis is an important tool in many fields, from entertainment to security. Recognition technologies are used daily for parsing voice commands, tagging songs, and detecting crimes or other undesirable events in real time. The system described in this work is a hardware-based audio detection system implemented on an FPGA. It detects and characterizes gunshots and other events, such as breaking glass, by comparing a recorded audio sample against more than 20 stored fingerprints in real time. Additionally, it can record flagged events and supports integration with mesh networks to send alerts.
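    The matching step can be modelled in software as a nearest-fingerprint search, even though the actual system performs the comparison in FPGA logic. The sketch below scores a spectral fingerprint of the incoming sample against a bank of stored fingerprints by normalized correlation; the fingerprint definition and the similarity threshold are hypothetical placeholders, not the implemented design.

```python
# Software model of the fingerprint-matching step; the real system
# runs this comparison in hardware. Fingerprint format and threshold
# are hypothetical.
import numpy as np

def spectral_fingerprint(x, n_fft=512):
    """Compact log-magnitude spectrum used as an event fingerprint."""
    return np.log1p(np.abs(np.fft.rfft(x, n=n_fft)))

def match_event(sample, stored, threshold=0.9):
    """Best-matching stored fingerprint by normalized correlation."""
    f = spectral_fingerprint(sample)
    f = (f - f.mean()) / (f.std() + 1e-12)
    best_i, best = None, -np.inf
    for i, g in enumerate(stored):
        g = (g - g.mean()) / (g.std() + 1e-12)
        score = float(np.dot(f, g)) / f.size      # correlation in [-1, 1]
        if score > best:
            best_i, best = i, score
    return (best_i, best) if best >= threshold else (None, best)
```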

    An ensemble of rejecting classifiers for anomaly detection of audio events

    Audio analytic systems are receiving increasing interest in the scientific community, not only as stand-alone systems for the automatic detection of abnormal events from the audio track, but also in conjunction with video analytics tools to strengthen the evidence for anomaly detection. In this paper we present an automatic recognizer of a set of abnormal audio events that works by extracting suitable features from the signals captured by microphones installed in a surveilled area and classifying them with two classifiers that operate at different time resolutions. An original aspect of the proposed system is the estimation of the reliability of each response of the individual classifiers: each classifier rejects the samples whose overall reliability falls below a threshold. This approach allows our system to combine only reliable decisions, increasing the overall performance of the method. The system has been tested on a large dataset of samples acquired from real-world scenarios; the audio classes of interest are gunshot, scream, and glass breaking, in addition to the background sounds. The preliminary results encourage further research in this direction.
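    The reject-and-combine rule can be summarized as: a classifier's decision counts only if its estimated reliability clears a threshold, and the surviving decisions are then fused. A minimal sketch follows; the reliability estimators themselves, which are the paper's original contribution, are abstracted away as precomputed scores, and the majority-vote fusion and threshold value are assumptions.

```python
# Sketch of the reject-option ensemble. Reliabilities are assumed to
# be computed upstream, one per classifier response.
from collections import Counter

def combine_with_reject(decisions, reliabilities, threshold=0.7):
    """Fuse only the decisions whose reliability clears the threshold.

    decisions     -- class labels, one per classifier
    reliabilities -- matching reliability scores in [0, 1]
    Returns the majority label, or None if every classifier rejected.
    """
    accepted = [d for d, r in zip(decisions, reliabilities) if r >= threshold]
    if not accepted:
        return None                     # all classifiers rejected the sample
    return Counter(accepted).most_common(1)[0][0]

# Example: the short-window classifier is unreliable here, so only the
# long-window classifier's "scream" decision is counted.
print(combine_with_reject(["gunshot", "scream"], [0.55, 0.91]))  # -> scream
```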

    GMM classification of environmental sounds for surveillance applications

    This thesis describes an audio event detection system that automatically classifies an impulsive audio event as a scream, gunshot, breaking glass, or barking dog, regardless of the background noise. The classification system uses four parallel Gaussian Mixture Model (GMM) classifiers, each of which decides whether the sound belongs to its class or is only noise. Each classifier is trained on different features, chosen from a set of 40 audio features. At the same time, the system can detect any kind of impulsive sound using only one feature, with very high precision. The classification system is implemented in the Network-Integrated Multimedia Middleware (NMM) for real-time processing and communication with other surveillance applications. In order to validate the proposed detection algorithm, we carried out extensive experiments (both off-line and in real time) on a hand-made set of sounds mixed with ambient noise at different signal-to-noise ratios (SNRs). Our results demonstrate that the system guarantees 70% accuracy and 90% precision at 0 dB SNR, starting from 100% accuracy and precision on clean sounds at 20 dB SNR.
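    The class-versus-noise decision of each parallel detector can be sketched with scikit-learn's GaussianMixture: one GMM is fit to the class examples and one to noise, and a frame is assigned to a class whenever the class model out-scores the noise model. Feature extraction and the per-class subsets drawn from the 40 candidate features are omitted; the component count and placeholder names are assumptions.

```python
# Sketch of the four parallel class-vs-noise GMM detectors.
# X_class / X_noise are placeholder feature matrices
# (n_frames x n_features), one pair per class.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_detector(X_class, X_noise, n_components=8, seed=0):
    """One binary detector: a GMM of the class versus a GMM of noise."""
    g_class = GaussianMixture(n_components, random_state=seed).fit(X_class)
    g_noise = GaussianMixture(n_components, random_state=seed).fit(X_noise)
    return g_class, g_noise

def classify(frame_features, detectors):
    """Run the parallel detectors; return every class that beats noise."""
    x = np.asarray(frame_features).reshape(1, -1)
    return [name for name, (g_class, g_noise) in detectors.items()
            if g_class.score(x) > g_noise.score(x)]

# detectors = {"scream": train_detector(X_scream, X_noise), ...}
```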