Adaptive Multi-Class Audio Classification in Noisy In-Vehicle Environment
With the ever-increasing number and complexity of car-mounted electronic devices, audio classification is increasingly important for the automotive industry as a fundamental tool for human-device interaction. Existing approaches to audio classification, however, fall short because the unique and dynamic audio characteristics of in-vehicle environments are not appropriately taken into account. In this paper, we develop an audio classification system that classifies an audio stream into music, speech, speech+music, and noise, adaptively depending on the driving environment, including highway, local road, crowded city, and stopped vehicle. More than 420 minutes of audio data covering various genres of music, speech, speech+music, and noise are collected from diverse driving environments. The results demonstrate that, in our experimental settings, the proposed approach improves the average classification accuracy by up to 166% for speech and 64% for speech+music compared with a non-adaptive approach.
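A minimal sketch of the environment-adaptive idea described above (not the authors' implementation): train one classifier per driving environment on standard audio features and dispatch each incoming frame to the model matching the current environment. The MFCC features via librosa and the per-environment random-forest models are illustrative assumptions.

```python
# Hypothetical sketch: per-environment audio classifiers dispatched at run time.
# Feature and model choices are assumptions, not the paper's configuration.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

ENVIRONMENTS = ["highway", "local_road", "crowded_city", "stopped"]

def frame_features(y, sr):
    """Mean MFCC vector for one audio frame (illustrative feature set)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def train_adaptive_models(data):
    """data: {environment: (waveforms, sr, labels)} -> one classifier per environment."""
    models = {}
    for env, (waves, sr, labels) in data.items():
        X = np.stack([frame_features(y, sr) for y in waves])
        models[env] = RandomForestClassifier(n_estimators=200).fit(X, labels)
    return models

def classify(models, env, y, sr):
    """Pick the classifier that matches the current driving environment."""
    x = frame_features(y, sr).reshape(1, -1)
    return models[env].predict(x)[0]   # one of: music, speech, speech+music, noise
```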
Multimodal person recognition for human-vehicle interaction
Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology prevents such systems from operating satisfactorily under adverse conditions. A proposed framework for person recognition successfully combines different biometric modalities, as borne out in two case studies.
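As an illustration only (the abstract does not specify how the modalities are combined), a common way to combine biometric modalities is weighted score-level fusion; the modality names and weights below are assumptions.

```python
# Hypothetical weighted score-level fusion of per-modality match scores.
import numpy as np

def fuse_scores(scores, weights):
    """scores: {modality: match scores per enrolled person (higher = better)}.
    Returns the index of the best-matching identity after min-max normalisation."""
    fused = None
    for modality, s in scores.items():
        s = np.asarray(s, dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-9)  # normalise each modality to [0, 1]
        fused = weights[modality] * s if fused is None else fused + weights[modality] * s
    return int(np.argmax(fused))

# Example: face and voice scores for three enrolled drivers.
scores = {"face": [0.2, 0.7, 0.4], "voice": [0.1, 0.9, 0.3]}
print(fuse_scores(scores, {"face": 0.6, "voice": 0.4}))  # -> 1
```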
Listening for Sirens: Locating and Classifying Acoustic Alarms in City Scenes
This paper addresses alerting acoustic event detection and sound source localisation in an urban scenario. Specifically, we are interested in spotting the presence of horns and sirens of emergency vehicles. To obtain a reliable system able to operate robustly despite the presence of traffic noise, which can be copious, unstructured, and unpredictable, we propose to treat the spectrograms of incoming stereo signals as images and to apply semantic segmentation, based on a U-Net architecture, to extract the target sound from the background noise. In a multi-task learning scheme, together with signal denoising, we perform acoustic event classification to identify the nature of the alerting sound. Lastly, we use the denoised signals to localise the acoustic source on the horizon plane by regressing the direction of arrival of the sound through a CNN architecture. Our experimental evaluation shows an average classification rate of 94% and a median absolute localisation error of 7.5° when operating on audio frames of 0.5 s, and of 2.5° when operating on frames of 2.5 s. The system offers excellent performance in particularly challenging scenarios where the noise level is remarkably high.
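The "spectrogram as image" pipeline can be sketched with a toy encoder-decoder that predicts a soft denoising mask plus a classification head; this is a heavily simplified stand-in for the U-Net described above, and every layer size and class count here is an assumption.

```python
# Hypothetical minimal sketch: joint spectrogram segmentation (denoising mask)
# and alerting-sound classification. Not the authors' U-Net configuration.
import torch
import torch.nn as nn

class TinySegmenterClassifier(nn.Module):
    def __init__(self, n_classes=3):                      # e.g. horn, siren, background
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),   # stereo -> 2 channels
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # soft mask
        )
        self.cls = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, spec):                               # spec: (batch, 2, freq, time)
        h = self.enc(spec)
        return self.dec(h), self.cls(h)                    # (denoising mask, event logits)

mask, logits = TinySegmenterClassifier()(torch.randn(1, 2, 128, 64))
print(mask.shape, logits.shape)  # torch.Size([1, 1, 128, 64]) torch.Size([1, 3])
```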
Comparing CNN and Human Crafted Features for Human Activity Recognition
Deep learning techniques such as Convolutional Neural Networks (CNNs) have shown good results in activity recognition. One of the advantages of these methods is their ability to generate features automatically, which greatly simplifies feature extraction, a task that usually requires domain-specific knowledge, especially when using big data, where data-driven approaches can lead to anti-patterns. Despite this advantage, very little work has been undertaken on analysing the quality of the extracted features, and more specifically on how model architecture and parameters affect the ability of those features to separate activity classes in the final feature space. This work focuses on identifying the optimal parameters for the recognition of simple activities, applying this approach to signals from both inertial and audio sensors. The paper provides the following contributions: (i) a comparison of automatically extracted CNN features with gold-standard Human Crafted Features (HCF), and (ii) a comprehensive analysis of how architecture and model parameters affect the separation of target classes in the feature space. Results are evaluated using publicly available datasets. In particular, we achieved a 93.38% F-score on the UCI-HAR dataset, using 1D CNNs with 3 convolutional layers and a kernel size of 32, and a 90.5% F-score on the DCASE 2017 development dataset, simplified to three classes (indoor, outdoor, and vehicle), using 2D CNNs with 2 convolutional layers and a 2x2 kernel size.
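A minimal sketch of the reported UCI-HAR configuration (three 1D convolutional layers with kernel size 32); the channel counts and the 9-channel, 128-sample window shape are assumptions based on the public dataset, not the paper's exact architecture.

```python
# Hypothetical 1D CNN for activity recognition on windowed inertial signals.
import torch
import torch.nn as nn

class HAR1DCNN(nn.Module):
    def __init__(self, in_channels=9, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=32, padding=16), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=32, padding=16), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=32, padding=16), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # global pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                                 # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

logits = HAR1DCNN()(torch.randn(4, 9, 128))
print(logits.shape)  # torch.Size([4, 6])
```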
BaNa: a noise resilient fundamental frequency detection algorithm for speech and music
Fundamental frequency (F0) is one of the essential features in many acoustics-related applications. Although numerous F0 detection algorithms have been developed, the detection accuracy in noisy environments still needs improvement. We present a hybrid noise-resilient F0 detection algorithm named BaNa that combines the approaches of harmonic ratios and cepstrum analysis. A Viterbi algorithm with a cost function is used to identify the F0 value among several F0 candidates. Speech and music databases with eight different types of additive noise are used to evaluate the performance of the BaNa algorithm and several classic and state-of-the-art F0 detection algorithms. Results show that for almost all types of noise and signal-to-noise ratio (SNR) values investigated, BaNa achieves the lowest Gross Pitch Error (GPE) rate among all the algorithms. Moreover, for the 0 dB SNR scenarios, the BaNa algorithm is shown to achieve a 20% to 35% GPE rate for speech and a 12% to 39% GPE rate for music. We also describe implementation issues that must be addressed to run the BaNa algorithm as a real-time application on a smartphone platform.
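As context for the cepstrum component mentioned above (a sketch only, not the BaNa algorithm itself), a basic cepstral F0 estimate picks the quefrency of the dominant peak within a plausible pitch range; BaNa additionally combines harmonic-ratio candidates and Viterbi smoothing, which are omitted here.

```python
# Hypothetical cepstrum-based F0 estimate for a single frame.
import numpy as np

def cepstral_f0(frame, sr, fmin=50.0, fmax=600.0):
    """Return a crude F0 estimate (Hz) from the real cepstrum of one windowed frame."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-10))
    q_lo, q_hi = int(sr / fmax), int(sr / fmin)          # quefrency search range in samples
    peak = q_lo + np.argmax(cepstrum[q_lo:q_hi])
    return sr / peak

sr = 16000
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(round(cepstral_f0(frame, sr), 1))  # approximately 220 Hz
```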
EMD-based filtering (EMDF) of low-frequency noise for speech enhancement
An Empirical Mode Decomposition-based filtering (EMDF) approach is presented as a post-processing stage for speech enhancement. This method is particularly effective in low-frequency noise environments. Unlike previous EMD-based denoising methods, this approach does not assume that the contaminating noise signal is fractional Gaussian noise. An adaptive method is developed to select the IMF index for separating the noise components from the speech based on second-order IMF statistics. The low-frequency noise components are then separated by a partial reconstruction from the IMFs. It is shown that the proposed EMDF technique is able to suppress residual noise from speech signals that were enhanced by the conventional optimally-modified log-spectral amplitude approach, which uses a minimum-statistics-based noise estimate. A comparative performance study is included that demonstrates the effectiveness of the EMDF system in various noise environments, such as car interior noise, military vehicle noise, and babble noise. In particular, improvements of up to 10 dB are obtained in car noise environments. Listening tests were performed that confirm the results.
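A minimal sketch of the partial-reconstruction idea (not the paper's adaptive index-selection method): decompose the noisy signal into IMFs and rebuild it from only the higher-frequency IMFs, discarding the low-frequency components that carry the rumble-like noise. The PyEMD package and the fixed cut-off index are assumptions.

```python
# Hypothetical EMD-based low-frequency noise suppression via partial reconstruction.
# Assumes the PyEMD package (pip install EMD-signal); the paper instead selects the
# cut-off IMF index adaptively from second-order IMF statistics.
import numpy as np
from PyEMD import EMD

def emdf_denoise(noisy, keep_first_k=4):
    """Keep only the first k (highest-frequency) IMFs; drop slow IMFs and the residue."""
    imfs = EMD().emd(noisy)                 # shape: (n_imfs, n_samples), fastest IMFs first
    k = min(keep_first_k, len(imfs))
    return imfs[:k].sum(axis=0)

sr = 8000
t = np.arange(2 * sr) / sr
speech_like = np.sin(2 * np.pi * 300 * t)            # stand-in for speech content
rumble = 0.8 * np.sin(2 * np.pi * 15 * t)            # low-frequency car-interior noise
enhanced = emdf_denoise(speech_like + rumble)
print(enhanced.shape)
```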