Multimodal Polynomial Fusion for Detecting Driver Distraction
Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone.
Although there has been a considerable amount of research on modeling the
distracted behavior of drivers under various conditions, accurate automatic
detection using multiple modalities, and in particular the contribution of the
speech modality to detection accuracy, has received little attention. This
paper introduces a new multimodal dataset for distracted driving behavior and
discusses automatic distraction detection using features from three modalities:
facial expression, speech and car signals. Detailed multimodal feature analysis
shows that adding more modalities monotonically increases the predictive
accuracy of the model. Finally, a simple and effective multimodal fusion
technique using a polynomial fusion layer shows superior distraction detection
results compared to the baseline SVM and neural network models.
Comment: INTERSPEECH 201
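The polynomial fusion idea can be illustrated with a short sketch. The layer below is a hypothetical second-order formulation (projected unimodal terms plus pairwise element-wise products feeding a classifier); it is not the paper's exact layer, and the feature dimensions and class count are placeholders.

```python
import torch
import torch.nn as nn

class PolynomialFusion(nn.Module):
    def __init__(self, dims, hidden=64, n_classes=2):
        super().__init__()
        # one linear projection per modality (e.g. face, speech, car signals)
        self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, feats):
        z = [p(x) for p, x in zip(self.proj, feats)]
        fused = sum(z)                        # first-order (unimodal) terms
        for i in range(len(z)):
            for j in range(i + 1, len(z)):
                fused = fused + z[i] * z[j]   # second-order (pairwise) terms
        return self.classifier(torch.relu(fused))

# toy usage: facial (128-d), speech (40-d) and car-signal (10-d) feature vectors
model = PolynomialFusion([128, 40, 10])
logits = model([torch.randn(4, 128), torch.randn(4, 40), torch.randn(4, 10)])
```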
NAS-VAD: Neural Architecture Search for Voice Activity Detection
Various neural network-based approaches have been proposed for more robust
and accurate voice activity detection (VAD). Manual design of such neural
architectures is an error-prone and time-consuming process, which prompted the
development of neural architecture search (NAS) methods that automatically design and
optimize network architectures. While NAS has been successfully applied to
improve performance in a variety of tasks, it has not yet been exploited in the
VAD domain. In this paper, we present the first work that utilizes NAS
approaches on the VAD task. To effectively search architectures for the VAD
task, we propose a modified macro structure and a new search space with a much
broader range of operations that includes attention operations. The results
show that the network structures found by the proposed NAS framework outperform
previous manually designed state-of-the-art VAD models on various noise-added
and real-world-recorded datasets. We also show that the architectures searched
on a particular dataset achieve improved generalization performance on unseen
audio datasets. Our code and models are available at
https://github.com/daniel03c1/NAS_VAD.
Comment: Submitted to Interspeech 202
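As a rough illustration of what a VAD search space with attention operations might look like, the sketch below samples a small stack of candidate operations at random; the operation set, macro structure, and sizes are assumptions for illustration only and do not reproduce the NAS-VAD search space or its search algorithm.

```python
import random
import torch
import torch.nn as nn

# candidate operations, including an attention operation
OPS = {
    "conv3": lambda c: nn.Conv1d(c, c, kernel_size=3, padding=1),
    "conv5": lambda c: nn.Conv1d(c, c, kernel_size=5, padding=2),
    "attn": lambda c: nn.MultiheadAttention(c, num_heads=4, batch_first=True),
    "identity": lambda c: nn.Identity(),
}

class SearchedBlock(nn.Module):
    """A block assembled from a sampled sequence of operations."""
    def __init__(self, channels, op_names):
        super().__init__()
        self.op_names = op_names
        self.ops = nn.ModuleList(OPS[name](channels) for name in op_names)

    def forward(self, x):                        # x: (batch, time, channels)
        for name, op in zip(self.op_names, self.ops):
            if name == "attn":
                x, _ = op(x, x, x)               # self-attention over time
            else:
                x = op(x.transpose(1, 2)).transpose(1, 2)
        return x

def sample_architecture(depth=3):
    """Random-search stand-in for a NAS controller."""
    return [random.choice(list(OPS)) for _ in range(depth)]

arch = sample_architecture()
block = SearchedBlock(channels=64, op_names=arch)
out = block(torch.randn(2, 100, 64))             # e.g. 100 feature frames
print(arch, out.shape)                           # -> (2, 100, 64)
```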
Efficient speech detection in environmental audio using acoustic recognition and knowledge distillation
The ongoing biodiversity crisis, driven by factors such as land-use change
and global warming, emphasizes the need for effective ecological monitoring
methods. Acoustic monitoring of biodiversity has emerged as an important
monitoring tool. Detecting human voices in soundscape monitoring projects is
useful both for analysing human disturbance and for privacy filtering. Despite
significant strides in deep learning in recent years, the deployment of large
neural networks on compact devices poses challenges due to memory and latency
constraints. Our approach focuses on leveraging knowledge distillation
techniques to design efficient, lightweight student models for speech detection
in bioacoustics. In particular, we employed the MobileNetV3-Small-Pi model to
create compact yet effective student architectures to compare against the
larger EcoVAD teacher model, a well-regarded voice detection architecture in
eco-acoustic monitoring. The comparative analysis examined several
configurations of the MobileNetV3-Small-Pi-derived student models to identify
the best-performing setup. Additionally, different distillation techniques were
thoroughly evaluated to determine the most effective method for model
selection. Our findings revealed that the distilled models performed comparably
to the EcoVAD teacher model, indicating a promising approach to overcoming
computational barriers to real-time ecological monitoring.
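For reference, a common form of knowledge-distillation objective (Hinton-style soft targets blended with the hard-label loss) looks like the sketch below; the temperature and weighting values are placeholders, and this is not necessarily the exact distillation variant used in the study.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # soft-target term: match the teacher's temperature-softened distribution
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # hard-label term: ordinary cross-entropy against the ground truth
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# toy usage with 2-class (speech / no speech) logits for a batch of 8 clips
s, t = torch.randn(8, 2), torch.randn(8, 2)
y = torch.randint(0, 2, (8,))
loss = distillation_loss(s, t, y)
```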
Exploring convolutional, recurrent, and hybrid deep neural networks for speech and music detection in a large audio dataset
Audio signals represent a wide diversity of acoustic events, from background environmental noise to spoken
communication. Machine learning models such as neural networks have already been proposed for audio signal
modeling, where recurrent structures can take advantage of temporal dependencies. This work aims to study the
implementation of several neural network-based systems for speech and music event detection over a collection of
77,937 10-second audio segments (216 h), selected from the Google AudioSet dataset. These segments belong to
YouTube videos and have been represented as mel-spectrograms. We propose and compare two approaches. The
first one is the training of two different neural networks, one for speech detection and another for music detection.
The second approach consists of training a single neural network to tackle both tasks at the same time. The studied
architectures include fully connected, convolutional and LSTM (long short-term memory) recurrent networks.
Comparative results are provided in terms of classification performance and model complexity. We would like to
highlight the performance of convolutional architectures, especially in combination with an LSTM stage. The hybrid
convolutional-LSTM models achieve the best overall results (85% accuracy) in the three proposed tasks. Furthermore,
a distractor analysis of the results has been carried out in order to identify which events in the ontology are the most
harmful for the performance of the models, showing some difficult scenarios for the detection of music and speech.
This work has been supported by the project “DSSL: Redes Profundas y Modelos
de Subespacios para Deteccion y Seguimiento de Locutor, Idioma y
Enfermedades Degenerativas a partir de la Voz” (TEC2015-68172-C2-1-P),
funded by the Ministry of Economy and Competitiveness of Spain and FEDE
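A minimal sketch of a hybrid convolutional-LSTM detector over mel-spectrogram input is shown below; the layer sizes, pooling scheme, and clip length are placeholder assumptions rather than the configurations evaluated in the paper.

```python
import torch
import torch.nn as nn

class ConvLSTMDetector(nn.Module):
    def __init__(self, n_mels=64, n_classes=2):
        super().__init__()
        # 2-D convolutions over (mel, time), pooling only along the mel axis
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(input_size=32 * (n_mels // 4),
                            hidden_size=64, batch_first=True)
        self.out = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, 1, n_mels, frames)
        h = self.conv(x)                   # (batch, 32, n_mels // 4, frames)
        b, c, m, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * m)   # frame-wise features
        _, (hn, _) = self.lstm(h)
        return self.out(hn[-1])            # clip-level speech/music logits

model = ConvLSTMDetector()
logits = model(torch.randn(8, 1, 64, 500))   # 8 mel-spectrogram clips
```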