Robust Speech Detection for Noisy Environments
This paper presents a robust voice activity detector (VAD) based on hidden Markov models (HMMs) to improve speech recognition systems in stationary and non-stationary noise environments: inside motor vehicles (such as cars or planes) or inside buildings close to high-traffic areas (such as a control tower for air traffic control (ATC)). In these environments there is a high stationary noise level caused by vehicle motors, and additionally there may be people speaking at a certain distance from the main speaker, producing non-stationary noise. The VAD presented in this paper is characterized by a new front-end and a noise level adaptation process that significantly increases the VAD's robustness across different signal-to-noise ratios (SNRs). The feature vector used by the VAD includes the most relevant Mel-frequency cepstral coefficients (MFCCs), normalized log energy and delta log energy. The proposed VAD has been evaluated and compared to other well-known VADs using three databases containing different noise conditions: speech in clean environments (SNRs greater than 20 dB), speech recorded in stationary noise environments (inside or close to motor vehicles), and finally, speech in non-stationary environments (including noise from bars, television and far-field speakers). In all three cases, the detection error obtained with the proposed VAD is the lowest across all SNRs compared to Acero's VAD (the reference for this work) and other well-known VADs such as AMR, AURORA and G.729 Annex B.
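The energy part of such a feature vector can be sketched as follows. This is an illustrative reconstruction, not the paper's exact front-end: the frame length, hop size and max-normalization scheme are assumptions, and the MFCC components are omitted.

```python
import numpy as np

def log_energy_features(signal, frame_len=400, hop=160):
    """Per-frame log energy, normalized to the utterance maximum,
    plus a simple first-order delta (illustrative sketch only)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    log_e = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    log_e -= log_e.max()            # normalize: loudest frame maps to 0
    delta = np.gradient(log_e)      # delta log energy
    return log_e, delta
```

In a VAD of this kind, these two energy streams would be concatenated with the selected MFCCs to form the per-frame observation vector for the HMMs.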
Speech Denoising Using Non-Negative Matrix Factorization with Kullback-Leibler Divergence and Sparseness Constraints
Proceedings of: IberSPEECH 2012 Conference, Madrid, Spain, November 21-23, 2012.
A speech denoising method based on Non-Negative Matrix Factorization (NMF) is presented in this paper. With respect to previous related work, this paper makes two contributions. First, our method does not assume a priori knowledge about the nature of the noise. Second, it combines the use of the Kullback-Leibler divergence with sparseness constraints on the activation matrix, improving on similar techniques that minimize the Euclidean distance and/or do not consider any sparsification. We evaluate the proposed method for both speech enhancement and automatic speech recognition tasks, and compare it to conventional spectral subtraction, showing improvements in speech quality and recognition accuracy, respectively, under different noisy conditions.
This work has been partially supported by the Spanish Government grants TSI-020110-2009-103 and TEC2011-26807.
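A minimal sketch of the family of methods the paper builds on, KL-divergence NMF with an L1 sparseness penalty on the activation matrix, could look like this. The rank, penalty weight and multiplicative update scheme are generic textbook choices, not the paper's exact algorithm.

```python
import numpy as np

def nmf_kl_sparse(V, rank=8, n_iter=100, lam=0.1, seed=0):
    """Factorize a non-negative matrix V ~ W @ H by minimizing the
    KL divergence plus lam * ||H||_1, using multiplicative updates.
    Generic sketch of the technique, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-10
    for _ in range(n_iter):
        WH = W @ H + eps
        # KL update for H; the +lam term in the denominator is the
        # L1 sparseness penalty on the activations
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + lam + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```

For denoising, W would typically hold spectral bases (e.g. pre-trained speech bases plus noise bases) and H their time-varying activations, with the enhanced spectrogram rebuilt from the speech components only.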
Combining pulse-based features for rejecting far-field speech in a HMM-based Voice Activity Detector. Computers & Electrical Engineering (CAEE).
Nowadays, several computational techniques for speech recognition have been proposed. These techniques represent an important improvement in real-time applications in which a speaker interacts with speech recognition systems. Although researchers have proposed many methods, none of them solves the high false-alarm problem that arises when far-field speakers interfere in a human-machine conversation. This paper presents a two-class (speech and non-speech) decision-tree-based approach that combines new speech pulse features in a VAD (Voice Activity Detector) to reject far-field speech in speech recognition systems. This decision tree is applied over the speech pulses obtained by a baseline VAD composed of a frame feature extractor, an HMM-based (Hidden Markov Model) segmentation module and a pulse detector. The paper also presents a detailed analysis of a large number of features for discriminating between close-talk and far-field speech. The detection error obtained with the proposed VAD is the lowest compared to other well-known VADs.
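To make the decision-tree idea concrete, here is a toy two-level tree over hypothetical pulse-level features. The feature names and thresholds below are invented for illustration; the paper's tree is learned from data and uses its own feature set.

```python
def classify_pulse(mean_energy_db, duration_s,
                   energy_thr=-30.0, dur_thr=0.2):
    """Toy two-class decision tree over pulse-level features.
    Both features and both thresholds are hypothetical examples."""
    if mean_energy_db < energy_thr:
        # weak pulses are treated as far-field interference
        return "far-field"
    if duration_s < dur_thr:
        # very short loud bursts are unlikely to be close-talk speech
        return "non-speech"
    return "close-talk speech"
```

The real system applies a tree like this to each pulse emitted by the baseline HMM segmentation, so a whole pulse (rather than individual frames) is accepted or rejected.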
Local binary patterns for 1-D signal processing
Local Binary Patterns (LBP) have been used in 2-D image processing for applications such as texture segmentation and feature detection. In this paper a new 1-D LBP signal processing method is presented. Speech systems such as hearing aids require fast and computationally inexpensive signal processing. The practical use of LBP-based speech processing is demonstrated on two signal processing problems: (i) signal segmentation and (ii) voice activity detection (VAD). Both applications use features extracted from the 1-D LBP. The proposed VAD algorithm demonstrates the simplicity of 1-D LBP processing with low computational complexity. It is also shown that distinct LBP features can be obtained to identify the voiced and unvoiced components of the speech signal.
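A common formulation of the 1-D LBP code, comparing each sample against its neighbours and packing the comparisons into bits, can be sketched as follows; the neighbourhood layout and bit ordering are one of several variants found in the literature.

```python
def lbp_1d(x, radius=4):
    """1-D local binary pattern: for each sample, compare it with
    `radius` neighbours on each side and pack the results into an
    integer code. Only integer comparisons and bit shifts are used,
    which is what keeps the method computationally cheap."""
    codes = []
    for i in range(radius, len(x) - radius):
        neighbours = (list(range(i - radius, i)) +
                      list(range(i + 1, i + radius + 1)))
        code = 0
        for bit, j in enumerate(neighbours):
            if x[j] >= x[i]:
                code |= 1 << bit
        codes.append(code)
    return codes
```

Histograms of these codes over short windows then serve as the features for segmentation or voice activity decisions.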
Speaker Diarization
In this thesis we document the development of a system to perform Speaker Diarization, that is, automatically trying to identify who spoke when in a conversation or any other piece of speech with several speakers. The intended usage is to be able to provide this functionality for broadcast news, with data provided by the Finnish broadcasting company YLE under the Next Media programme, financed by TEKES, the Finnish Funding Agency for Technology and Innovation.
Another goal is to produce a system compatible with existing Aalto University speech recognition software, in order to open the door to future improvements and research.
The produced system, a new implementation of established methods with the parameters we determined to be best for our use case, obtains performance very close to current state-of-the-art systems, while remaining compatible with the existing speech recognition software of Aalto University and having reasonable speed. Further improvements to the system are being made as we speak, opening the door to more research options.
Latent Class Model with Application to Speaker Diarization
In this paper, we apply a latent class model (LCM) to the task of speaker
diarization. LCM is similar to Patrick Kenny's variational Bayes (VB) method in
that it uses soft information and avoids premature hard decisions in its
iterations. In contrast to the VB method, which is based on a generative model,
LCM provides a framework allowing both generative and discriminative models.
The discriminative property is realized through the use of i-vector (Ivec),
probabilistic linear discriminative analysis (PLDA), and a support vector
machine (SVM) in this work. Systems denoted as LCM-Ivec-PLDA, LCM-Ivec-SVM, and
LCM-Ivec-Hybrid are introduced. In addition, three further improvements are
applied to enhance its performance. 1) Adding neighbor windows to extract more
speaker information for each short segment. 2) Using a hidden Markov model to
avoid frequent speaker change points. 3) Using agglomerative hierarchical
clustering for initialization, providing hard and soft priors, in order to
overcome the problem of initial sensitivity. Experiments on the National
Institute of Standards and Technology Rich Transcription 2009 speaker
diarization database, under the condition of a single distant microphone, show
that the diarization error rate (DER) of the proposed methods has substantial
relative improvements compared with mainstream systems. Compared to the VB
method, the relative improvements of LCM-Ivec-PLDA, LCM-Ivec-SVM, and
LCM-Ivec-Hybrid systems are 23.5%, 27.1%, and 43.0%, respectively. Experiments
on our collected database, CALLHOME97, CALLHOME00 and SRE08 short2-summed trial
conditions also show that the proposed LCM-Ivec-Hybrid system has the best
overall performance.
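The diarization error rate (DER) reported in such evaluations is, in its standard NIST form, the sum of false-alarm, missed-speech and speaker-confusion time divided by the total scored speech time:

```python
def diarization_error_rate(false_alarm, missed, confusion, scored_speech):
    """Standard DER definition used in the NIST Rich Transcription
    evaluations: all durations in the same time unit (e.g. seconds)."""
    return (false_alarm + missed + confusion) / scored_speech
```

The relative improvements quoted above (e.g. 43.0% for LCM-Ivec-Hybrid) are computed on this quantity: (DER_baseline - DER_proposed) / DER_baseline.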
Bayesian and echoic log-surprise for auditory saliency detection
International Mention in the doctoral degree.
Attention is defined as the mechanism that allows the brain to categorize
and prioritize information acquired using our senses and act according to
the environmental context and the available mental resources. The attention
mechanism can be further subdivided into two types: top-down and bottom-up.
Top-down attention is goal or task-driven and implies that a participant
has some previous knowledge about the task that he or she is trying to solve.
Alternatively, bottom-up attention only depends on the perceived features
of the target object and its surroundings and is a very fast mechanism that
is believed to be crucial for human survival.
Bottom-up attention is commonly known as saliency or salience, and can
be defined as a property of the signals perceived by our senses that
makes them attentionally prominent for some reason.
This thesis is concerned with saliency detection in audio signals using
automatic algorithms. In recent years progress in the area of visual
saliency research has been remarkable, a topic where the goal consists
of detecting which objects or content in a visual scene are prominent
enough to capture the attention of a spectator. However, this progress
has not carried over to other modalities. This is the case for auditory
saliency, where there is still no consensus about how to measure the
saliency of an event, and consequently there are no specific labeled datasets
to compare new algorithms and proposals.
In this work two new auditory saliency detection algorithms are presented
and evaluated. For their evaluation, we make use of Acoustic Event
Detection/Classification datasets, whose labels include onset times among
other aspects. We use such datasets and labeling since there is psychological
evidence suggesting that human beings are quite sensitive to the spontaneous
appearance of acoustic objects. We use three datasets: DCASE 2016
(Task 2), MIVIA road audio events and UPC-TALP, totalling 3400 labeled
acoustic events. The algorithms we employ for benchmarking comprise the
saliency detection techniques designed by Kayser and Kalinli, a voice
activity detector, an energy thresholding method and four music
information retrieval onset detectors: NWPD, WPD, CD and SF.
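Assuming SF denotes the standard spectral-flux onset detection function from the music information retrieval literature, it can be sketched as the half-wave-rectified spectral increase between consecutive frames:

```python
import numpy as np

def spectral_flux(frames):
    """Onset detection function: for each pair of consecutive frames,
    sum the positive (rectified) increases of the magnitude spectrum.
    `frames` is a 2-D array, one windowed time frame per row."""
    mags = np.abs(np.fft.rfft(frames, axis=1))
    diff = np.diff(mags, axis=0)                  # frame-to-frame change
    return np.sum(np.maximum(diff, 0.0), axis=1)  # keep increases only
```

Peaks of this function are then picked as candidate onsets, which is what makes such detectors usable as saliency baselines for the onset-labeled datasets above.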
We put forward two auditory saliency algorithms: Bayesian Log-surprise
and Echoic Log-surprise. The former is an evolution of Bayesian Surprise,
a methodology that detects anomalous or salient events by computing the
Kullback-Leibler divergence between two consecutive temporal windows.
As the output surprise signal has some drawbacks that should be
overcome, we introduce improvements that led to the approach we named
Bayesian Log-surprise: an amplitude compression stage and the
pre-processing of the input signal with perceptual knowledge.
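The core surprise idea, a KL divergence between consecutive windows followed by amplitude compression, can be sketched as follows. The Gaussian window models and the exact log compression are illustrative assumptions, not the thesis's precise formulation.

```python
import numpy as np

def bayesian_log_surprise(x, win=256):
    """Fit a Gaussian to each pair of consecutive windows, take the
    KL divergence between them, and log-compress the result.
    Illustrative sketch of the surprise mechanism."""
    eps = 1e-10
    out = []
    for start in range(win, len(x) - win + 1, win):
        prev, cur = x[start - win:start], x[start:start + win]
        m1, s1 = prev.mean(), prev.std() + eps
        m2, s2 = cur.mean(), cur.std() + eps
        # closed-form KL divergence between two univariate Gaussians
        kl = np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5
        out.append(np.log(1 + kl))   # amplitude compression stage
    return np.array(out)
```

A sudden change in the signal statistics produces a large KL divergence, so the output peaks exactly where an unexpected acoustic event begins.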
The latter, named Echoic Log-surprise, fuses several Bayesian Log-surprise signals computed considering different memory lengths that represent different
temporal scales. The fusion process is performed using statistical
divergences, resulting in saliency signals with certain advantages such as a
significant reduction in the background noise level and a noticeable increase
in the detection scores.
Moreover, since the original Echoic Log-surprise presents certain
limitations, we propose a set of improvements: we test alternative
statistical divergences, we introduce a new fusion strategy, and we
replace the static thresholding mechanism used to decide whether the
final output signal is salient with a dynamic thresholding algorithm.
Results show that the most significant modification in terms of
performance is the latter: it reduces the dispersion observed in the
scores produced by the system and enables online operation.
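One generic form such a dynamic thresholding mechanism could take, given here as an assumption rather than the thesis's exact rule, is a causal median-plus-MAD threshold computed over a sliding window of the recent past:

```python
import numpy as np

def dynamic_threshold(saliency, win=16, k=3.0):
    """Mark sample t as salient when it exceeds the median plus k
    times the median absolute deviation (MAD) of the preceding
    window. Causal, so it can run online. Illustrative scheme only."""
    flags = np.zeros(len(saliency), dtype=bool)
    for t in range(win, len(saliency)):
        hist = saliency[t - win:t]
        med = np.median(hist)
        mad = np.median(np.abs(hist - med))
        flags[t] = saliency[t] > med + k * mad
    return flags
```

Because the threshold adapts to each file's own score distribution, a scheme like this reduces score dispersion across recordings, which matches the behaviour described above.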
Finally, our last analysis concerns the robustness of all the algorithms
presented in this thesis against environmental noise. We use noises of different
natures, from stationary noise to pre-recorded noises acquired in real
environments such as cafeterias, train stations, etc. The results suggest
that for different signal-to-noise ratios the most robust algorithm is Echoic
Log-surprise, since its detection capabilities are the least influenced by noise.
Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee president: Fernando Díaz de María. Secretary: Rubén Solera Ureña. Member: José Luis Pérez Córdob