Analysis of variability compensation in speaker recognition applied to short durations
This project studies, implements, and evaluates automatic speaker recognition systems in
the presence of short-duration utterances. To this end, several state-of-the-art speaker
recognition techniques, as well as their adaptation to short utterances, have been used
and compared.
As a starting point of the project, a study was conducted of the different techniques that
have defined the state of the art, highlighting those that achieved notable improvements
in the speaker recognition evaluations organized by the National Institute of Standards
and Technology (NIST) over the last decade.
Once the state of the art was understood from a theoretical point of view, the next step
was to define the task on which the different techniques would be evaluated. Historically,
the core task in NIST evaluations consists of training the speaker model with one
conversation, of approximately 150 seconds, and verifying the user against an utterance of
the same duration. In the task developed in this project we instead have utterances of
much more limited duration, approximately 10 seconds, drawn from NIST speaker
recognition evaluations.
The experimental work was carried out in two phases. The goal of the first phase was to
compare and analyse the differences between two state-of-the-art techniques based on
Factor Analysis (FA), namely Total Variability (TV) and Probabilistic Linear Discriminant
Analysis (PLDA), mainly evaluating their performance on our experimental setup, which
follows the protocol of the NIST evaluations. In the second phase, the parameters of these
techniques are tuned in order to assess their impact in the presence of short durations
and to improve system performance under data scarcity. To this end, we evaluate the
systems using two measures: the error rate and the cost function typically employed in
these evaluations, which will be detailed in the following chapters.
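The two measures just mentioned, the equal error rate (EER) and a NIST-style detection cost function (DCF), can be illustrated with a minimal sketch. The trial scores below are hypothetical, and the cost parameters (C_miss = 10, C_fa = 1, P_target = 0.01) are the classic values used in earlier NIST SREs, assumed here purely for illustration:

```python
# Minimal sketch of the two evaluation measures: the Equal Error Rate (EER)
# and a NIST-style detection cost function (DCF). Scores are hypothetical;
# the cost parameters are the classic early-SRE values, assumed for illustration.

def error_rates(target_scores, nontarget_scores, threshold):
    """Miss and false-alarm rates at a given decision threshold."""
    p_miss = sum(s < threshold for s in target_scores) / len(target_scores)
    p_fa = sum(s >= threshold for s in nontarget_scores) / len(nontarget_scores)
    return p_miss, p_fa

def eer_and_min_dcf(target_scores, nontarget_scores,
                    c_miss=10.0, c_fa=1.0, p_target=0.01):
    thresholds = sorted(set(target_scores + nontarget_scores))
    eer, best_gap, min_dcf = 1.0, float("inf"), float("inf")
    for t in thresholds:
        p_miss, p_fa = error_rates(target_scores, nontarget_scores, t)
        gap = abs(p_miss - p_fa)
        if gap < best_gap:  # EER: the operating point where the two rates cross
            best_gap, eer = gap, (p_miss + p_fa) / 2
        dcf = c_miss * p_miss * p_target + c_fa * p_fa * (1 - p_target)
        min_dcf = min(min_dcf, dcf)  # minimum cost over all thresholds
    return eer, min_dcf

targets = [2.5, 1.8, 0.9, 3.1, 1.2]        # hypothetical target-trial scores
nontargets = [-1.0, 0.1, -0.5, 1.0, -2.0]  # hypothetical non-target scores
eer, min_dcf = eer_and_min_dcf(targets, nontargets)
```

In an actual NIST evaluation the cost function is computed over the official trial lists and normalized; this sketch only shows the shape of the two metrics.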
Finally, the conclusions drawn throughout this work are presented, together with future
lines of research.
Part of the work carried out during this Final Degree Project has been published at the
international conference IberSpeech 2012 [1]:
Javier Gonzalez-Dominguez, Ruben Zazo, and Joaquin Gonzalez-Rodriguez. “On the use of
total variability and probabilistic linear discriminant analysis for speaker verification on short
utterances”.
This project focuses on automatic speaker verification (SV) systems dealing with short-duration
utterances (around 10 s). Despite the enormous advances in the field, the broad use of SV in
real scenarios remains a challenge, mostly due to two factors: first, session variability, that
is, the set of differences among utterances belonging to the same speaker; and second, the
performance degradation suffered when dealing with short-duration utterances.
As a starting point of this project, an exhaustive study of state-of-the-art speaker
verification techniques has been conducted, with special focus on those methods that achieved
outstanding results and opened the door to better SV systems. In that sense, we put particular
emphasis on the recent methods based on Factor Analysis (FA), namely Total Variability (TV)
and Probabilistic Linear Discriminant Analysis (PLDA). These methods have become the state
of the art in the field due to their ability to mitigate the session variability problem.
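Once a TV model has mapped each variable-length utterance to a fixed-length i-vector, verification reduces to comparing two such vectors. The sketch below uses plain cosine scoring, a common baseline on TV i-vectors; it is not the full pipeline studied here (real systems would apply channel compensation and PLDA scoring), and the i-vectors are hypothetical stand-ins for extractor output:

```python
# Sketch of the verification step on pre-extracted i-vectors (the output of a
# Total Variability model). The vectors are hypothetical toy values; cosine
# scoring is a common baseline, not the thesis's full TV/PLDA pipeline.
import math

def cosine_score(enroll, test):
    """Cosine similarity between an enrollment and a test i-vector."""
    dot = sum(a * b for a, b in zip(enroll, test))
    norm = (math.sqrt(sum(a * a for a in enroll))
            * math.sqrt(sum(b * b for b in test)))
    return dot / norm

speaker_ivec = [0.8, 0.1, -0.3]   # hypothetical enrollment i-vector
same_spk     = [0.7, 0.2, -0.2]   # test utterance from the same speaker
diff_spk     = [-0.5, 0.9, 0.4]   # test utterance from a different speaker

# The trial is accepted when the score exceeds a calibrated threshold;
# here we simply check that the same-speaker trial scores higher.
accept = cosine_score(speaker_ivec, same_spk) > cosine_score(speaker_ivec, diff_spk)
```

PLDA replaces this geometric comparison with a probabilistic one, computing a likelihood ratio between the same-speaker and different-speaker hypotheses, which is what makes it attractive for the variability-compensation problem addressed in this work.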
In order to assess the behaviour of these systems, we use the data and follow the protocol
defined by the US National Institute of Standards and Technology (NIST) in its Speaker
Recognition Evaluation (SRE) series. In particular, we follow the SRE2010 protocol, adapted
to the short-duration problem. Thus, instead of using the 150 s utterances defined in the
core task of SRE2010, we experiment with 10 s utterances in both training and testing.
The experiments conducted can be divided into two phases. During the first phase we study,
compare and evaluate the use of TV and PLDA as effective methods to perform SV. The second
phase is then devoted to adapting these methods to short-duration scenarios; at this point we
analyse the effect and importance of the multiple system parameters when facing limited data
for both training and testing. Conclusions and future lines of this work are then presented.
Part of this work has been published at the international conference IberSpeech 2012 [1]:
Javier Gonzalez-Dominguez, Ruben Zazo, and Joaquin Gonzalez-Rodriguez. “On the use of
total variability and probabilistic linear discriminant analysis for speaker verification on short
utterances”.
Gaussian-selection-based non-optimal search for speaker identification
Most speaker identification systems train an individual model for each speaker, as individual models often yield better performance and permit easier adaptation and enrollment. When classifying a speech token, the token is scored against each model and the maximum a posteriori decision rule is used to decide the classification label. Consequently, the cost of classifying a single token grows linearly with the population size. Considering that the number of tokens to classify is also likely to grow linearly with the population, the total workload increases quadratically. This paper presents a preclassifier which generates an N-best hypothesis set using a novel application of Gaussian selection, together with a transformation of the traditional tail-test statistic which lets the implementer specify the tail region in terms of probability. The system is trained using the parameters of the individual speaker models and does not require the original feature vectors, even when enrolling new speakers or adapting existing ones. Since the correct class label need only be contained in the N-best hypothesis set, more Gaussians can be pruned than in a traditional Gaussian selection application. The N-best hypothesis set is then evaluated using the individual speaker models, resulting in an overall reduction of workload.
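The two-stage search the abstract describes can be sketched as follows. This is a toy illustration, not the paper's method: the "single best component per model" shortcut stands in for its Gaussian selection scheme, the tail-test statistic is omitted, and the GMMs are hand-written values rather than trained models:

```python
# Sketch of the N-best preclassifier idea: cheaply rank speakers using only
# the single best-scoring Gaussian of each model (a crude stand-in for the
# paper's Gaussian selection), then run the exact GMM evaluation only on the
# N-best candidates. Models and the test vector are hypothetical toy values.
import math

def log_gauss(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def full_gmm_loglik(x, gmm):
    """Exact log-likelihood: log of the weighted sum over all components."""
    return math.log(sum(w * math.exp(log_gauss(x, m, v)) for w, m, v in gmm))

def preclassify(x, models, n_best):
    """Cheap pass: score each speaker by its best component; keep the N best."""
    cheap = {spk: max(log_gauss(x, m, v) for _, m, v in gmm)
             for spk, gmm in models.items()}
    return sorted(cheap, key=cheap.get, reverse=True)[:n_best]

# Toy 2-D, 2-component GMMs for three speakers: (weight, mean, variance).
models = {
    "spk_a": [(0.5, [0.0, 0.0], [1.0, 1.0]), (0.5, [1.0, 1.0], [1.0, 1.0])],
    "spk_b": [(0.5, [4.0, 4.0], [1.0, 1.0]), (0.5, [5.0, 5.0], [1.0, 1.0])],
    "spk_c": [(0.5, [9.0, 9.0], [1.0, 1.0]), (0.5, [8.0, 8.0], [1.0, 1.0])],
}

x = [0.8, 0.9]                             # test vector, closest to spk_a
shortlist = preclassify(x, models, n_best=2)   # cheap N-best hypothesis set
winner = max(shortlist, key=lambda s: full_gmm_loglik(x, models[s]))
```

The saving comes from the second line of work: the exact (and expensive) full-GMM scoring runs over the N-best shortlist only, instead of over the entire population.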