Towards Structured Deep Neural Network for Automatic Speech Recognition
In this paper we propose the Structured Deep Neural Network (structured DNN)
as a structured and deep learning algorithm that learns to find the best
structured object (such as a label sequence) given a structured input (such as
a vector sequence) by globally considering the mapping relationships between
the structures rather than item by item.
When automatic speech recognition (ASR) is viewed as a special case of such a
structured learning problem, with the acoustic vector sequence as the input
and the phoneme label sequence as the output, it becomes possible to learn
comprehensively, utterance by utterance as a whole, rather than frame by
frame.
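In symbols (the notation here is ours, added only to illustrate the contrast), the utterance-level view makes a single global decision
  \hat{y}_{1:T} = \arg\max_{y_{1:T}} f(x_{1:T}, y_{1:T}),
where f scores an entire pair of acoustic vector sequence x_{1:T} and phoneme label sequence y_{1:T}, instead of independent frame-level decisions \hat{y}_t = \arg\max_{y} f(x_t, y).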
The Structured Support Vector Machine (structured SVM) was previously proposed
to perform ASR with structured learning, but it is limited by the linear
nature of the SVM. Here we propose the structured DNN, which uses nonlinear
transformations in multiple layers as a structured and deep learning
algorithm. It was shown to outperform the structured SVM in preliminary
experiments on TIMIT.
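A minimal sketch of the global scoring idea is given below. It is not the authors' implementation: the joint feature psi, the network sizes, and the random candidate list are illustrative assumptions. One multi-layer network assigns a single score to an entire (acoustic vector sequence, phoneme label sequence) pair, and decoding picks the best-scoring candidate sequence.

```python
# Sketch of utterance-level (structured) scoring with a small nonlinear network.
import numpy as np

rng = np.random.default_rng(0)
NUM_PHONES, FRAME_DIM, HIDDEN = 48, 39, 64   # assumed sizes (e.g. MFCC-like frames)

# Randomly initialized two-layer network; training (e.g. with an utterance-level
# structured hinge or ranking loss) is omitted from this sketch.
W1 = rng.normal(scale=0.1, size=(HIDDEN, NUM_PHONES * FRAME_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=HIDDEN)

def psi(frames, labels):
    """Joint feature (an assumption): average the frames assigned to each label."""
    feat = np.zeros((NUM_PHONES, FRAME_DIM))
    counts = np.zeros(NUM_PHONES)
    for x_t, y_t in zip(frames, labels):
        feat[y_t] += x_t
        counts[y_t] += 1
    feat[counts > 0] /= counts[counts > 0, None]
    return feat.ravel()

def score(frames, labels):
    """Global score of one (vector sequence, label sequence) pair."""
    h = np.tanh(W1 @ psi(frames, labels) + b1)   # nonlinear hidden layer
    return float(W2 @ h)

# Decoding over a small candidate list; in practice candidates would come from
# a lattice or beam search rather than random sampling.
frames = rng.normal(size=(120, FRAME_DIM))                  # one utterance
candidates = [rng.integers(0, NUM_PHONES, size=120) for _ in range(5)]
best = max(candidates, key=lambda y: score(frames, y))
print("best global score:", round(score(frames, best), 4))
```

The point of the sketch is that the score depends on the whole label sequence jointly through the hidden layers, rather than being a sum of independent frame-wise decisions.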
Unsupervised Spoken Term Detection with Spoken Queries by Multi-level Acoustic Patterns with Varying Model Granularity
This paper presents a new approach for unsupervised Spoken Term Detection
with spoken queries using multiple sets of acoustic patterns automatically
discovered from the target corpus. The different pattern HMM configurations
(number of states per model, number of distinct models, number of Gaussians
per state) form a three-dimensional model granularity space. Different sets of
acoustic patterns automatically discovered at different points properly
distributed over this three-dimensional space are complementary to one
another, and can thus jointly capture the characteristics of the spoken terms. By
representing the spoken content and spoken query as sequences of acoustic
patterns, a series of approaches for matching the pattern index sequences while
considering the signal variations is developed. In this way, not only can the
on-line computation load be reduced, but the signal distributions caused by
different speakers and acoustic conditions can also be reasonably taken care
of. The results indicate that this approach significantly outperformed the
unsupervised feature-based DTW baseline by 16.16% in mean average precision on
the TIMIT corpus.
Comment: Accepted by ICASSP 201
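To make the pattern-index matching step concrete, a toy sketch follows. The actual system matches multiple pattern sets of different granularities and handles signal variation more carefully; the plain 0/1-cost subsequence DTW and the example index sequences here are assumptions for illustration only.

```python
# Sketch: match a spoken query against spoken content once both are represented
# as sequences of discrete acoustic-pattern indices.
def subsequence_dtw(query, doc):
    """Lowest-cost alignment of `query` anywhere inside `doc`.

    Both arguments are sequences of pattern indices (ints); the cost counts
    mismatched or skipped patterns.
    """
    INF = float("inf")
    n, m = len(query), len(doc)
    prev = [0.0] * (m + 1)                    # the query may start at any doc position
    for i in range(1, n + 1):
        curr = [INF] * (m + 1)
        for j in range(1, m + 1):
            local = 0.0 if query[i - 1] == doc[j - 1] else 1.0
            curr[j] = local + min(prev[j - 1],   # advance both sequences
                                  prev[j],       # advance query only
                                  curr[j - 1])   # advance doc only
        prev = curr
    return min(prev[1:])                      # the query may end at any doc position

# Rank utterances by distance to the query (lower = better match).
query = [3, 3, 7, 12, 12, 5]
docs = {"utt1": [9, 3, 7, 12, 5, 20, 1], "utt2": [4, 4, 8, 8, 15]}
print(sorted(docs, key=lambda name: subsequence_dtw(query, docs[name])))
```

Because matching operates on short index sequences instead of frame-level feature vectors, the on-line comparison is cheap, which is the computational benefit the abstract refers to.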