Bandwidth extension of narrowband speech
Recently, 4G mobile phone systems have been designed to process wideband speech signals sampled at 16 kHz. However, most of the mobile and fixed telephone network, as well as current 3G mobile phones, still process narrowband speech signals sampled at 8 kHz. In the near future, all these systems will have to coexist, so a wideband speech signal (with a bandwidth of up to 7.2 kHz) must sometimes be estimated from an available narrowband one (whose frequency band is 300-3400 Hz). In this work, different audio bandwidth extension techniques have been implemented and evaluated. First, a simple non-model-based algorithm (interpolation) was implemented. Second, a model-based algorithm (linear mapping) was designed and evaluated against the first. Several CMOS (Comparison Mean Opinion Score) [6] listening tests show that the linear mapping algorithm clearly outperforms the interpolation algorithm, with results very close to those obtained with the original wideband speech signal.
State-of-the-art Speech Recognition With Sequence-to-Sequence Models
Attention-based encoder-decoder architectures such as Listen, Attend, and
Spell (LAS) subsume the acoustic, pronunciation, and language model components
of a traditional automatic speech recognition (ASR) system into a single neural
network. In previous work, we have shown that such architectures are comparable
to state-of-the-art ASR systems on dictation tasks, but it was not clear whether such
architectures would be practical for more challenging tasks such as voice
search. In this work, we explore a variety of structural and optimization
improvements to our LAS model which significantly improve performance. On the
structural side, we show that word piece models can be used instead of
graphemes. We also introduce a multi-head attention architecture, which offers
improvements over the commonly-used single-head attention. On the optimization
side, we explore synchronous training, scheduled sampling, label smoothing, and
minimum word error rate optimization, which are all shown to improve accuracy.
We present results with a unidirectional LSTM encoder for streaming
recognition. On a 12,500 hour voice search task, we find that the proposed
changes improve the WER from 9.2% to 5.6%, while the best conventional system
achieves 6.7%; on a dictation task our model achieves a WER of 4.1% compared to
5% for the conventional system.
Comment: ICASSP camera-ready version
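As a rough illustration of the multi-head attention mentioned above, the following sketch computes scaled dot-product attention with the decoder state split across several heads. It omits the learned per-head projection matrices for brevity, and all names and shapes are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative sketch (assumed shapes and names): multi-head attention
# over encoder frames, with the model dimension partitioned into heads
# instead of using learned per-head projections.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(query, keys, values, num_heads=4):
    """query: (d_model,); keys, values: (T, d_model)."""
    d_model = query.shape[-1]
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    q = query.reshape(num_heads, d_head)           # (H, d_head)
    k = keys.reshape(-1, num_heads, d_head)        # (T, H, d_head)
    v = values.reshape(-1, num_heads, d_head)      # (T, H, d_head)
    # Scaled dot-product attention computed independently per head.
    scores = np.einsum("hd,thd->ht", q, k) / np.sqrt(d_head)   # (H, T)
    weights = softmax(scores, axis=-1)
    context = np.einsum("ht,thd->hd", weights, v)  # (H, d_head)
    return context.reshape(d_model)                # concatenate heads

if __name__ == "__main__":
    T, d_model = 50, 256                    # 50 encoder frames
    rng = np.random.default_rng(0)
    enc = rng.standard_normal((T, d_model))
    dec_state = rng.standard_normal(d_model)
    ctx = multi_head_attention(dec_state, enc, enc)
    print(ctx.shape)                         # (256,)
```

The intuition is that each head can attend to a different region of the encoder output, which is one way to interpret the improvement over single-head attention reported in the abstract.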