ESPnet-ONNX: Bridging a Gap Between Research and Production
In the field of deep learning, researchers often focus on inventing novel
neural network models and improving benchmarks. In contrast, application
developers are interested in making models suitable for actual products, which
involves optimizing a model for faster inference and adapting a model to
various platforms (e.g., C++ and Python). In this work, to fill the gap between
the two, we establish an effective procedure for optimizing a PyTorch-based
research-oriented model for deployment, taking ESPnet, a widely used toolkit
for speech processing, as an instance. We introduce different techniques to
ESPnet, including converting a model into an ONNX format, fusing nodes in a
graph, and quantizing parameters, which lead to an approximately 1.3-2x
speedup across various tasks (i.e., ASR, TTS, speech translation, and spoken
language understanding) while preserving performance, without any additional
training. Our ESPnet-ONNX will be publicly available at
https://github.com/espnet/espnet_onnx
Comment: Accepted to APSIPA ASC 202
A Comparative Study on Transformer vs RNN in Speech Applications
Sequence-to-sequence models have been widely used in end-to-end speech
processing, for example, automatic speech recognition (ASR), speech translation
(ST), and text-to-speech (TTS). This paper focuses on an emergent
sequence-to-sequence model called Transformer, which achieves state-of-the-art
performance in neural machine translation and other natural language processing
applications. We undertook intensive studies in which we experimentally
compared and analyzed Transformer and conventional recurrent neural networks
(RNN) in a total of 15 ASR, one multilingual ASR, one ST, and two TTS
benchmarks. Our experiments revealed various training tips and significant
performance benefits obtained with Transformer for each task including the
surprising superiority of Transformer in 13/15 ASR benchmarks in comparison
with RNN. We are preparing to release Kaldi-style reproducible recipes using
open-source, publicly available datasets for all of the ASR, ST, and TTS tasks
so that the community can build on our results.
Comment: Accepted at ASRU 201