This paper introduces a novel Token-and-Duration Transducer (TDT)
architecture for sequence-to-sequence tasks. TDT extends conventional
RNN-Transducer architectures by jointly predicting both a token and its
duration, i.e., the number of input frames covered by the emitted token. This is
achieved by using a joint network with two outputs which are independently
normalized to generate distributions over tokens and durations. During
inference, TDT models can skip input frames guided by the predicted duration
output, making them significantly faster than conventional Transducers,
which process the encoder output frame by frame. TDT models achieve both better
accuracy and significantly faster inference than conventional Transducers on
different sequence transduction tasks. TDT models for Speech Recognition
achieve better accuracy and up to 2.82X faster inference than RNN-Transducers.
TDT models for Speech Translation achieve an absolute gain of over 1 BLEU on
the MuST-C test set compared with conventional Transducers, with 2.27X faster
inference. On Speech Intent Classification and Slot Filling tasks, TDT
models improve intent accuracy by up to more than 1% (absolute) over
conventional Transducers, while running up to 1.28X faster.
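The mechanism described above can be illustrated with a minimal sketch: a toy joint network that produces two independently normalized heads (token and duration distributions), and a greedy decoding loop that advances through the encoder output by the predicted duration instead of one frame at a time. All names, dimensions, and the fixed duration set below are illustrative assumptions, not the paper's actual implementation; the predictor is reduced to a toy embedding update.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

class ToyTDTJoint:
    """Toy joint network: maps (encoder frame, predictor state) to two
    independently normalized outputs -- a token distribution (including
    blank) and a distribution over a fixed set of durations."""
    def __init__(self, vocab_size, durations, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.durations = durations            # e.g. [0, 1, 2, 3, 4] frames
        self.W_tok = rng.standard_normal((2 * dim, vocab_size + 1))  # +1 blank
        self.W_dur = rng.standard_normal((2 * dim, len(durations)))
        self.emb = rng.standard_normal((vocab_size + 1, dim))  # toy predictor

    def __call__(self, enc_frame, pred_state):
        h = np.concatenate([enc_frame, pred_state])
        return softmax(self.W_tok.T @ h), softmax(self.W_dur.T @ h)

def tdt_greedy_decode(joint, enc, pred_state, blank_id):
    """Greedy TDT inference: at each step emit the argmax token and jump
    ahead by the argmax duration, skipping the intervening encoder frames.
    A conventional Transducer instead advances at most one frame per step."""
    t, hyp = 0, []
    while t < len(enc):
        p_tok, p_dur = joint(enc[t], pred_state)
        tok = int(p_tok.argmax())
        skip = joint.durations[int(p_dur.argmax())]
        if tok != blank_id:
            hyp.append(tok)
            pred_state = np.tanh(pred_state + joint.emb[tok])  # toy update
        # Real TDT permits duration 0 alongside a token emission; the toy
        # loop forces a minimum advance of 1 frame so it always terminates.
        t += max(skip, 1)
    return hyp
```

Because the loop advances by `skip` frames at once, it performs far fewer joint-network evaluations than a frame-by-frame Transducer decode, which is the source of the inference speedups reported above.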