Joint Learning of Correlated Sequence Labelling Tasks Using Bidirectional Recurrent Neural Networks
The stream of words produced by Automatic Speech Recognition (ASR) systems is
typically devoid of punctuation and formatting. Most natural language
processing applications expect segmented and well-formatted text as input,
which ASR output does not provide. This paper proposes a novel technique for
jointly modeling multiple correlated tasks, such as punctuation and
capitalization, using bidirectional recurrent neural networks, which leads to
improved performance on each of these tasks. The method could be extended to
the joint modeling of any other correlated sequence labeling tasks.

Comment: Accepted in Interspeech 2017
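To make the joint-modeling idea concrete, here is a minimal sketch, not the authors' exact architecture: a shared bidirectional LSTM encoder with one tagging head per correlated task. All dimensions, label sets, and the summed-loss training step are illustrative assumptions.

```python
# Sketch: joint sequence labelling with a shared bidirectional RNN.
# One encoder feeds two task-specific heads (punctuation, capitalization).
import torch
import torch.nn as nn

class JointBiRNNTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                 n_punct_tags=4, n_case_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared bidirectional encoder: both tasks read the same states,
        # which is how the correlation between them is exploited.
        self.birnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                             bidirectional=True)
        # Illustrative label sets: punctuation {O, COMMA, PERIOD, QUESTION},
        # capitalization {lower, Capitalized, UPPER}.
        self.punct_head = nn.Linear(2 * hidden_dim, n_punct_tags)
        self.case_head = nn.Linear(2 * hidden_dim, n_case_tags)

    def forward(self, token_ids):
        states, _ = self.birnn(self.embed(token_ids))
        return self.punct_head(states), self.case_head(states)

model = JointBiRNNTagger(vocab_size=10000)
tokens = torch.randint(0, 10000, (2, 12))   # batch of 2 utterances, 12 tokens
punct_logits, case_logits = model(tokens)
# Joint training here simply sums the per-task cross-entropy losses.
punct_gold = torch.randint(0, 4, (2, 12))
case_gold = torch.randint(0, 3, (2, 12))
loss = (nn.functional.cross_entropy(punct_logits.flatten(0, 1), punct_gold.flatten())
        + nn.functional.cross_entropy(case_logits.flatten(0, 1), case_gold.flatten()))
loss.backward()
```

Because the encoder parameters are shared, gradients from both heads shape the same representation; adding a further correlated task would mean adding another linear head and loss term.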
Four-in-One: A Joint Approach to Inverse Text Normalization, Punctuation, Capitalization, and Disfluency for Automatic Speech Recognition
Features such as punctuation, capitalization, and formatting of entities are
important for readability, understanding, and natural language processing
tasks. However, Automatic Speech Recognition (ASR) systems produce spoken-form
text devoid of formatting, and tagging approaches to formatting address just
one or two features at a time. In this paper, we unify spoken-to-written text
conversion via a two-stage process: First, we use a single transformer tagging
model to jointly produce token-level tags for inverse text normalization (ITN),
punctuation, capitalization, and disfluencies. Then, we apply the tags to
generate written-form text and use weighted finite state transducer (WFST)
grammars to format tagged ITN entity spans. Despite combining four models into
one, our unified tagging approach matches or outperforms task-specific models
across all four tasks on benchmark test sets spanning several domains.
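As a rough illustration of the second stage, the sketch below applies per-token tags for disfluency, ITN, capitalization, and punctuation to render written-form text. The tag inventory, the `TaggedToken` container, and the `render_itn` stub (standing in for the paper's WFST grammars) are all assumptions for illustration, not the paper's implementation.

```python
# Sketch: turning token-level tags from a joint tagger into written-form text.
from dataclasses import dataclass

@dataclass
class TaggedToken:
    text: str
    itn: str = "O"        # e.g. O, B-CARDINAL, I-CARDINAL (entity span tags)
    punct: str = "O"      # e.g. O, COMMA, PERIOD, QUESTION
    case: str = "LOWER"   # e.g. LOWER, CAP, UPPER
    disfluent: bool = False

def render_itn(span):
    # Placeholder for a WFST grammar that verbalizes an entity span,
    # e.g. ["twenty", "five"] -> "25". Here we just join the words.
    return " ".join(t.text for t in span)

PUNCT = {"COMMA": ",", "PERIOD": ".", "QUESTION": "?"}

def to_written_form(tokens):
    out, i = [], 0
    while i < len(tokens):
        tok = tokens[i]
        if tok.disfluent:               # drop filled pauses, repairs, etc.
            i += 1
            continue
        if tok.itn.startswith("B-"):    # collect a whole ITN entity span
            span = [tok]
            i += 1
            while i < len(tokens) and tokens[i].itn.startswith("I-"):
                span.append(tokens[i])
                i += 1
            word, first, last = render_itn(span), span[0], span[-1]
        else:
            word, first, last = tok.text, tok, tok
            i += 1
        if first.case == "CAP":         # case tag from the span's first token
            word = word.capitalize()
        elif first.case == "UPPER":
            word = word.upper()
        out.append(word + PUNCT.get(last.punct, ""))  # punctuation from last
    return " ".join(out)

tokens = [
    TaggedToken("uh", disfluent=True),
    TaggedToken("call", case="CAP"),
    TaggedToken("me", punct="PERIOD"),
]
print(to_written_form(tokens))  # -> "Call me."
```

The appeal of the tagging formulation is visible here: because every task is expressed as labels on the same token sequence, a single pass over the tags can delete disfluencies, format entities, and restore case and punctuation together.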