Robustness to Capitalization Errors in Named Entity Recognition
Robustness to capitalization errors is a highly desirable characteristic of
named entity recognizers, yet we find standard models for the task are
surprisingly brittle to such noise. Existing methods to improve robustness to
the noise completely discard the given orthographic information, which
significantly degrades their performance on well-formed text. We propose a
simple alternative approach based on data augmentation, which allows the model
to \emph{learn} to utilize or ignore orthographic information depending on its
usefulness in the context. It achieves competitive robustness to capitalization
errors while making negligible compromise to its performance on well-formed
text and significantly improving generalization power on noisy user-generated
text. Our experiments clearly and consistently validate our claim across
different types of machine learning models, languages, and dataset sizes.
Comment: Accepted to the EMNLP 2019 workshop W-NUT 2019 (5th Workshop on Noisy User-Generated Text).
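As a concrete illustration of the data-augmentation idea, the sketch below appends case-corrupted copies of training sentences so a tagger can learn when orthography is reliable. The specific corruption scheme (lowercasing a random subset of sentences) and all names are illustrative assumptions, not necessarily the paper's exact recipe.

```python
import random

def augment_capitalization(sentences, lowercase_prob=0.5, seed=0):
    """Append case-corrupted copies of training sentences so the tagger can
    learn when orthography is (un)reliable. The corruption scheme here (full
    lowercasing of a random subset) is an illustrative assumption."""
    rng = random.Random(seed)
    augmented = list(sentences)
    for tokens, labels in sentences:
        if rng.random() < lowercase_prob:
            # Labels are untouched; only the surface form is corrupted.
            augmented.append(([t.lower() for t in tokens], labels))
    return augmented

# Example: each item is (tokens, BIO labels).
data = [(["Apple", "hired", "Tim", "Cook"], ["B-ORG", "O", "B-PER", "I-PER"])]
print(augment_capitalization(data, lowercase_prob=1.0))
```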
Multi-teacher Distillation for Multilingual Spelling Correction
Accurate spelling correction is a critical step in modern search interfaces,
especially in an era of mobile devices and speech-to-text interfaces. For
services that are deployed around the world, this poses a significant challenge
for multilingual NLP: spelling errors need to be caught and corrected in all
languages, and even in queries that use multiple languages. In this paper, we
tackle this challenge using multi-teacher distillation. In our approach, a
monolingual teacher model is trained for each language/locale, and these
individual models are distilled into a single multilingual student model
intended to serve all languages/locales. In experiments using open-source data
as well as user data from a worldwide search service, we show that this leads
to highly effective spelling correction models that can meet the tight latency
requirements of deployed services.
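A minimal sketch of the multi-teacher setup, assuming PyTorch and one trained teacher per locale: each example is scored by the teacher for its locale, and the single multilingual student is trained toward those soft targets. Function and field names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target KL loss between the multilingual student and the
    monolingual teacher responsible for this example's locale."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

def multi_teacher_step(student, teachers, batch):
    """One training step over a batch grouped by locale: each group is scored
    by its own locale's teacher, and the student is pulled toward those soft
    targets. `teachers` maps locale -> frozen monolingual model (assumed)."""
    total = 0.0
    for locale, examples in batch.items():
        teacher_logits = teachers[locale](examples).detach()
        student_logits = student(examples)
        total = total + distillation_loss(student_logits, teacher_logits)
    return total / len(batch)
```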
Dynamic Chunk Convolution for Unified Streaming and Non-Streaming Conformer ASR
Recently, there has been an increasing interest in unifying streaming and
non-streaming speech recognition models to reduce development, training and
deployment cost. The best-known approaches rely on either window-based or
dynamic chunk-based attention strategy and causal convolutions to minimize the
degradation due to streaming. However, the performance gap remains relatively
large between the streaming mode and a full-contextual model trained
independently. To address this, we propose a dynamic chunk-based convolution
replacing the causal convolution in a hybrid Connectionist Temporal
Classification (CTC)-Attention Conformer architecture. Additionally, we
demonstrate further improvements through initialization of weights from a
full-contextual model and parallelization of the convolution and self-attention
modules. We evaluate our models on the open-source VoxPopuli, LibriSpeech, and
in-house conversational datasets. Overall, our proposed model reduces the
degradation of the streaming mode over the non-streaming full-contextual model
from 41.7% and 45.7% to 16.7% and 26.2% on the LibriSpeech test-clean and
test-other datasets respectively, while improving WER by a relative 15.5% over
the previous state-of-the-art unified model.
Comment: 5 pages, 3 figures, 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023).
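A rough sketch of a chunk-restricted depthwise convolution in PyTorch, under the assumption that each frame may use left context from earlier frames but no right context beyond its chunk boundary; this is a simplified reading of the idea, not the paper's exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicChunkDepthwiseConv(nn.Module):
    """Illustrative chunk-restricted depthwise convolution: frames see real
    history on the left, but their right context is zero-padded at the chunk
    boundary, so behaviour degrades gracefully between streaming and
    full-context operation. Simplified sketch, not the paper's exact layer."""

    def __init__(self, channels, kernel_size=15):
        super().__init__()
        assert kernel_size % 2 == 1
        self.half = (kernel_size - 1) // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size, groups=channels)

    def forward(self, x, chunk_size):
        # x: (batch, channels, time); chunk_size can be varied during training
        # to cover both streaming and non-streaming modes.
        outputs = []
        for start in range(0, x.size(-1), chunk_size):
            end = min(start + chunk_size, x.size(-1))
            left = max(0, start - self.half)
            segment = x[..., left:end]
            # Real history on the left, zeros beyond the chunk on the right.
            segment = F.pad(segment, (self.half - (start - left), self.half))
            outputs.append(self.conv(segment))
        return torch.cat(outputs, dim=-1)

# A full-context pass is recovered by setting chunk_size >= the sequence length.
x = torch.randn(2, 64, 100)
layer = DynamicChunkDepthwiseConv(64)
print(layer(x, chunk_size=16).shape)  # torch.Size([2, 64, 100])
```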
Multi Sense Embeddings from Topic Models
Distributed word embeddings have yielded state-of-the-art performance in many
NLP tasks, mainly due to their success in capturing useful semantic
information. These representations assign only a single vector to each word,
whereas a large number of words are polysemous (i.e., have multiple meanings).
In this work, we approach this critical problem in lexical semantics, namely
that of representing the various senses of polysemous words in vector spaces.
We propose a topic-modeling-based skip-gram approach for learning
multi-prototype word embeddings. We also introduce a method to prune the
embeddings, determined by the probabilistic representation of the word in each
topic. We show that our embeddings effectively capture context and word
similarity and outperform various state-of-the-art implementations.
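One way to realize such sense embeddings, sketched below under assumptions not stated in the abstract: tokens are retagged as word#topic using a fitted topic model, senses with low word-topic probability are pruned, and a skip-gram model is then trained over the retagged corpus so that each retained sense receives its own vector.

```python
def sense_tagged_corpus(docs, word_topic_prob, min_prob=0.1):
    """Retag each token as 'word#topic' using its assigned topic, pruning
    senses whose word-topic probability falls below min_prob.
    `docs` is a list of (tokens, per-token topic ids) and `word_topic_prob`
    maps (word, topic) -> probability from a fitted topic model (e.g., LDA);
    these names are illustrative assumptions."""
    tagged = []
    for tokens, topics in docs:
        sent = []
        for w, t in zip(tokens, topics):
            if word_topic_prob.get((w, t), 0.0) >= min_prob:
                sent.append(f"{w}#{t}")
            else:
                sent.append(w)  # fall back to a single, untagged sense
        tagged.append(sent)
    return tagged

# Skip-gram over the sense-tagged corpus then yields one vector per retained
# sense, e.g. with gensim:
#   from gensim.models import Word2Vec
#   model = Word2Vec(sense_tagged_corpus(docs, word_topic_prob),
#                    vector_size=100, window=5, sg=1, min_count=1)
```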