Enriching Rare Word Representations in Neural Language Models by Embedding Matrix Augmentation
Neural language models (NLMs) achieve strong generalization capability by learning dense representations of words and using them to estimate probability distributions. However, learning representations of rare words is a challenging problem that causes the NLM to produce unreliable probability estimates. To address this problem, we propose a method to enrich the representations of rare words in a pre-trained NLM and consequently improve its probability estimation performance. The proposed method augments the word embedding matrices of the pre-trained NLM while keeping all other parameters unchanged. Specifically, our method updates the embedding vectors of rare words using the embedding vectors of other semantically and syntactically similar words. To evaluate the proposed method, we enrich rare street names in a pre-trained NLM and use it to rescore the 100-best hypotheses output by a Singapore English speech recognition system. The enriched NLM reduces the word error rate by 6% relative and improves the recognition accuracy of the rare
words by 16% absolute compared to the baseline NLM.

Comment: 5 pages, 2 figures, accepted to INTERSPEECH 2019
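The core update described in this abstract, replacing a rare word's embedding with a combination of its own vector and the vectors of similar words while leaving the rest of the pre-trained model untouched, can be illustrated with a short sketch. The function name, the interpolation weight `alpha`, and the use of a simple mean over neighbour vectors are assumptions made for illustration; the abstract does not specify the exact combination rule.

```python
import numpy as np

def enrich_rare_embeddings(embeddings, rare_to_similar, alpha=0.5):
    """Update rare-word rows of a pre-trained embedding matrix using the
    mean embedding of semantically/syntactically similar words.

    embeddings      : (V, d) array, the pre-trained word embedding matrix
    rare_to_similar : dict mapping a rare word's index to a list of indices
                      of similar in-vocabulary words (obtained elsewhere)
    alpha           : interpolation weight between the original rare-word
                      vector and the mean of its neighbours (assumed here)
    """
    enriched = embeddings.copy()
    for rare_idx, similar_indices in rare_to_similar.items():
        neighbour_mean = embeddings[similar_indices].mean(axis=0)
        enriched[rare_idx] = alpha * embeddings[rare_idx] + (1 - alpha) * neighbour_mean
    return enriched

# Toy usage: enrich two "rare street name" rows using their neighbours.
V, d = 10_000, 256
E = np.random.randn(V, d).astype(np.float32)   # stand-in for pre-trained embeddings
E_new = enrich_rare_embeddings(E, {42: [7, 91, 305], 77: [12, 88]})
```

All other model parameters stay frozen; only the selected rows of the embedding matrix change, which matches the "augment the embedding matrices, keep other parameters unchanged" idea in the abstract.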
A Fully Attention-Based Information Retriever
Recurrent neural networks are now the state of the art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments in attention mechanisms have equipped feedforward networks with similar capabilities, enabling faster computation because more operations can be parallelized. We explore this new type of architecture in the domain of question answering and propose a novel approach that we call the Fully Attention-Based Information Retriever (FABIR). We show that FABIR achieves competitive results on the Stanford Question Answering Dataset (SQuAD) while having fewer
parameters and being faster at both learning and inference than rival methods.

Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 2018
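For readers unfamiliar with the attention mechanisms this abstract contrasts with recurrence, the sketch below shows standard scaled dot-product attention, in which every query position attends to every key position in parallel rather than stepping through the sequence. This is generic background rather than FABIR's actual architecture, and all names in the snippet are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention: each output is a weighted sum
    over all value vectors, with weights given by query/key similarity.
    All positions are computed in parallel (no recurrence)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V                                    # (n_q, d_v) context vectors

# Toy example: 4 query positions attending over 6 key/value positions.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 64))
K = rng.standard_normal((6, 64))
V = rng.standard_normal((6, 64))
context = scaled_dot_product_attention(Q, K, V)           # shape (4, 64)
```

Because the score matrix is computed with a single matrix product, the whole layer parallelizes well on modern hardware, which is the efficiency argument the abstract makes for feedforward, attention-based readers.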
Adapting Sequence to Sequence models for Text Normalization in Social Media
Social media offer an abundant source of valuable raw data; however, informal writing can quickly become a bottleneck for many natural language processing (NLP) tasks. Off-the-shelf tools are usually trained on formal text and cannot explicitly handle the noise found in short online posts. Moreover, the variety of frequently occurring linguistic variations presents several challenges, even for humans, who might not be able to comprehend the meaning of such posts, especially when they contain slang and abbreviations. Text normalization aims to transform online user-generated text into a canonical form. Current text normalization systems rely on string or phonetic similarity and on classification models that operate in a local fashion. We argue that processing contextual information is crucial for this task and introduce a hybrid word-character attention-based encoder-decoder model for social media text normalization that can serve as a pre-processing step, helping NLP applications adapt to noisy text in social media. Our character-based component is trained on synthetic adversarial examples designed to capture errors commonly found in online user-generated text. Experiments show that our model surpasses neural architectures designed for text normalization and achieves comparable
performance with state-of-the-art related work.

Comment: Accepted at the 13th International AAAI Conference on Web and Social Media (ICWSM 2019)
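As a rough illustration of the kind of synthetic noisy/clean training pairs a character-based normalization component could be trained on, the sketch below injects simple character-level perturbations into clean words. The specific noise operations and probabilities are assumptions for illustration; the paper's actual adversarial example generation is not detailed in the abstract.

```python
import random

def add_character_noise(word, p=0.3, rng=random.Random(0)):
    """Create a noisy variant of a clean word by randomly swapping,
    dropping, or duplicating characters, mimicking typos and informal
    spellings found in user-generated text (operations are illustrative)."""
    chars = list(word)
    if len(chars) > 1 and rng.random() < p:          # swap two adjacent characters
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    if len(chars) > 2 and rng.random() < p:          # drop a character
        del chars[rng.randrange(len(chars))]
    if rng.random() < p:                             # duplicate a character
        i = rng.randrange(len(chars))
        chars.insert(i, chars[i])
    return "".join(chars)

# Build synthetic (noisy input -> canonical target) training pairs.
clean_words = ["tomorrow", "definitely", "going"]
pairs = [(add_character_noise(w), w) for w in clean_words]
```

A sequence-to-sequence normalization model can then be trained to map each noisy variant back to its canonical form, which is the general role the abstract assigns to the character-based component.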