Deep word embeddings for visual speech recognition
In this paper we present a deep learning architecture for extracting word embeddings for visual speech recognition. The embeddings summarize the information of the mouth region that is relevant to the problem of word recognition, while suppressing other types of variability such as speaker, pose, and illumination. The system comprises a spatiotemporal convolutional layer, a Residual Network, and bidirectional LSTMs, and is trained on the Lip Reading in the Wild (LRW) database. We first show that the proposed architecture surpasses the state of the art on closed-set word identification, attaining an 11.92% error rate on a vocabulary of 500 words. We then examine the capacity of the embeddings to model words unseen during training. We deploy Probabilistic Linear Discriminant Analysis (PLDA) to model the embeddings and perform low-shot learning experiments on unseen words. The experiments demonstrate that word-level visual speech recognition is feasible even when the target words are not included in the training set.
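The low-shot setup described above can be illustrated with a simplified stand-in for PLDA: a Gaussian classifier with one mean per word and a shared within-class covariance, both estimated from a handful of embeddings per word. This is a sketch under stated assumptions, not the paper's model (PLDA additionally learns a between-class subspace); all names and data below are illustrative.

```python
import numpy as np

def fit_low_shot(embeddings_by_word):
    """Fit a class mean per word plus a shared within-class covariance
    from a few embeddings per word (simplified stand-in for PLDA)."""
    means = {w: np.mean(e, axis=0) for w, e in embeddings_by_word.items()}
    centered = np.vstack([e - means[w] for w, e in embeddings_by_word.items()])
    # Shared within-class covariance, regularized for numerical stability
    cov = np.cov(centered, rowvar=False) + 1e-3 * np.eye(centered.shape[1])
    return means, np.linalg.inv(cov)

def classify(x, means, precision):
    """Assign embedding x to the word with the smallest Mahalanobis distance."""
    def dist(m):
        d = x - m
        return float(d @ precision @ d)
    return min(means, key=lambda w: dist(means[w]))
```

With only a few "shots" per unseen word, the shared covariance pools statistics across classes, which is the same intuition that makes PLDA effective in this regime.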
Do Multi-Sense Embeddings Improve Natural Language Understanding?
Learning a distinct representation for each sense of an ambiguous word could
lead to more powerful and fine-grained models of vector-space representations.
Yet while 'multi-sense' methods have been proposed and tested on artificial
word-similarity tasks, we do not know whether they improve real natural language
understanding tasks. In this paper we introduce a multi-sense embedding model
based on Chinese Restaurant Processes that achieves state-of-the-art
performance on matching human word similarity judgments, and propose a
pipelined architecture for incorporating multi-sense embeddings into language
understanding.
We then test the performance of our model on part-of-speech tagging, named
entity recognition, sentiment analysis, semantic relation identification and
semantic relatedness, controlling for embedding dimensionality. We find that
multi-sense embeddings do improve performance on some tasks (part-of-speech
tagging, semantic relation identification, semantic relatedness) but not on
others (named entity recognition, various forms of sentiment analysis). We
discuss how these differences may be caused by the different role of word sense
information in each of the tasks. The results highlight the importance of
testing embedding models in real applications.
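The sense-induction idea above can be sketched as a Chinese-Restaurant-Process-style assignment over context vectors: an occurrence joins an existing sense with a score that grows with that sense's popularity and its similarity to the context, or opens a new sense with score `alpha`. The version below is a deterministic, greedy simplification for illustration; the paper's model samples assignments and learns sense vectors jointly, and `alpha` and the scoring rule here are assumptions.

```python
import numpy as np

def crp_assign_senses(context_vecs, alpha=0.5):
    """Greedy CRP-style sense induction (deterministic simplification:
    each occurrence takes its argmax "seat" rather than sampling).

    Joining existing sense k scores count_k * cosine(context, centroid_k);
    opening a new sense scores alpha.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    centroids, counts, labels = [], [], []
    for v in context_vecs:
        scores = [c * cos(v, m) for c, m in zip(counts, centroids)]
        scores.append(alpha)  # score for opening a new sense
        k = int(np.argmax(scores))
        if k == len(centroids):          # open a new sense
            centroids.append(np.array(v, dtype=float))
            counts.append(1)
        else:                            # join sense k, update its centroid
            centroids[k] = (centroids[k] * counts[k] + v) / (counts[k] + 1)
            counts[k] += 1
        labels.append(k)
    return labels
```

The "rich get richer" count factor is what lets frequent senses absorb most occurrences while rare senses still split off when the context is dissimilar enough.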
Speaker Diarization with Lexical Information
This work presents a novel approach for speaker diarization to leverage
lexical information provided by automatic speech recognition. We propose a
speaker diarization system that can incorporate word-level speaker turn
probabilities with speaker embeddings into a speaker clustering process to
improve the overall diarization accuracy. To integrate lexical and acoustic
information in a comprehensive way during clustering, we introduce an adjacency
matrix integration for spectral clustering. Since words and word boundary
information for word-level speaker turn probability estimation are provided by
a speech recognition system, our proposed method works without any human
intervention for manual transcriptions. We show that the proposed method
improves diarization performance on various evaluation datasets compared to a
baseline diarization system that uses only the acoustic information in speaker
embeddings.
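The integration step above can be sketched as follows: fuse the acoustic affinity matrix (speaker-embedding similarities) with a lexical affinity matrix (word-level same-speaker probabilities), then run spectral clustering on the fused matrix. The convex combination, the farthest-point initialization, and the tiny k-means below are illustrative simplifications, not the paper's exact integration scheme.

```python
import numpy as np

def fuse_and_cluster(acoustic_aff, lexical_aff, n_speakers, weight=0.5):
    """Fuse two segment-affinity matrices and spectral-cluster the result."""
    A = weight * np.asarray(acoustic_aff, float) \
        + (1 - weight) * np.asarray(lexical_aff, float)
    d = A.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Embed segments using eigenvectors of the smallest eigenvalues
    _, vecs = np.linalg.eigh(L)
    X = vecs[:, :n_speakers]
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    # Farthest-point init, then a few plain k-means iterations
    centers = [X[0]]
    for _ in range(1, n_speakers):
        dists = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(dists))])
    centers = np.array(centers)
    for _ in range(50):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2)
                           .sum(-1), axis=1)
        for k in range(n_speakers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels
```

Fusing before clustering lets lexical turn evidence sharpen boundaries that acoustic similarity alone leaves ambiguous, which is the core of the adjacency-matrix integration the abstract describes.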
Named Entity Recognition in Spanish Biomedical Literature: Short Review and BERT Model
Named Entity Recognition (NER) is the first step in knowledge acquisition when we deal with an unknown corpus of texts. Once these entities are extracted, we can form a parameter space and solve text-mining problems such as concept normalization, speech recognition, etc. Recent advances in NER build on word embeddings, which transform text into a form effective for deep learning. In this paper, we show how NER detects pharmacological substances, compounds, and proteins in a dataset derived from the Spanish Clinical Case Corpus (SPACCC). To achieve this goal, we use contextualized word embeddings based on the BERT language representation, which yield better results than standard word embeddings.
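Downstream of a BERT token classifier, the per-token predictions still have to be collapsed into entity spans. Assuming a standard BIO tagging scheme (the tag names and tokens below are illustrative, not from the SPACCC annotation), a minimal decoding sketch looks like this:

```python
def bio_to_spans(tokens, tags):
    """Collapse per-token BIO tags (e.g. from a BERT token classifier)
    into (entity_type, text) spans."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):            # begin a new entity
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(tok)             # continue the open entity
        else:                               # "O" or inconsistent tag: flush
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        spans.append((ctype, " ".join(current)))
    return spans
```

Treating an `I-` tag with a mismatched type as a span boundary is one common convention; evaluation scripts differ on this detail, so the choice should match the corpus guidelines.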