9 research outputs found
Learning to Embed Words in Context for Syntactic Tasks
We present models for embedding words in the context of surrounding words.
Such models, which we refer to as token embeddings, represent the
characteristics of a word that are specific to a given context, such as word
sense, syntactic category, and semantic role. We explore simple, efficient
token embedding models based on standard neural network architectures. We learn
token embeddings on a large amount of unannotated text and evaluate them as
features for part-of-speech taggers and dependency parsers trained on much
smaller amounts of annotated data. We find that predictors endowed with token
embeddings consistently outperform baseline predictors across a range of
context window and training set sizes.
Comment: Accepted by the ACL 2017 Repl4NLP workshop.
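As a toy illustration of the idea behind token embeddings, a context-specific vector for a word can be built from the type-level vectors of its neighbors. The averaging scheme and the tiny vocabulary below are purely hypothetical; the paper itself uses trained neural architectures.

```python
import numpy as np

# Hypothetical type-level word vectors (stand-ins for pretrained embeddings).
rng = np.random.default_rng(0)
vocab = {w: rng.standard_normal(4) for w in
         ["the", "river", "bank", "flowed", "money"]}

def token_embedding(tokens, i, window=2):
    """Embed the token at position i from its surrounding context words.

    A simple averaging sketch: the resulting vector depends on the
    context, so the same word type gets different token embeddings
    in different sentences."""
    ctx = [vocab[t] for j, t in enumerate(tokens)
           if j != i and abs(j - i) <= window and t in vocab]
    return np.mean(ctx, axis=0) if ctx else np.zeros(4)

sent = ["the", "river", "bank", "flowed"]
vec = token_embedding(sent, 2)  # context-specific vector for "bank"
```

Such vectors could then be concatenated with standard features when training a tagger or parser on the smaller annotated set.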
Do Multi-Sense Embeddings Improve Natural Language Understanding?
Learning a distinct representation for each sense of an ambiguous word could
lead to more powerful and fine-grained models of vector-space representations.
Yet while 'multi-sense' methods have been proposed and tested on artificial
word-similarity tasks, we don't know if they improve real natural language
understanding tasks. In this paper we introduce a multi-sense embedding model
based on Chinese Restaurant Processes that achieves state of the art
performance on matching human word similarity judgments, and propose a
pipelined architecture for incorporating multi-sense embeddings into language
understanding.
We then test the performance of our model on part-of-speech tagging, named
entity recognition, sentiment analysis, semantic relation identification and
semantic relatedness, controlling for embedding dimensionality. We find that
multi-sense embeddings do improve performance on some tasks (part-of-speech
tagging, semantic relation identification, semantic relatedness) but not on
others (named entity recognition, various forms of sentiment analysis). We
discuss how these differences may be caused by the different role of word sense
information in each of the tasks. The results highlight the importance of
testing embedding models in real applications.
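The Chinese-Restaurant-Process machinery the abstract mentions can be sketched as a sense-assignment step: an occurrence of an ambiguous word joins an existing sense cluster with probability proportional to that sense's popularity and contextual fit, or opens a new sense with weight controlled by a concentration parameter. The weighting below is an illustrative simplification, not the paper's exact model.

```python
import numpy as np

def crp_assign_sense(counts, gamma, similarities, rng):
    """CRP-style sense choice for one word occurrence (sketch).

    Existing sense k is picked with weight counts[k] * similarities[k]
    (popularity times contextual fit); a brand-new sense is opened with
    weight gamma. Returns the chosen sense index; the last index means
    'new sense'."""
    weights = np.array([c * s for c, s in zip(counts, similarities)] + [gamma],
                       dtype=float)
    probs = weights / weights.sum()
    return rng.choice(len(weights), p=probs)

rng = np.random.default_rng(0)
# Two existing senses of a word, with counts and context similarities.
sense = crp_assign_sense(counts=[10, 3], gamma=1.0,
                         similarities=[0.9, 0.1], rng=rng)
```

The nonparametric flavor comes from the `gamma` term: the number of senses is not fixed in advance but grows with the data.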
Breaking Sticks and Ambiguities with Adaptive Skip-gram
The recently proposed Skip-gram model is a powerful method for learning
high-dimensional word representations that capture rich semantic relationships
between words. However, Skip-gram, like most prior work on learning word
representations, does not take word ambiguity into account and maintains only a
single representation per word. Although a number of Skip-gram modifications
have been proposed to overcome this limitation and learn multi-prototype word
representations, they either require a known number of word meanings or learn
them using greedy heuristic approaches. In this paper we propose the Adaptive
Skip-gram model, a nonparametric Bayesian extension of Skip-gram capable of
automatically learning the required number of representations for all words at
the desired semantic resolution. We derive an efficient online variational
learning algorithm for the model and empirically demonstrate its efficiency on
the word-sense induction task.
Sentiment Analysis of Chinese Microblog Messages Using Neural Network-Based Vector Representation for Measuring Regional Prejudice
Regional prejudice is prevalent in Chinese cities, where native residents and migrants lack a basic level of trust in the other group. Like Twitter, Sina Weibo is a social media platform where people actively engage in discussions on various social issues. Thus, it provides a good data source for measuring individuals’ regional prejudice on a large scale. We find that a resentful tone dominates in Weibo messages related to migrants. In this paper, we propose a novel approach, named DKV, for recognizing the polarity and direction of sentiment in Weibo messages using distributed real-valued vector representations of keywords learned from neural networks. Such a representation can project rich context information (or embedding) into the vector space, and subsequently be used to infer similarity measures among words, sentences, and even documents. We provide a comprehensive performance evaluation to demonstrate that, by exploiting the keyword embeddings, DKV paired with support vector machines can effectively classify a Weibo message into the predefined sentiment polarity and direction. Results demonstrate that our method achieves the best performance among the compared approaches.
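The feature-construction step the abstract describes — turning a message into a fixed-length vector from its keyword embeddings before handing it to an SVM — can be sketched as follows. The keyword list and the random vectors are hypothetical placeholders; in the paper the embeddings come from trained neural networks.

```python
import numpy as np

# Hypothetical keyword embeddings (placeholders for neural-net-learned vectors).
rng = np.random.default_rng(1)
keyword_vecs = {w: rng.standard_normal(8) for w in
                ["migrant", "native", "trust", "resent", "welcome"]}

def message_vector(keywords, dim=8):
    """Represent a message by the mean of its keyword embeddings.

    This fixed-length vector is the kind of feature that can be fed
    to a downstream classifier such as an SVM."""
    vecs = [keyword_vecs[k] for k in keywords if k in keyword_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

feat = message_vector(["migrant", "resent"])  # one message's feature vector
```

Averaging is only one plausible pooling choice; the key point is that the classifier sees a dense, context-informed representation rather than sparse keyword counts.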
Xu: An Automated Query Expansion and Optimization Tool
The exponential growth of information on the Internet is a big challenge for
information retrieval systems in generating relevant results. Novel
approaches are required to reformat or expand user queries to generate a
satisfactory response and increase recall and precision. Query expansion (QE)
is a technique to broaden users' queries by introducing additional tokens or
phrases based on some semantic similarity metrics. The tradeoff is the added
computational complexity to find semantically similar words and a possible
increase in noise in information retrieval. Despite several research efforts on
this topic, QE has not yet been explored enough, and more work is needed on
similarity matching and composition of query terms with the objective of
retrieving a small set of the most appropriate responses. QE should be scalable,
fast, and robust in handling complex queries with a good response time and
noise ceiling. In this paper, we propose Xu, an automated QE technique, using
high dimensional clustering of word vectors and Datamuse API, an open source
query engine to find semantically similar words. We implemented Xu as a command
line tool and evaluated its performance using datasets containing news
articles and human-generated QEs. The evaluation results show that Xu
outperformed Datamuse, achieving about 88% accuracy with reference to the
human-generated QE.
Comment: Accepted to IEEE COMPSAC 201
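The core expansion step — broadening a query term with semantically similar words drawn from a word-vector space — can be sketched with a cosine nearest-neighbor lookup. The tiny vocabulary below is hypothetical; a real system like Xu would consult a trained embedding model or the Datamuse API.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy word-vector table (hypothetical; real systems use trained embeddings).
vocab = ["car", "auto", "vehicle", "banana", "truck"]
vecs = {w: rng.standard_normal(6) for w in vocab}
# Place "auto" near "car" so the toy space has one clear synonym pair.
vecs["auto"] = vecs["car"] + 0.01 * rng.standard_normal(6)

def expand_query(term, k=2):
    """Expand a query term with its k nearest neighbors by cosine similarity."""
    q = vecs[term]
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted((w for w in vocab if w != term),
                    key=lambda w: cos(q, vecs[w]), reverse=True)
    return [term] + scored[:k]

expanded = expand_query("car")  # original term plus two expansion candidates
```

The trade-off the abstract notes is visible even here: each added neighbor can raise recall but also risks admitting noise, so `k` and a similarity threshold govern the noise ceiling.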
Recent Developments in Smart Healthcare
Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medicine research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management, and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, higher quality, and lower cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that roughly span several topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.
Learning Word Representation Considering Proximity and Ambiguity
Distributed representations of words (aka word embeddings) have proven helpful in solving natural language processing (NLP) tasks. Training distributed representations of words with neural networks has lately been a major focus of researchers in the field. Recent work on word embedding, the Continuous Bag-of-Words (CBOW) model and the Continuous Skip-gram (Skip-gram) model, has produced particularly impressive results, significantly speeding up the training process to enable word representation learning from large-scale data. However, neither CBOW nor Skip-gram pays enough attention to word proximity (from the modeling perspective) or word ambiguity (from the linguistic perspective). In this paper, we propose Proximity-Ambiguity Sensitive (PAS) models (i.e., PAS CBOW and PAS Skip-gram) to produce high-quality distributed representations of words that account for both word proximity and ambiguity. From the model perspective, we introduce proximity weights as parameters to be learned in PAS CBOW and used in PAS Skip-gram. By better modeling word proximity, we reveal the strength of pooling-structured neural networks in word representation learning. The proximity-sensitive pooling layer can also be applied to other neural network applications that employ pooling layers. From the linguistic perspective, we train multiple representation vectors per word. Each representation vector corresponds to a particular group of POS tags of the word. Using PAS models, we achieved a 16.9% increase in accuracy over state-of-the-art models.
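The proximity weights described above replace CBOW's plain average of context vectors with a learned, position-dependent weighting. A minimal sketch of that pooling step, with made-up weights standing in for learned parameters:

```python
import numpy as np

def pas_cbow_context(context_vecs, proximity_weights):
    """Proximity-weighted CBOW context pooling (sketch).

    Instead of averaging context word vectors uniformly, each context
    position gets its own weight, so nearer or more informative
    positions can contribute more to the hidden representation."""
    w = np.asarray(proximity_weights, dtype=float)
    w = w / w.sum()  # normalize so the result stays on the same scale
    return (w[:, None] * np.asarray(context_vecs)).sum(axis=0)

ctx = np.ones((4, 3))           # four context word vectors of dimension 3
weights = [0.5, 1.0, 1.0, 0.5]  # hypothetical proximity weights: inner > outer
h = pas_cbow_context(ctx, weights)
```

In the actual PAS models these weights are trained jointly with the embeddings; the sketch only shows where they enter the forward computation.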