Modelling the Lexicon in Unsupervised Part of Speech Induction
Automatically inducing the syntactic part-of-speech categories for words in
text is a fundamental task in Computational Linguistics. While the performance
of unsupervised tagging models has been slowly improving, current
state-of-the-art systems make the obviously incorrect assumption that all
tokens of a given word type must share a single part-of-speech tag. This
one-tag-per-type heuristic counters the tendency of Hidden Markov Model-based
taggers to over-generate tags for a given word type. However, it is clearly
incompatible with basic syntactic theory. In this paper we extend a
state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model
of the lexicon. In doing so we are able to incorporate a soft bias towards
inducing few tags per type. We develop a particle filter for drawing samples
from the posterior of our model and present empirical results that show that
our model is competitive with and faster than the state-of-the-art without
imposing any unrealistic restrictions.
Comment: To be presented at the 14th Conference of the European Chapter of the Association for Computational Linguistics.
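The building block of such taggers is the Pitman-Yor process predictive distribution. Below is a minimal sketch of that predictive probability under a simplified one-table-per-type seating approximation; the function names, the approximation, and the toy counts are assumptions for illustration, not the paper's lexicon model or particle filter.

```python
from collections import Counter

def pyp_predict(word, counts, d, theta, base):
    """Predictive probability of `word` under a Pitman-Yor process with
    discount d, concentration theta, and base distribution `base` (a callable),
    using a one-table-per-type seating approximation (an assumption)."""
    n = sum(counts.values())   # tokens observed so far
    t = len(counts)            # distinct types observed ("tables")
    if n == 0:
        return base(word)
    reuse = max(counts[word] - d, 0.0) / (theta + n)   # reuse an existing table
    new = (theta + d * t) / (theta + n) * base(word)   # open a new table, draw from the base
    return reuse + new

# Usage with a toy tag inventory and a uniform base distribution.
tags = ["DT", "NN", "VB", "JJ"]
uniform = lambda w: 1.0 / len(tags)
counts = Counter({"NN": 5, "VB": 2})
print(pyp_predict("NN", counts, d=0.5, theta=1.0, base=uniform))
print(pyp_predict("JJ", counts, d=0.5, theta=1.0, base=uniform))
```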
Structured Prediction of Sequences and Trees using Infinite Contexts
Linguistic structures exhibit a rich array of global phenomena; however,
commonly used Markov models are unable to adequately describe these phenomena
due to their strong locality assumptions. We propose a novel hierarchical model
for structured prediction over sequences and trees which exploits global
context by conditioning each generation decision on an unbounded context of
prior decisions. This builds on the success of Markov models but without
imposing a fixed bound in order to better represent global phenomena. To
facilitate learning of this large and unbounded model, we use a hierarchical
Pitman-Yor process prior which provides a recursive form of smoothing. We
propose prediction algorithms based on A* and Markov Chain Monte Carlo
sampling. Empirical results demonstrate the potential of our model compared to
baseline finite-context Markov models on part-of-speech tagging and syntactic
parsing.
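A hedged sketch of the recursive smoothing that a hierarchical Pitman-Yor prior supplies: the predictive probability for a long context backs off to the probability under its shorter suffix, down to a uniform base. The one-table-per-continuation approximation, the trigram-style truncation in the demo, and all names are assumptions, not the authors' implementation.

```python
from collections import Counter, defaultdict

# counts[context][symbol] = number of times `symbol` followed `context`.
counts = defaultdict(Counter)

def hpyp_predict(symbol, context, d=0.5, theta=1.0, vocab_size=50):
    """Recursive hierarchical Pitman-Yor predictive probability.
    Backs off from `context` to its shorter suffix, ending at a uniform base."""
    if len(context) == 0:
        base = 1.0 / vocab_size                     # uniform base distribution
    else:
        base = hpyp_predict(symbol, context[1:], d, theta, vocab_size)
    c = counts[context]
    n = sum(c.values())                             # tokens seen in this context
    t = len(c)                                      # distinct continuations ("tables")
    if n == 0:
        return base
    reuse = max(c[symbol] - d, 0.0) / (theta + n)
    backoff = (theta + d * t) / (theta + n) * base
    return reuse + backoff

# Usage on a toy tag sequence; the two-tag context truncation is only for the demo.
tags = ["DT", "NN", "VB", "DT", "NN"]
for i, tag in enumerate(tags):
    counts[tuple(tags[max(0, i - 2):i])][tag] += 1
print(hpyp_predict("NN", ("DT",)))
```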
Misspelling Oblivious Word Embeddings
In this paper we present a method to learn word embeddings that are resilient
to misspellings. Existing word embeddings have limited applicability to
malformed texts, which contain a non-negligible amount of out-of-vocabulary
words. We propose a method combining FastText with subwords and a supervised
task of learning misspelling patterns. In our method, misspellings of each word
are embedded close to their correct variants. We train these embeddings on a
new dataset we are releasing publicly. Finally, we experimentally show the
advantages of this approach on both intrinsic and extrinsic NLP tasks using
public test sets.
Comment: 9 pages.
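An illustrative sketch of the core idea, embedding a misspelling close to its correct form. The real method builds on FastText subword vectors and a supervised misspelling task; the toy vocabulary, pairs, and squared-distance update below are assumptions for illustration only.

```python
import numpy as np

# Sketch: pull the vector of a misspelling toward the vector of its correct
# form by gradient descent on 0.5 * ||e_wrong - e_right||^2 (an assumption,
# not the paper's FastText-based objective).
rng = np.random.default_rng(0)
vocab = {"language": 0, "langauge": 1, "model": 2, "modle": 3}
emb = rng.normal(scale=0.1, size=(len(vocab), 8))
misspelling_pairs = [("langauge", "language"), ("modle", "model")]

def train_step(lr=0.1):
    for wrong, right in misspelling_pairs:
        i, j = vocab[wrong], vocab[right]
        diff = emb[i] - emb[j]        # gradient of the squared distance w.r.t. emb[i]
        emb[i] -= lr * diff           # move the misspelling toward the correct form
        emb[j] += 0.1 * lr * diff     # (assumption) let the correct form move slightly too

for _ in range(50):
    train_step()

a, b = emb[vocab["langauge"]], emb[vocab["language"]]
print("cosine(langauge, language) =", a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```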
Combining independent modules to solve multiple-choice synonym and analogy problems
Existing statistical approaches to natural language problems are very
coarse approximations to the true complexity of language processing.
As such, no single technique will be best for all problem instances.
Many researchers are examining ensemble methods that combine the
output of successful, separately developed modules to create more
accurate solutions. This paper examines three merging rules for
combining probability distributions: the well-known mixture rule, the
logarithmic rule, and a novel product rule. These rules were applied
with state-of-the-art results to two problems commonly used to assess
human mastery of lexical semantics -- synonym questions and analogy
questions. All three merging rules result in ensembles that are more
accurate than any of their component modules. The differences among the
three rules are not statistically significant, but the results suggest
that the popular mixture rule is not the best rule for either of the
two problems.
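For a single multiple-choice question, the three merging rules can be sketched roughly as follows: the mixture rule is a weighted arithmetic mean of the modules' distributions, the logarithmic rule a normalized weighted geometric mean, and the product rule a multiplicative combination. The exact parameterisation of the product rule in the paper may differ from the smoothed form shown here, and the weights and toy distributions are assumptions.

```python
import numpy as np

def mixture(probs, weights):
    """Mixture rule: weighted arithmetic mean of the modules' distributions."""
    p = np.average(probs, axis=0, weights=weights)
    return p / p.sum()

def logarithmic(probs, weights):
    """Logarithmic rule: normalized weighted geometric mean."""
    p = np.exp(np.average(np.log(probs + 1e-12), axis=0, weights=weights))
    return p / p.sum()

def product(probs, weights, n_choices):
    """Product rule as sketched here: multiply each module's distribution after
    smoothing it toward uniform by its weight (the paper's exact form may differ)."""
    smoothed = [w * p + (1 - w) / n_choices for p, w in zip(probs, weights)]
    p = np.prod(smoothed, axis=0)
    return p / p.sum()

# Toy example: three modules score a 4-way synonym question.
probs = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.4, 0.4, 0.1, 0.1],
    [0.6, 0.1, 0.2, 0.1],
])
weights = np.array([1.0, 1.0, 1.0])
print("mixture    ", mixture(probs, weights))
print("logarithmic", logarithmic(probs, weights))
print("product    ", product(probs, weights, n_choices=4))
```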
Learning Word Representations from Relational Graphs
Attributes of words and relations between two words are central to numerous
tasks in Artificial Intelligence such as knowledge representation, similarity
measurement, and analogy detection. Often, when two words share one or more
attributes, they are connected by some semantic relation. On the
other hand, if there are numerous semantic relations between two words, we can
expect some of the attributes of one of the words to be inherited by the other.
Motivated by this close connection between attributes and relations, given a
relational graph in which words are interconnected via numerous semantic
relations, we propose a method to learn a latent representation for the
individual words. The proposed method considers not only the co-occurrences of
words as done by existing approaches for word representation learning, but also
the semantic relations in which two words co-occur. To evaluate the accuracy of
the word representations learnt using the proposed method, we use them to solve
semantic word analogy problems. Our experimental results show that it is
possible to learn better word representations by using semantic relations
between words.
Comment: AAAI 2015.
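One common way to score word analogy questions with learnt embeddings is the vector-offset recipe sketched below; the paper's own analogy-solving procedure may differ, and the toy vectors are placeholders.

```python
import numpy as np

# Sketch of the vector-offset analogy test "a : b :: c : ?"; the toy vectors
# below are placeholders, not learnt representations.
emb = {
    "king":  np.array([0.9, 0.1, 0.2]),
    "man":   np.array([0.8, 0.0, 0.1]),
    "woman": np.array([0.7, 0.6, 0.1]),
    "queen": np.array([0.8, 0.7, 0.2]),
}

def solve_analogy(a, b, c):
    """Return the word d maximizing cos(d, b - a + c), excluding a, b, c."""
    target = emb[b] - emb[a] + emb[c]
    target /= np.linalg.norm(target)
    best, best_score = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue
        score = vec @ target / np.linalg.norm(vec)
        if score > best_score:
            best, best_score = word, score
    return best

print(solve_analogy("man", "king", "woman"))  # expected: "queen" with these toy vectors
```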
The Latent Relation Mapping Engine: Algorithm and Experiments
Many AI researchers and cognitive scientists have argued that analogy is the
core of cognition. The most influential work on computational modeling of
analogy-making is Structure Mapping Theory (SMT) and its implementation in the
Structure Mapping Engine (SME). A limitation of SME is the requirement for
complex hand-coded representations. We introduce the Latent Relation Mapping
Engine (LRME), which combines ideas from SME and Latent Relational Analysis
(LRA) in order to remove the requirement for hand-coded representations. LRME
builds analogical mappings between lists of words, using a large corpus of raw
text to automatically discover the semantic relations among the words. We
evaluate LRME on a set of twenty analogical mapping problems, ten based on
scientific analogies and ten based on common metaphors. LRME achieves
human-level performance on the twenty problems. We compare LRME with a variety
of alternative approaches and find that they are not able to reach the same
level of performance.
Comment: related work available at http://purl.org/peter.turney