Forgetting Exceptions is Harmful in Language Learning
We show that in language learning, contrary to received wisdom, keeping
exceptional training instances in memory can be beneficial for generalization
accuracy. We investigate this phenomenon empirically on a selection of
benchmark natural language processing tasks: grapheme-to-phoneme conversion,
part-of-speech tagging, prepositional-phrase attachment, and base noun phrase
chunking. In a first series of experiments we combine memory-based learning
with training set editing techniques, in which instances are edited based on
their typicality and class prediction strength. Results show that editing
exceptional instances (with low typicality or low class prediction strength)
tends to harm generalization accuracy. In a second series of experiments we
compare memory-based learning and decision-tree learning methods on the same
selection of tasks, and find that decision-tree learning often performs worse
than memory-based learning. Moreover, the decrease in performance can be linked
to the degree of abstraction from exceptions (i.e., pruning or eagerness). We
provide explanations for both results in terms of the properties of the natural
language processing tasks and the learning algorithms.
Comment: 31 pages, 7 figures, 10 tables. Uses 11pt, fullname, a4wide tex styles. Pre-print version of article to appear in Machine Learning 11:1-3, Special Issue on Natural Language Learning. Figures on page 22 slightly compressed to avoid page overload.
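The editing idea can be made concrete with a small sketch. The Python snippet below is an illustration under assumptions of my own (scikit-learn, and same-class agreement among nearest neighbours as a stand-in for class prediction strength), not the authors' implementation: instances whose strength falls below a threshold are edited out of memory before k-NN classification.

```python
# Illustrative sketch (not the paper's code): editing a memory-based (k-NN)
# learner by an approximate class-prediction strength, then measuring the
# effect of the editing on held-out accuracy.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

def class_prediction_strength(X, y, k=5):
    """Fraction of each instance's k nearest neighbours (excluding itself)
    that share its class -- a rough proxy for how 'exceptional' it is."""
    X, y = np.asarray(X), np.asarray(y)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neighbour_labels = y[idx[:, 1:]]              # drop the self-match in column 0
    return (neighbour_labels == y[:, None]).mean(axis=1)

def edit_and_evaluate(X_train, y_train, X_test, y_test, threshold, k=5):
    """Remove 'exceptional' instances (low prediction strength) from memory
    and report test accuracy of the edited memory-based learner.
    X_*, y_*: numpy arrays."""
    strength = class_prediction_strength(X_train, y_train, k)
    keep = strength >= threshold                  # threshold=0.0 keeps everything
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train[keep], y_train[keep])
    return clf.score(X_test, y_test)
```

Comparing `threshold=0.0` (keep all exceptions) against more aggressive thresholds mirrors, in spirit, the paper's comparison between full and edited instance memories.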
Pure Nash Equilibria: Hard and Easy Games
We investigate complexity issues related to pure Nash equilibria of strategic
games. We show that, even in very restrictive settings, determining whether a
game has a pure Nash Equilibrium is NP-hard, while deciding whether a game has
a strong Nash equilibrium is Sigma_2^p-complete. We then study practically
relevant restrictions that lower the complexity. In particular, we are
interested in quantitative and qualitative restrictions on the way each player's
payoff depends on the moves of other players. We say that a game has a small
neighborhood if the utility function of each player depends only on (the
actions of) a logarithmically small number of other players. The dependency
structure of a game G can be expressed by a graph DG(G) or by a hypergraph
H(G). By relating Nash equilibrium problems to constraint satisfaction problems
(CSPs), we show that if G has small neighborhood and if H(G) has bounded
hypertree width (or if DG(G) has bounded treewidth), then finding pure Nash and
Pareto equilibria is feasible in polynomial time. If the game is graphical,
then these problems are LOGCFL-complete and thus in the class NC2 of highly
parallelizable problems.
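For intuition about the underlying decision problem, here is a rough Python sketch, not the paper's CSP encoding or hypertree-decomposition algorithm: it simply enumerates all joint action profiles of an explicitly given game and tests each for profitable unilateral deviations, which takes time exponential in the number of players.

```python
# Illustrative sketch: brute-force search for pure Nash equilibria in a game
# given by explicit payoff functions (a hypothetical representation, not the
# paper's). The exponential running time is consistent with the NP-hardness
# of the general decision problem.
from itertools import product

def pure_nash_equilibria(strategies, payoff):
    """strategies: list of per-player action lists.
    payoff(i, profile): utility of player i under the joint action profile."""
    equilibria = []
    for profile in product(*strategies):
        stable = True
        for i, actions in enumerate(strategies):
            best = max(payoff(i, profile[:i] + (a,) + profile[i + 1:])
                       for a in actions)
            if payoff(i, profile) < best:   # player i has a profitable deviation
                stable = False
                break
        if stable:
            equilibria.append(profile)
    return equilibria

# Example: matching pennies has no pure Nash equilibrium.
pennies = lambda i, p: (1 if p[0] == p[1] else -1) * (1 if i == 0 else -1)
print(pure_nash_equilibria([["H", "T"], ["H", "T"]], pennies))   # -> []
```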
Applying Winnow to Context-Sensitive Spelling Correction
Multiplicative weight-updating algorithms such as Winnow have been studied
extensively in the COLT literature, but only recently have people started to
use them in applications. In this paper, we apply a Winnow-based algorithm to a
task in natural language: context-sensitive spelling correction. This is the
task of fixing spelling errors that happen to result in valid words, such as
substituting "to" for "too", "casual" for "causal", and so
on. Previous approaches to this problem have been statistics-based; we compare
Winnow to one of the more successful such approaches, which uses Bayesian
classifiers. We find that: (1)~When the standard (heavily-pruned) set of
features is used to describe problem instances, Winnow performs comparably to
the Bayesian method; (2)~When the full (unpruned) set of features is used,
Winnow is able to exploit the new features and convincingly outperform Bayes;
and (3)~When a test set is encountered that is dissimilar to the training set,
Winnow is better than Bayes at adapting to the unfamiliar test set, using a
strategy we will present for combining learning on the training set with
unsupervised learning on the (noisy) test set.
Comment: 9 pages.
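The multiplicative update at the heart of Winnow is compact enough to sketch. The snippet below shows the textbook Winnow rule for Boolean features, using a hypothetical (active_feature_indices, label) example format; the paper's Winnow-based system and its spelling-correction feature set are considerably richer.

```python
# Illustrative sketch of the classic Winnow update rule: positive weights,
# multiplicative promotion on false negatives and demotion on false positives.
def winnow_train(examples, n_features, alpha=2.0, threshold=None):
    """examples: iterable of (active_feature_indices, label) with label in {0, 1}."""
    theta = threshold if threshold is not None else n_features / 2.0
    w = [1.0] * n_features
    for active, label in examples:
        score = sum(w[j] for j in active)
        predicted = 1 if score >= theta else 0
        if predicted != label:
            factor = alpha if label == 1 else 1.0 / alpha   # promote or demote
            for j in active:
                w[j] *= factor
    return w, theta
```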
Prediction in Financial Markets: The Case for Small Disjuncts
Predictive modeling in regression and classification problems typically
produces a single model that covers most, if not all, cases in the data. At
the opposite end of the spectrum is a collection of models each of which
covers a very small subset of the decision space. These are referred to
as “small disjuncts.” The tradeoffs between the two types of
models have been well documented. Single models, especially linear ones,
are easy to interpret and explain. In contrast, small disjuncts do not
provide as clean or as simple an interpretation of the data, and have
been shown by several researchers to be responsible for a
disproportionately large number of errors when applied to out-of-sample
data. This research provides a counterpoint, demonstrating that
“simple” small disjuncts provide a credible model for
financial market prediction, a problem with a high degree of noise. A
related novel contribution of this paper is a simple method for
measuring the “yield” of a learning system, which is the
percentage of in sample performance that the learned model can be
expected to realize on out-of-sample data. Curiously, such a measure is
missing from the literature on regression learning algorithms.
NYU Stern School of Business
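The yield measure suggests a one-line formula. As a hedged reading of the description above, yield can be taken as the out-of-sample score expressed as a percentage of the in-sample score; the paper's exact definition may differ in detail.

```python
# Illustrative sketch of the "yield" idea described above: the share of
# in-sample performance that survives out of sample.
def learning_yield(in_sample_score, out_of_sample_score):
    """Return out-of-sample performance as a percentage of in-sample performance."""
    if in_sample_score == 0:
        raise ValueError("in-sample score must be non-zero")
    return 100.0 * out_of_sample_score / in_sample_score

print(learning_yield(0.62, 0.55))   # -> 88.7..., i.e. ~89% of in-sample accuracy retained
```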
Learning When Training Data are Costly: The Effect of Class Distribution on Tree Induction
For large, real-world inductive learning problems, the number of training
examples often must be limited due to the costs associated with procuring,
preparing, and storing the training examples and/or the computational costs
associated with learning from them. In such circumstances, one question of
practical importance is: if only n training examples can be selected, in what
proportion should the classes be represented? In this article we help to answer
this question by analyzing, for a fixed training-set size, the relationship
between the class distribution of the training data and the performance of
classification trees induced from these data. We study twenty-six data sets
and, for each, determine the best class distribution for learning. The
naturally occurring class distribution is shown to generally perform well when
classifier performance is evaluated using undifferentiated error rate (0/1
loss). However, when the area under the ROC curve is used to evaluate
classifier performance, a balanced distribution is shown to perform well. Since
neither of these choices for class distribution always generates the
best-performing classifier, we introduce a budget-sensitive progressive
sampling algorithm for selecting training examples based on the class
associated with each example. An empirical analysis of this algorithm shows
that the class distribution of the resulting training set yields classifiers
with good (nearly-optimal) classification performance.
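A hedged sketch of the progressive-sampling idea is shown below, under assumptions of my own (binary labels, decision trees, AUC scored on a held-out set, a geometric sampling schedule); the paper's budget-sensitive algorithm is specified precisely there.

```python
# Illustrative sketch (not the paper's exact algorithm): choose a class
# distribution for a fixed labelling budget by progressive sampling -- grow
# the sample geometrically, score a few candidate minority-class proportions
# by AUC on held-out data, and keep the best-performing one.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

def draw(X, y, n, minority_frac, rng):
    """Draw about n examples with roughly the requested minority (label 1) share."""
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    n_pos = min(int(round(n * minority_frac)), len(pos))
    idx = np.concatenate([rng.choice(pos, n_pos, replace=False),
                          rng.choice(neg, min(n - n_pos, len(neg)), replace=False)])
    return X[idx], y[idx]

def pick_class_distribution(X, y, X_val, y_val, budget,
                            candidates=(0.1, 0.3, 0.5), start=100, growth=2, seed=0):
    rng, n, best = np.random.default_rng(seed), start, candidates[0]
    while n <= budget:
        scores = {}
        for frac in candidates:
            Xs, ys = draw(X, y, n, frac, rng)
            clf = DecisionTreeClassifier(random_state=0).fit(Xs, ys)
            scores[frac] = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
        best = max(scores, key=scores.get)   # best distribution seen so far
        n *= growth                          # spend the next slice of the budget
    return best                              # distribution to use for the full budget
```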
A performance comparison of oversampling methods for data generation in imbalanced learning tasks
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Marketing Research and CRM.
The class imbalance problem is one of the most fundamental challenges faced by the machine learning community. The imbalance refers to the number of instances in the class of interest being relatively low compared to the rest of the data. Sampling is a common technique for dealing with this problem, and a number of oversampling approaches have been applied in an attempt to balance the classes. This study provides an overview of the class imbalance issue and examines some common oversampling approaches for dealing with it. To illustrate the differences, an experiment is conducted using multiple simulated data sets to compare the performance of these oversampling methods on different classifiers according to various evaluation criteria. In addition, the effect of different parameters, such as the number of features and the imbalance ratio, on classifier performance is also evaluated.
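As a rough illustration of the kind of comparison described, the snippet below contrasts no resampling, random oversampling, and SMOTE on a simulated imbalanced data set, assuming scikit-learn and the imbalanced-learn package; the thesis's own experimental setup is not reproduced here.

```python
# Illustrative sketch: compare oversampling strategies on a simulated
# imbalanced binary classification task (5% minority class).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from imblearn.over_sampling import RandomOverSampler, SMOTE

X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("none", None),
                      ("random oversampling", RandomOverSampler(random_state=0)),
                      ("SMOTE", SMOTE(random_state=0))]:
    # Resample only the training split, then evaluate on untouched test data.
    Xb, yb = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
    print(f"{name}: minority-class F1 = {f1_score(y_te, clf.predict(X_te)):.3f}")
```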
Automated Reasoning with Epistemic Graphs Using SAT Solvers
Epistemic graphs have been developed for modelling an agent's degree of belief in an argument and how belief in one argument may influence the belief in other arguments. These beliefs are represented by constraints on probability distributions. In this paper, we present a framework for reasoning with epistemic graphs that allows beliefs in individual arguments to be determined given beliefs in some of the other arguments. We present and evaluate algorithms based on SAT solvers.
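As a toy illustration only: the paper encodes constraints on degrees of belief for SAT solvers, whereas the sketch below brute-forces a much cruder Boolean abstraction in which each argument is simply believed or not believed and constraints are arbitrary predicates over that labelling.

```python
# Illustrative sketch: enumerate Boolean belief labellings of arguments that
# satisfy a set of constraints, optionally fixing beliefs in some arguments.
# This is a crude stand-in for the paper's probabilistic, SAT-based reasoning.
from itertools import product

def consistent_labellings(arguments, constraints, fixed=None):
    """Yield labellings (dicts arg -> bool) satisfying every constraint,
    holding the arguments in `fixed` to the given truth values."""
    fixed = fixed or {}
    for values in product([False, True], repeat=len(arguments)):
        labelling = dict(zip(arguments, values))
        if all(labelling[a] == v for a, v in fixed.items()) and \
           all(c(labelling) for c in constraints):
            yield labelling

# Example: argument B attacks A, so believing B rules out believing A.
args = ["A", "B"]
cons = [lambda m: not (m["B"] and m["A"])]
print(list(consistent_labellings(args, cons, fixed={"B": True})))
# -> [{'A': False, 'B': True}]
```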