Acquiring Word-Meaning Mappings for Natural Language Interfaces
This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted
Examples), that acquires a semantic lexicon from a corpus of sentences paired
with semantic representations. The lexicon learned consists of phrases paired
with meaning representations. WOLFIE is part of an integrated system that
learns to transform sentences into representations such as logical database
queries. Experimental results are presented demonstrating WOLFIE's ability to
learn useful lexicons for a database interface in four different natural
languages. The usefulness of the lexicons learned by WOLFIE is compared to
those acquired by a similar system, with results favorable to WOLFIE. A second
set of experiments demonstrates WOLFIE's ability to scale to larger and more
difficult, albeit artificially generated, corpora. In natural language
acquisition, it is difficult to gather the annotated data needed for supervised
learning; however, unannotated data is fairly plentiful. Active learning
methods attempt to select for annotation and training only the most informative
examples, and therefore are potentially very useful in natural language
applications. However, most results to date for active learning have only
considered standard classification tasks. To reduce annotation effort while
maintaining accuracy, we apply active learning to semantic lexicons. We show
that active learning can significantly reduce the number of annotated examples
required to achieve a given level of performance
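The selection strategy described above can be illustrated with a minimal pool-based uncertainty-sampling sketch. This is a generic illustration, not WOLFIE's actual implementation: the example ids, confidence scores, and the `least_confident` helper are all hypothetical stand-ins for a learned model's certainty estimates.

```python
# Sketch of pool-based active learning via uncertainty sampling.
# The confidence scores here are made up; in practice they would come
# from the current model's certainty on each unannotated example.
def least_confident(pool, confidence, k):
    """Select the k pool examples the model is least confident about."""
    return sorted(pool, key=confidence)[:k]

# Toy pool of unannotated sentence ids with assumed model confidences.
pool = ["s1", "s2", "s3", "s4", "s5"]
conf = {"s1": 0.9, "s2": 0.2, "s3": 0.6, "s4": 0.1, "s5": 0.8}

to_annotate = least_confident(pool, conf.get, 2)
print(to_annotate)  # the two least-confident examples
```

Only the selected examples are sent for annotation and added to the training set; the model is then retrained and the pool rescored, which is how annotation effort is reduced while accuracy is maintained.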
Finding Non-overlapping Clusters for Generalized Inference Over Graphical Models
Graphical models use graphs to compactly capture stochastic dependencies
amongst a collection of random variables. Inference over graphical models
corresponds to finding marginal probability distributions given joint
probability distributions. In general, this is computationally intractable,
which has led to a quest for finding efficient approximate inference
algorithms. We propose a framework for generalized inference over graphical
models that can be used as a wrapper for improving the estimates of approximate
inference algorithms. Instead of applying an inference algorithm to the
original graph, we apply the inference algorithm to a block-graph, defined as a
graph in which the nodes are non-overlapping clusters of nodes from the
original graph. This results in marginal estimates of a cluster of nodes, which
we further marginalize to get the marginal estimates of each node. Our proposed
block-graph construction algorithm is simple, efficient, and motivated by the
observation that approximate inference is more accurate on graphs with longer
cycles. We present extensive numerical simulations that illustrate our
block-graph framework with a variety of inference algorithms (e.g., those in
the libDAI software package). These simulations show the improvements provided
by our framework.

Comment: Extended the previous version to include extensive numerical
simulations. See http://www.ima.umn.edu/~dvats/GeneralizedInference.html for
code and data
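The final marginalization step, recovering per-node marginals from a cluster's estimated marginal, can be sketched as follows. This is a toy illustration of summing out the other variables in a non-overlapping cluster, not the paper's block-graph construction algorithm; the variable names and the cluster marginal values are invented for the example.

```python
# Sketch: given an estimated marginal over a non-overlapping cluster of
# binary variables, recover each node's marginal by summing out the rest.
def node_marginals(cluster_vars, cluster_marginal):
    """cluster_marginal maps assignment tuples of 0/1 to probabilities."""
    out = {}
    for i, v in enumerate(cluster_vars):
        # Probability mass of all cluster assignments where variable v = 1.
        p1 = sum(p for a, p in cluster_marginal.items() if a[i] == 1)
        out[v] = {0: 1.0 - p1, 1: p1}
    return out

# Toy cluster marginal over two binary variables (a, b).
cm = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
m = node_marginals(("a", "b"), cm)
print(round(m["a"][1], 6), round(m["b"][1], 6))
```

In the framework described, the cluster marginals themselves would come from running an approximate inference algorithm (e.g., one from libDAI) on the block-graph rather than on the original graph.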