An Alternative Conception of Tree-Adjoining Derivation
The precise formulation of derivation for tree-adjoining grammars has
important ramifications for a wide variety of uses of the formalism, from
syntactic analysis to semantic interpretation and statistical language
modeling. We argue that the definition of tree-adjoining derivation must be
reformulated in order to manifest the proper linguistic dependencies in
derivations. The particular proposal is both precisely characterizable through
a definition of TAG derivations as equivalence classes of ordered derivation
trees, and computationally operational, by virtue of a compilation to linear
indexed grammars together with an efficient algorithm for recognition and
parsing according to the compiled grammar.
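The proposal can be pictured with a small sketch, not taken from the paper: if a derivation node records its adjunctions as (address, operation, subderivation) triples, then two ordered derivation trees that list independent adjunctions in different orders fall into the same equivalence class, which a canonicalizing helper can detect. The representation and the canonical() function below are invented for illustration.

    def canonical(deriv):
        """Canonical representative of a derivation tree's equivalence class.

        deriv is (tree_name, [(address, op, subderiv), ...]). Operations at
        distinct addresses are independent, so sorting them collapses each
        equivalence class of ordered derivation trees to one representative.
        """
        name, children = deriv
        norm = [(addr, op, canonical(sub)) for addr, op, sub in children]
        return (name, tuple(sorted(norm, key=lambda c: (c[0], c[1]))))

    # Two orderings of the same independent adjunctions compare equal:
    d1 = ("alpha", [("0", "adjoin", ("beta1", [])),
                    ("1", "adjoin", ("beta2", []))])
    d2 = ("alpha", [("1", "adjoin", ("beta2", [])),
                    ("0", "adjoin", ("beta1", []))])
    assert canonical(d1) == canonical(d2)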
Global Thresholding and Multiple Pass Parsing
We present a variation on classic beam thresholding techniques that is up to
an order of magnitude faster than the traditional method, at the same
performance level. We also present a new thresholding technique, global
thresholding, which, combined with the new beam thresholding, gives an
additional factor of two improvement, and a novel technique, multiple pass
parsing, that can be combined with the others to yield yet another 50%
improvement. We use a new search algorithm to simultaneously optimize the
thresholding parameters of the various algorithms.
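For readers unfamiliar with the baseline being improved here, the following is a minimal sketch of classic per-cell beam thresholding: items in a chart cell are pruned when their score falls below a fixed fraction of the best score in that cell. The chart representation and beam value are illustrative; global thresholding, per the abstract, instead compares items against estimates computed over the whole chart.

    def beam_prune(cell, beam=1e-4):
        """cell maps nonterminal -> inside probability for one span."""
        if not cell:
            return cell
        best = max(cell.values())
        # Keep only items within a factor of `beam` of the cell's best item.
        return {nt: p for nt, p in cell.items() if p >= beam * best}

    cell = {"NP": 0.02, "VP": 0.5, "X": 1e-9}
    print(beam_prune(cell))  # {'NP': 0.02, 'VP': 0.5}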
Probabilistic Constraint Logic Programming
This paper addresses two central problems for probabilistic processing
models: parameter estimation from incomplete data and efficient retrieval of
most probable analyses. These questions have been answered satisfactorily only
for probabilistic regular and context-free models. We address these problems
for a more expressive probabilistic constraint logic programming model. We
present a log-linear probability model for probabilistic constraint logic
programming. On top of this model we define an algorithm to estimate the
parameters and to select the properties of log-linear models from incomplete
data. This algorithm is an extension of the improved iterative scaling
algorithm of Della Pietra, Della Pietra, and Lafferty (1995). Our algorithm
applies to log-linear models in general and is accompanied by suitable
approximation methods when applied to large data spaces. Furthermore, we
present an approach for searching for most probable analyses of the
probabilistic constraint logic programming model. This method can be applied to
the ambiguity resolution problem in natural language processing applications.
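To fix ideas, here is a minimal sketch of the log-linear form itself: each analysis receives a weight proportional to the exponentiated weighted feature sum, normalized over the candidate set. The features, weights, and analyses below are invented for illustration; the paper's contribution lies in estimating such weights from incomplete data, which this sketch omits.

    import math

    def loglinear_probs(analyses, features, weights):
        # p(y) proportional to exp(sum_i lambda_i * f_i(y)), normalized
        # over the candidate analyses of one input.
        scores = [math.exp(sum(w * f(y) for f, w in zip(features, weights)))
                  for y in analyses]
        z = sum(scores)
        return [s / z for s in scores]

    # Toy example: two analyses of one ambiguous input, two features.
    analyses = ["attach_high", "attach_low"]
    features = [lambda y: 1.0 if y == "attach_low" else 0.0,
                lambda y: len(y) / 10.0]
    weights = [0.7, -0.1]
    print(loglinear_probs(analyses, features, weights))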
Efficiency of Top-Down Parsing of Recursive Adjunction for Tree Adjoining Grammar
CKY-type and Earley-type parsers are two widely used parsing algorithms for Tree Adjoining Grammar (TAG). In contrast, a standard top-down parser is inefficient, since looping arises during both the left and the right recursion of standard TAG derivation. Roark (2001) combines a top-down parser for CFG with beam search, showing that the probabilistic top-down parser yields a perplexity improvement over previous results. In this paper, we define a stochastic tree adjoining grammar and apply the probabilistic top-down parser for CFG to TAG. Comparing the parsing efficiency of the standard and the alternative TAG derivation of recursive adjunction, we find that the alternative derivation is more efficient: it avoids the looping problem of right recursion and thereby increases the parsing efficiency of our top-down parser.
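The looping problem mentioned above is easy to see on a toy left-recursive CFG rule; the grammar and depth cutoff below are invented for illustration, and the analogous right-recursion loop arises during adjunction in standard top-down TAG derivation.

    # With NP -> NP PP, a naive top-down expansion rewrites NP to NP PP to
    # NP PP PP ... without ever consuming input; the cutoff only makes the
    # non-termination visible.
    GRAMMAR = {"NP": [["NP", "PP"], ["noun"]]}

    def expand(symbol, depth=0, max_depth=5):
        if depth > max_depth:
            return ["<loops forever>"]
        if symbol not in GRAMMAR:
            return [symbol]              # terminal
        rhs = GRAMMAR[symbol][0]         # the left-recursive rule
        out = []
        for s in rhs:
            out += expand(s, depth + 1, max_depth)
        return out

    print(expand("NP"))  # NP keeps expanding before any word is matched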
Efficient Algorithms for Parsing the DOP Model
Excellent results have been reported for Data-Oriented Parsing (DOP) of
natural language texts (Bod, 1993). Unfortunately, existing algorithms are both
computationally intensive and difficult to implement. Previous algorithms are
expensive due to two factors: the exponential number of rules that must be
generated and the use of a Monte Carlo parsing algorithm. In this paper we
solve the first problem by a novel reduction of the DOP model to a small,
equivalent probabilistic context-free grammar. We solve the second problem by a
novel deterministic parsing strategy that maximizes the expected number of
correct constituents, rather than the probability of a correct parse tree.
With these optimizations, our experiments yield a 97% crossing brackets rate and an 88%
zero crossing brackets rate. This differs significantly from the results
reported by Bod, and is comparable to results from a duplication of Pereira and
Schabes's (1992) experiment on the same data. We show that Bod's results are at
least partially due to an extremely fortuitous choice of test data, and
partially due to using cleaner data than other researchers.
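The second optimization can be sketched as a dynamic program over constituent posteriors: rather than the most probable tree, choose the binarized tree whose spans maximize summed posterior probability. The posteriors are assumed precomputed (from inside-outside probabilities, omitted here), the real algorithm also tracks constituent labels, and post with its toy values is illustrative.

    from functools import lru_cache

    def max_recall_tree(post, n):
        # post[(i, j)] = posterior probability that a constituent spans
        # words i..j; best(i, j) returns (score, bracketing).
        @lru_cache(maxsize=None)
        def best(i, j):
            if j - i == 1:
                return post.get((i, j), 0.0), (i, j)
            score, k = max((best(i, k)[0] + best(k, j)[0], k)
                           for k in range(i + 1, j))
            return (score + post.get((i, j), 0.0),
                    ((i, j), best(i, k)[1], best(k, j)[1]))
        return best(0, n)

    post = {(0, 3): 1.0, (0, 2): 0.7, (1, 3): 0.2,
            (0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
    print(max_recall_tree(post, 3))  # prefers the (0, 2) bracketing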
Parsing Inside-Out
The inside-outside probabilities are typically used for reestimating
Probabilistic Context Free Grammars (PCFGs), just as the forward-backward
probabilities are typically used for reestimating HMMs. I show several novel
uses, including improving parser accuracy by matching parsing algorithms to
evaluation criteria; speeding up DOP parsing by a factor of 500; and achieving 30 times faster
PCFG thresholding at a given accuracy level. I also give an elegant,
state-of-the-art grammar formalism, which can be used to compute inside-outside
probabilities; and a parser description formalism, which makes it easy to
derive inside-outside formulas and many others.
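As a reference point, this is a minimal sketch of the inside pass for a PCFG in Chomsky normal form; the toy lexicon and rules are illustrative, and the outside pass (the mirror-image recursion) is omitted.

    from collections import defaultdict

    def inside_probs(words, lex, rules):
        """lex: (A, word) -> prob; rules: (A, B, C) -> prob for A -> B C."""
        n = len(words)
        beta = defaultdict(float)  # beta[(A, i, j)] = P(A derives words i..j)
        for i, w in enumerate(words):
            for (A, word), p in lex.items():
                if word == w:
                    beta[(A, i, i + 1)] += p
        for width in range(2, n + 1):
            for i in range(n - width + 1):
                j = i + width
                for (A, B, C), p in rules.items():
                    for k in range(i + 1, j):
                        beta[(A, i, j)] += p * beta[(B, i, k)] * beta[(C, k, j)]
        return beta

    lex = {("N", "dogs"): 1.0, ("V", "bark"): 1.0}
    rules = {("S", "N", "V"): 1.0}
    print(inside_probs(["dogs", "bark"], lex, rules)[("S", 0, 2)])  # 1.0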
Fragment Grammars: Exploring Computation and Reuse in Language
Language relies on a division of labor between stored units and structure-building operations that combine the stored units into larger structures. This division of labor leads to a tradeoff: more structure-building means less need to store, while more storage means less need to compute structure. We develop a hierarchical Bayesian model called fragment grammar to explore the optimum balance between structure-building and reuse. The model is developed in the context of stochastic functional programming (SFP), in particular using a probabilistic variant of Lisp known as the Church programming language (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2008). We show how to formalize several probabilistic models of language structure using Church, and how fragment grammar generalizes one of them---adaptor grammars (Johnson, Griffiths, & Goldwater, 2007). We conclude with experimental data from adults and preliminary evaluations of the model on natural language corpus data.
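A rough sketch of the reuse mechanism may be helpful. The paper itself works in Church; the Python below, with its toy grammar, ALPHA parameter, and simplified Chinese-restaurant-style cache, is only an illustration of the storage-versus-computation tradeoff that fragment grammars (and the adaptor grammars they generalize) make explicit.

    import random

    GRAMMAR = {"NP": [["Det", "N"]], "Det": [["the"]], "N": [["dog"], ["cat"]]}
    CACHE = {nt: [] for nt in GRAMMAR}   # stored subtrees per nonterminal
    ALPHA = 1.0                          # higher = more fresh computation

    def generate(nt):
        if nt not in GRAMMAR:
            return nt                    # terminal symbol
        cached = CACHE[nt]
        # Reuse a stored subtree with probability |cache| / (|cache| + ALPHA).
        if cached and random.random() < len(cached) / (len(cached) + ALPHA):
            return random.choice(cached)             # storage: no recomputation
        rhs = random.choice(GRAMMAR[nt])
        tree = (nt, [generate(s) for s in rhs])      # computation from rules
        cached.append(tree)
        return tree

    random.seed(0)
    print([generate("NP") for _ in range(3)])  # later draws may reuse subtrees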
A sequence-length sensitive approach to learning biological grammars using inductive logic programming.
This thesis investigates whether the ideas behind compression principles, such as Minimum Description Length, can help us improve the process of learning biological grammars from protein sequences using Inductive Logic Programming (ILP). Unlike most traditional ILP learning problems, biological sequences often vary greatly in length, and this variation is an important feature that should not be ignored by ILP systems. However, we have identified that some ILP systems do not take the length of examples into account when evaluating their proposed hypotheses. During the learning process, many ILP systems use clause evaluation functions to assign a score to induced hypotheses, estimating their quality and effectively influencing the search. Traditionally, clause evaluation functions do not take into account the length of the examples covered by a clause. We propose L-modification, a way of modifying existing clause evaluation functions so that they take into account the length of the examples they learn from. An empirical study was undertaken to investigate whether significant improvements can be achieved by applying L-modification to a standard clause evaluation function, and more generally to examine how ILP systems cope with the length of examples in training data. We show that our L-modified clause evaluation function outperforms our benchmark function in every experiment we conducted, demonstrating that L-modification is a useful concept. We also show that the length of the examples in the training data used by ILP systems has an undeniable impact on the results.
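Since the abstract does not spell out the modification itself, the following hypothetical sketch only illustrates the underlying idea: a standard coverage-based clause score counts covered positive examples regardless of their length, while a length-sensitive variant weights each covered example by its length. All names and data are invented, and the thesis's actual definition may differ.

    def coverage_score(clause_covers, positives):
        # Classic evaluation: one point per covered positive example.
        return sum(1 for ex in positives if clause_covers(ex))

    def l_modified_score(clause_covers, positives):
        # Length-sensitive variant: weight each covered example by length,
        # so explaining a long sequence counts for more than a short one.
        return sum(len(ex) for ex in positives if clause_covers(ex))

    positives = ["MKT", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIE"]
    covers = lambda seq: seq.startswith("MKT")
    print(coverage_score(covers, positives))    # 2
    print(l_modified_score(covers, positives))  # 34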