Efficient Algorithms for Parsing the DOP Model
Excellent results have been reported for Data-Oriented Parsing (DOP) of
natural language texts (Bod, 1993). Unfortunately, existing algorithms are both
computationally intensive and difficult to implement. Previous algorithms are
expensive due to two factors: the exponential number of rules that must be
generated and the use of a Monte Carlo parsing algorithm. In this paper we
solve the first problem by a novel reduction of the DOP model to a small,
equivalent probabilistic context-free grammar. We solve the second problem by a
novel deterministic parsing strategy that maximizes the expected number of
correct constituents, rather than the probability of a correct parse tree.
Using the optimizations, experiments yield a 97% crossing brackets rate and 88%
zero crossing brackets rate. This differs significantly from the results
reported by Bod, and is comparable to results from a duplication of Pereira and
Schabes's (1992) experiment on the same data. We show that Bod's results are at
least partially due to an extremely fortuitous choice of test data, and
partially due to using cleaner data than other researchers.
Comment: 10 pages
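The second idea above, choosing the tree that maximizes the expected number of correct constituents rather than the most probable tree, can be sketched as a small dynamic program. It assumes the posterior probability of each span being a constituent is already available (in the paper these come from the PCFG reduction; the `post` table below is a made-up toy example, and the function name is illustrative):

```python
def max_constituent_parse(n, post):
    """Pick the binary bracketing over n words whose spans have the
    largest total posterior probability of being constituents."""
    best = {}   # best[(i, j)] = max expected correct constituents in span
    back = {}   # back[(i, j)] = best split point for span (i, j)
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            score = post.get((i, j), 0.0)
            if length == 1:
                best[(i, j)] = score
                continue
            # best split: maximize the sum of the two sub-spans' scores
            sub, split = max((best[(i, m)] + best[(m, j)], m)
                             for m in range(i + 1, j))
            best[(i, j)] = score + sub
            back[(i, j)] = split

    def tree(i, j):
        if j - i == 1:
            return (i, j)
        m = back[(i, j)]
        return ((i, j), tree(i, m), tree(m, j))

    return best[(0, n)], tree(0, n)

# Toy span posteriors for a 3-word sentence: bracketing (0,2) is
# much more likely to be a constituent than (1,3).
post = {(0, 3): 1.0, (0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0,
        (0, 2): 0.7, (1, 3): 0.2}
score, parse = max_constituent_parse(3, post)
# parse groups words 0-1 together, since post[(0, 2)] > post[(1, 3)]
```

This is the decoding step only; computing the span posteriors themselves (e.g., from inside-outside scores) is left out of the sketch.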
Probabilistic Constraint Logic Programming
This paper addresses two central problems for probabilistic processing
models: parameter estimation from incomplete data and efficient retrieval of
most probable analyses. These questions have been answered satisfactorily only
for probabilistic regular and context-free models. We address these problems
for a more expressive probabilistic constraint logic programming model. We
present a log-linear probability model for probabilistic constraint logic
programming. On top of this model we define an algorithm to estimate the
parameters and to select the properties of log-linear models from incomplete
data. This algorithm is an extension of the improved iterative scaling
algorithm of Della-Pietra, Della-Pietra, and Lafferty (1995). Our algorithm
applies to log-linear models in general and is accompanied by suitable
approximation methods when applied to large data spaces. Furthermore, we
present an approach for searching for most probable analyses of the
probabilistic constraint logic programming model. This method can be applied to
the ambiguity resolution problem in natural language processing applications.
Comment: 35 pages, uses sfbart.cl
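The parameter-estimation step can be illustrated with the simplest member of the iterative-scaling family. The sketch below is generalized iterative scaling (GIS) on a toy complete-data problem, not the paper's extension of improved iterative scaling to incomplete data; all feature vectors and counts are invented for illustration:

```python
import math

# Toy log-linear model p(x) proportional to exp(sum_i w_i * f_i(x))
# over four candidate analyses with three binary features each.
analyses = [(1, 0, 1), (0, 1, 1), (1, 1, 0), (0, 0, 1)]
counts = [5.0, 3.0, 1.0, 1.0]   # made-up empirical counts
N = sum(counts)

# GIS needs a constant total feature mass C; pad with a slack feature.
C = max(sum(f) for f in analyses)
feats = [list(f) + [C - sum(f)] for f in analyses]
k = len(feats[0])

# Empirical feature expectations from the observed counts.
emp = [sum(c * f[i] for c, f in zip(counts, feats)) / N for i in range(k)]

w = [0.0] * k
for _ in range(1000):
    scores = [math.exp(sum(wi * fi for wi, fi in zip(w, f))) for f in feats]
    Z = sum(scores)
    p = [s / Z for s in scores]
    model = [sum(pi * f[i] for pi, f in zip(p, feats)) for i in range(k)]
    # GIS update: w_i += (1/C) * log(empirical / model expectation)
    w = [wi + (1.0 / C) * math.log(e / m)
         for wi, e, m in zip(w, emp, model)]
```

At convergence the model's feature expectations match the empirical ones, which is exactly the maximum-likelihood condition for a log-linear model; the incomplete-data setting the paper addresses replaces the empirical expectations with expectations over unobserved analyses.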
Three Generative, Lexicalised Models for Statistical Parsing
In this paper we first propose a new statistical parsing model, which is a
generative model of lexicalised context-free grammar. We then extend the model
to include a probabilistic treatment of both subcategorisation and wh-movement.
Results on Wall Street Journal text show that the parser performs at 88.1/87.5%
constituent precision/recall, an average improvement of 2.3% over (Collins 96).
Comment: 8 pages, to appear in Proceedings of ACL/EACL 97
Generalizing input-driven languages: theoretical and practical benefits
Regular languages (RL) are the simplest family in Chomsky's hierarchy. Thanks
to their simplicity they enjoy various nice algebraic and logic properties that
have been successfully exploited in many application fields. Practically all of
their related problems are decidable, so that they support automatic
verification algorithms. Also, they can be recognized in real-time.
Context-free languages (CFL) are another major family well-suited to
formalize programming, natural, and many other classes of languages; their
increased generative power w.r.t. RL, however, causes the loss of several
closure properties and of the decidability of important problems; furthermore
they need complex parsing algorithms. Thus, various subclasses thereof have
been defined with different goals, spanning from efficient, deterministic
parsing to closure properties, logic characterization and automatic
verification techniques.
Among CFL subclasses, so-called structured ones, i.e., those where the
typical tree-structure is visible in the sentences, exhibit many of the
algebraic and logic properties of RL, whereas deterministic CFL have been
thoroughly exploited in compiler construction and other application fields.
After surveying and comparing the main properties of those various language
families, we go back to operator precedence languages (OPL), an old family
through which R. Floyd pioneered deterministic parsing, and we show that they
offer unexpected properties in two fields so far investigated in totally
independent ways: they enable parsing parallelization in a more effective way
than traditional sequential parsers, and exhibit the same algebraic and logic
properties so far obtained only for less expressive language families
…