Latent Tree Learning with Differentiable Parsers: Shift-Reduce Parsing and Chart Parsing
Latent tree learning models represent sentences by composing their words according to an induced parse tree, learned solely from a downstream task. These
models often outperform baselines which use (externally provided) syntax trees
to drive the composition order. This work contributes (a) a new latent tree
learning model based on shift-reduce parsing, with competitive downstream
performance and non-trivial induced trees, and (b) an analysis of the trees
learned by our shift-reduce model and by a chart-based model.
Comment: ACL 2018 workshop on Relevance of Linguistic Structure in Neural Architectures for NLP
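
As a hedged illustration of the mechanism (not the paper's model), the sketch below shows how a fixed sequence of SHIFT/REDUCE actions can drive composition order over word vectors; compose() is a toy stand-in for the learned composition function, and in a latent tree learning model the action sequence itself would be predicted by a trained policy.

```python
# Minimal shift-reduce composition sketch (hypothetical, not the paper's model).
import numpy as np

def compose(left, right):
    # Placeholder composition; a latent tree learning model would use a
    # trained network (e.g., a TreeLSTM cell) here instead.
    return np.tanh(left + right)

def shift_reduce(word_vecs, actions):
    """Apply SHIFT (push next word) / REDUCE (compose top two) actions."""
    buffer = list(word_vecs)   # words left to read, in order
    stack = []
    for a in actions:
        if a == "SHIFT":
            stack.append(buffer.pop(0))
        else:  # REDUCE
            right, left = stack.pop(), stack.pop()
            stack.append(compose(left, right))
    assert len(stack) == 1 and not buffer
    return stack[0]            # sentence representation at the tree's root

vecs = [np.random.randn(4) for _ in range(3)]
# Right-branching tree over 3 words: (w1 (w2 w3))
print(shift_reduce(vecs, ["SHIFT", "SHIFT", "SHIFT", "REDUCE", "REDUCE"]))
```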
Stochastic Attribute-Value Grammars
Probabilistic analogues of regular and context-free grammars are well-known
in computational linguistics, and currently the subject of intensive research.
To date, however, no satisfactory probabilistic analogue of attribute-value
grammars has been proposed: previous attempts have failed to define a correct
parameter-estimation algorithm.
In the present paper, I define stochastic attribute-value grammars and give a
correct algorithm for estimating their parameters. The estimation algorithm is
adapted from Della Pietra, Della Pietra, and Lafferty (1995). To estimate model
parameters, it is necessary to compute the expectations of certain functions
under random fields. In the application discussed by Della Pietra, Della
Pietra, and Lafferty (representing English orthographic constraints), Gibbs
sampling can be used to estimate the needed expectations. The fact that
attribute-value grammars generate constrained languages makes Gibbs sampling
inapplicable, but I show how a variant of Gibbs sampling, the
Metropolis-Hastings algorithm, can be used instead.
Comment: 23 pages, 21 Postscript figures, uses rotate.sty
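
A minimal sketch of the estimation step under toy assumptions: Metropolis-Hastings sampling to approximate a feature expectation under an unnormalized random field p(x) ∝ exp(λ·f(x)). The four-string state space, single feature, and uniform proposal below are illustrative stand-ins, not the attribute-value grammar setting of the paper.

```python
# Metropolis-Hastings estimate of E_p[f] under p(x) ∝ exp(lam * f(x)).
import random, math

states = ["aa", "ab", "ba", "bb"]            # toy constrained language
f = lambda x: x.count("a")                   # one feature: number of 'a's
lam = 0.7                                    # current model parameter
weight = lambda x: math.exp(lam * f(x))      # unnormalized probability

def mh_expectation(n_samples=100_000):
    x = random.choice(states)
    total = 0.0
    for _ in range(n_samples):
        y = random.choice(states)            # symmetric (uniform) proposal
        if random.random() < min(1.0, weight(y) / weight(x)):
            x = y                            # accept the proposed move
        total += f(x)
    return total / n_samples                 # ≈ E_p[f]

print(mh_expectation())
```

With a symmetric proposal the acceptance ratio reduces to weight(y)/weight(x), so the normalizing constant is never needed, which is what makes sampling under an unnormalized random field feasible.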
An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities
We describe an extension of Earley's parser for stochastic context-free
grammars that computes the following quantities given a stochastic context-free
grammar and an input string: a) probabilities of successive prefixes being
generated by the grammar; b) probabilities of substrings being generated by the
nonterminals, including the entire string being generated by the grammar; c)
most likely (Viterbi) parse of the string; d) posterior expected number of
applications of each grammar production, as required for reestimating rule
probabilities. (a) and (b) are computed incrementally in a single left-to-right
pass over the input. Our algorithm compares favorably to standard bottom-up
parsing methods for SCFGs in that it works efficiently on sparse grammars by
making use of Earley's top-down control structure. It can process any
context-free rule format without conversion to some normal form, and combines
computations for (a) through (d) in a single algorithm. Finally, the algorithm
has simple extensions for processing partially bracketed inputs, and for
finding partial parses and their likelihoods on ungrammatical inputs.
Comment: 45 pages. Slightly shortened version to appear in Computational Linguistics 21(2)
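
The paper's algorithm computes (a) through (d) via Earley's top-down control without normal-form conversion; as a much simpler, hedged illustration of quantity (c) alone, here is a CKY-style Viterbi computation for a toy SCFG in Chomsky normal form.

```python
# Viterbi (most likely) parse probability for a toy SCFG in CNF via CKY.
from collections import defaultdict

# rules: (lhs, rhs) -> probability; rhs is a 1-tuple (terminal) or 2-tuple.
rules = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("she",)): 1.0,
    ("VP", ("sleeps",)): 1.0,
}

def viterbi_prob(words, start="S"):
    n = len(words)
    best = defaultdict(float)        # (i, j, nonterminal) -> best probability
    for i, w in enumerate(words):    # lexical rules fill spans of length 1
        for (lhs, rhs), p in rules.items():
            if rhs == (w,):
                best[i, i + 1, lhs] = max(best[i, i + 1, lhs], p)
    for span in range(2, n + 1):     # binary rules fill longer spans
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):    # split point
                for (lhs, rhs), p in rules.items():
                    if len(rhs) == 2:
                        cand = p * best[i, k, rhs[0]] * best[k, j, rhs[1]]
                        best[i, j, lhs] = max(best[i, j, lhs], cand)
    return best[0, n, start]

print(viterbi_prob(["she", "sleeps"]))   # 1.0 for this toy grammar
```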
Polynomial Time Algorithms for Multi-Type Branching Processes and Stochastic Context-Free Grammars
We show that one can approximate the least fixed point solution for a
multivariate system of monotone probabilistic polynomial equations in time
polynomial in both the encoding size of the system of equations and in
log(1/ε), where ε > 0 is the desired additive error bound of the
solution. (The model of computation is the standard Turing machine model.)
We use this result to resolve several open problems regarding the
computational complexity of computing key quantities associated with some
classic and heavily studied stochastic processes, including multi-type
branching processes and stochastic context-free grammars.
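
For intuition, consider the single-type special case: the extinction probability of a branching process is the least fixed point of x = f(x), where f is the offspring probability generating function. The naive Kleene iteration sketched below converges monotonically from 0, but is not the paper's polynomial-time algorithm; its slow worst-case convergence is exactly the problem the paper addresses.

```python
# Least fixed point of x = f(x) by Kleene iteration from 0 (illustration only).

def extinction_prob(offspring_probs, tol=1e-12):
    # offspring_probs[k] = probability of k children; f(x) = sum_k p_k x^k
    f = lambda x: sum(p * x**k for k, p in enumerate(offspring_probs))
    x = 0.0
    while abs(f(x) - x) > tol:
        x = f(x)           # monotone: 0 <= x <= f(x) <= least fixed point
    return x

# Each individual has 0, 1, or 2 children with probs 0.25, 0.25, 0.5:
# the least fixed point of x = 0.25 + 0.25x + 0.5x^2 is x = 0.5.
print(extinction_prob([0.25, 0.25, 0.5]))
```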
Probabilistic Constraint Logic Programming
This paper addresses two central problems for probabilistic processing
models: parameter estimation from incomplete data and efficient retrieval of
most probable analyses. These questions have been answered satisfactorily only
for probabilistic regular and context-free models. We address these problems
for a more expressive probabilistic constraint logic programming model. We
present a log-linear probability model for probabilistic constraint logic
programming. On top of this model we define an algorithm to estimate the
parameters and to select the properties of log-linear models from incomplete
data. This algorithm is an extension of the improved iterative scaling
algorithm of Della Pietra, Della Pietra, and Lafferty (1995). Our algorithm applies to log-linear models in general and is accompanied by suitable
approximation methods when applied to large data spaces. Furthermore, we
present an approach for searching for most probable analyses of the
probabilistic constraint logic programming model. This method can be applied to
the ambiguity resolution problem in natural language processing applications.
Comment: 35 pages, uses sfbart.cls
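
A hedged sketch of one ingredient of the paper: the Improved Iterative Scaling (IIS) update of Della Pietra, Della Pietra, and Lafferty for a log-linear model, shown on a toy fully observed space. The paper's contribution, extending IIS to incomplete data and adding property selection, is not reproduced here; the data and features below are illustrative stand-ins.

```python
# Improved Iterative Scaling on a toy space of 3 binary features.
import math
from itertools import product

states = list(product([0, 1], repeat=3))       # toy space: all bit triples
feats = lambda x: list(x)                      # f_i(x) = x_i (all nonnegative)
fsharp = lambda x: sum(x)                      # f#(x) = sum_i f_i(x)

data = [(1, 1, 0), (1, 0, 0), (1, 1, 1), (0, 1, 0)]
target = [sum(s[i] for s in data) / len(data) for i in range(3)]  # E~[f_i]

def model(lam):                                # p_lam(x) ∝ exp(lam · f(x))
    w = {s: math.exp(sum(l * f for l, f in zip(lam, feats(s)))) for s in states}
    z = sum(w.values())
    return {s: v / z for s, v in w.items()}

lam = [0.0, 0.0, 0.0]
for _ in range(100):                           # IIS sweeps
    p = model(lam)
    for i in range(3):
        # Solve sum_x p(x) f_i(x) exp(delta * f#(x)) = E~[f_i] for delta by
        # bisection; the left side is monotone since all f_i are nonnegative.
        lo, hi = -50.0, 50.0
        for _ in range(60):
            mid = (lo + hi) / 2
            val = sum(p[s] * s[i] * math.exp(mid * fsharp(s)) for s in states)
            lo, hi = (mid, hi) if val < target[i] else (lo, mid)
        lam[i] += (lo + hi) / 2

p = model(lam)   # fitted marginals should match the empirical targets
print([round(sum(p[s] * s[i] for s in states), 3) for i in range(3)], target)
```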