Probabilistic Constraint Logic Programming
This paper addresses two central problems for probabilistic processing
models: parameter estimation from incomplete data and efficient retrieval of
most probable analyses. These questions have been answered satisfactorily only
for probabilistic regular and context-free models. We address these problems
for a more expressive probabilistic constraint logic programming model. We
present a log-linear probability model for probabilistic constraint logic
programming. On top of this model we define an algorithm to estimate the
parameters and to select the properties of log-linear models from incomplete
data. This algorithm is an extension of the improved iterative scaling
algorithm of Della-Pietra, Della-Pietra, and Lafferty (1995). Our algorithm
applies to log-linear models in general and is accompanied by suitable
approximation methods when applied to large data spaces. Furthermore, we
present an approach for searching for most probable analyses of the
probabilistic constraint logic programming model. This method can be applied to
the ambiguity resolution problem in natural language processing applications.

Comment: 35 pages, uses sfbart.cl
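The parameter estimation the abstract describes can be illustrated in a toy setting. Below is a minimal, hedged sketch of one improved-iterative-scaling (IIS) style update for a log-linear model p(x) ∝ exp(λ·f(x)) over a small finite space; the three-outcome data and one-hot features are illustrative assumptions, not taken from the paper. With one-hot features every x activates exactly M = 1 feature, so the IIS update takes the closed form δ_i = log(E_emp[f_i] / E_model[f_i]):

```python
import numpy as np

# Toy log-linear model p(x) ∝ exp(lambda . f(x)) on 3 outcomes with
# one-hot features (illustrative; not the paper's CLP setting).
F = np.eye(3)                      # feature vectors f(x), one row per outcome
p_emp = np.array([0.2, 0.3, 0.5])  # empirical feature expectations

lam = np.zeros(3)                  # log-linear parameters, start at 0
for _ in range(5):                 # a few IIS sweeps (exact after one here)
    scores = F @ lam
    p_model = np.exp(scores) / np.exp(scores).sum()
    # IIS update with constant feature count M = 1:
    # delta_i = log(E_emp[f_i] / E_model[f_i])
    lam += np.log(p_emp / (F.T @ p_model))

print(np.round(p_model, 3))  # matches the empirical distribution
```

In this degenerate one-hot case the model can represent the empirical distribution exactly, so a single update suffices; the paper's contribution concerns the much harder incomplete-data setting, where expectations must be approximated.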
Herding as a Learning System with Edge-of-Chaos Dynamics
Herding defines a deterministic dynamical system at the edge of chaos. It
generates a sequence of model states and parameters by alternating parameter
perturbations with state maximizations, where the sequence of states can be
interpreted as "samples" from an associated MRF model. Herding differs from
maximum likelihood estimation in that the sequence of parameters does not
converge to a fixed point and differs from an MCMC posterior sampling approach
in that the sequence of states is generated deterministically. Herding may be
interpreted as a"perturb and map" method where the parameter perturbations are
generated using a deterministic nonlinear dynamical system rather than randomly
from a Gumbel distribution. This chapter studies the distinct statistical
characteristics of the herding algorithm and shows that the fast convergence
rate of the controlled moments may be attributed to edge of chaos dynamics. The
herding algorithm can also be generalized to models with latent variables and
to a discriminative learning setting. The perceptron cycling theorem ensures
that the fast moment matching property is preserved in the more general
framework.
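The alternation of parameter perturbations and state maximizations described above can be sketched on a toy discrete model. The code below is a minimal illustration, with assumed binary states, identity features, and made-up target moments; it is not the chapter's setup. Each step picks the state maximizing w·φ(s), then nudges the weights toward the target moments, and the running average of the emitted states matches those moments at the fast O(1/T) rate:

```python
import numpy as np

# Minimal herding sketch on a toy state space (illustrative assumptions:
# binary states, features phi(s) = s, hand-picked target moments).
states = [np.array(s, dtype=float) for s in
          [(0, 0), (0, 1), (1, 0), (1, 1)]]
target_moments = np.array([0.3, 0.7])  # desired feature averages

w = np.zeros(2)   # herding weights (the "parameters")
samples = []
T = 1000
for _ in range(T):
    # State maximization: pick the state with the highest score w . phi(s)
    s = max(states, key=lambda x: w @ x)
    samples.append(s)
    # Parameter perturbation: move weights toward the target moments
    w += target_moments - s

empirical = np.mean(samples, axis=0)
print(empirical)  # close to target_moments; error shrinks as O(1/T)
```

Note that the whole trajectory is deterministic: unlike MCMC there is no random number generator anywhere, yet the state sequence behaves like samples whose moments match the targets.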
Robust State Space Filtering under Incremental Model Perturbations Subject to a Relative Entropy Tolerance
This paper considers robust filtering for a nominal Gaussian state-space
model, when a relative entropy tolerance is applied to each time increment of a
dynamical model. The problem is formulated as a dynamic minimax game where the
maximizer adopts a myopic strategy. This game is shown to admit a saddle point
whose structure is characterized by applying and extending results presented
earlier in [1] for static least-squares estimation. The resulting minimax
filter takes the form of a risk-sensitive filter with a time varying risk
sensitivity parameter, which depends on the tolerance bound applied to the
model dynamics and observations at the corresponding time index. The
least-favorable model is constructed and used to evaluate the performance of
alternative filters. Simulations comparing the proposed risk-sensitive filter
to a standard Kalman filter show a significant performance advantage when
applied to the least-favorable model, and only a small performance loss for the
nominal model.
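For reference, the standard Kalman filter used as the comparison baseline above can be sketched in a few lines. This is a generic scalar linear-Gaussian example with assumed coefficients and noise levels, not the paper's least-favorable model or its risk-sensitive filter:

```python
import numpy as np

# Scalar Kalman filter for a nominal linear-Gaussian state-space model
#   x_{t+1} = a x_t + w_t,   y_t = x_t + v_t
# (coefficients and noise variances below are illustrative).
rng = np.random.default_rng(0)
a, q, r = 0.9, 0.1, 0.5          # transition, process noise, observation noise

T = 500
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):            # simulate the nominal model
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

xhat, P = 0.0, 1.0
est = np.zeros(T)
for t in range(1, T):
    xpred, Ppred = a * xhat, a * a * P + q   # time update (predict)
    K = Ppred / (Ppred + r)                  # Kalman gain
    xhat = xpred + K * (y[t] - xpred)        # measurement update (correct)
    P = (1 - K) * Ppred
    est[t] = xhat

mse_filter = np.mean((est - x) ** 2)
mse_obs = np.mean((y - x) ** 2)
print(mse_filter < mse_obs)  # filtering beats raw observations on this model
```

The risk-sensitive filter of the paper has the same predict/correct structure but exponentially reweights the estimation error, with a time-varying risk-sensitivity parameter tied to the relative entropy tolerance at each time index.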