Improving variational methods via pairwise linear response identities
Inference methods are often formulated as variational approximations: these approximations allow easy evaluation of statistics by marginalization or linear response, but these estimates can be inconsistent. We show that by introducing constraints on covariance, one can ensure consistency of linear response with the variational parameters, and in so doing inference of marginal probability distributions is improved. For the Bethe approximation and its generalizations, improvements are achieved with simple choices of the constraints. The approximations are presented as variational frameworks; iterative procedures related to message passing are provided for finding the minima.
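As a toy illustration of the variational setting described above (a naive mean-field fixed point, not the paper's constrained Bethe scheme; the model and all parameter values are invented for illustration):

```python
import numpy as np

# Naive mean-field for a small Ising model p(s) ∝ exp(Σ_i h_i s_i + Σ_{i<j} J_ij s_i s_j),
# s_i ∈ {-1,+1}: iterate the self-consistency equations m_i = tanh(h_i + Σ_j J_ij m_j)
# to a fixed point.  The m_i approximate the marginal magnetizations E[s_i].
def mean_field_magnetizations(h, J, iters=500, damping=0.5):
    m = np.zeros_like(h)
    for _ in range(iters):
        m_new = np.tanh(h + J @ m)
        m = damping * m + (1 - damping) * m_new  # damped update for stability
    return m

h = np.array([0.2, -0.1, 0.3])                 # illustrative fields
J = 0.3 * (np.ones((3, 3)) - np.eye(3))        # weak uniform couplings
m = mean_field_magnetizations(h, J)
```

In schemes like the paper's, such fixed-point marginals would additionally be required to be consistent with covariances obtained by linear response (differentiating the marginals with respect to the fields).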
Quantitative Approximation of the Probability Distribution of a Markov Process by Formal Abstractions
The goal of this work is to formally abstract a Markov process evolving in
discrete time over a general state space as a finite-state Markov chain, with
the objective of precisely approximating its state probability distribution in
time, which allows for its approximate, faster computation by that of the
Markov chain. The approach is based on formal abstractions and employs an
arbitrary finite partition of the state space of the Markov process, and the
computation of average transition probabilities between partition sets. The
abstraction technique is formal, in that it comes with guarantees on the
introduced approximation that depend on the diameters of the partitions: as
such, they can be tuned at will. Further, in the case of Markov processes with
unbounded state spaces, a procedure for precisely truncating the state space
within a compact set is provided, together with an error bound that depends on
the asymptotic properties of the transition kernel of the original process. The
overall abstraction algorithm, which practically hinges on piecewise constant
approximations of the density functions of the Markov process, is extended to
higher-order function approximations: these can lead to improved error bounds
and associated lower computational requirements. The approach is practically
tested to compute probabilistic invariance of the Markov process under study,
and is compared to a known alternative approach from the literature.

Comment: 29 pages, Journal of Logical Methods in Computer Science
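A minimal one-dimensional sketch of the abstraction idea (a uniform partition of the state space and a piecewise-constant approximation of a Gaussian transition kernel; the kernel, interval, and cell count are invented for illustration, and this omits the paper's error bounds):

```python
import numpy as np

# Abstract a 1D Markov process with Gaussian transition kernel
# t(x' | x) ∝ N(x'; x, σ²), truncated to [0, 1], into an N-state Markov chain:
# partition [0, 1] into N cells and approximate transition probabilities
# between cells by evaluating the kernel at cell centres (piecewise constant).
def abstract_chain(sigma=0.1, n_cells=50):
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    width = 1.0 / n_cells
    K = np.exp(-((centres[None, :] - centres[:, None]) ** 2) / (2 * sigma**2))
    P = K * width
    P /= P.sum(axis=1, keepdims=True)  # renormalise rows after truncation
    return centres, P

centres, P = abstract_chain()
# Propagate an initial distribution concentrated near x = 0.5 for 10 steps.
p0 = np.zeros(len(centres)); p0[len(centres) // 2] = 1.0
p10 = p0 @ np.linalg.matrix_power(P, 10)
```

The finite chain's distribution `p10` then stands in for the process's state distribution at time 10, with (in the paper's framework) an error controlled by the partition diameters.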
Algorithms for Kullback-Leibler Approximation of Probability Measures in Infinite Dimensions
In this paper we study algorithms to find a Gaussian approximation to a
target measure defined on a Hilbert space of functions; the target measure
itself is defined via its density with respect to a reference Gaussian measure.
We employ the Kullback-Leibler divergence as a distance and find the best
Gaussian approximation by minimizing this distance. It then follows that the
approximate Gaussian must be equivalent to the Gaussian reference measure,
defining a natural function space setting for the underlying calculus of
variations problem. We introduce a computational algorithm which is
well-adapted to the required minimization, seeking to find the mean as a
function, and parameterizing the covariance in two different ways: through low
rank perturbations of the reference covariance; and through Schrödinger
potential perturbations of the inverse reference covariance. Two applications
are shown: to a nonlinear inverse problem in elliptic PDEs, and to a
conditioned diffusion process. We also show how the Gaussian approximations we
obtain may be used to produce improved pCN-MCMC methods which are not only
well-adapted to the high-dimensional setting, but also behave well with respect
to small observational noise (resp. small temperatures) in the inverse problem
(resp. conditioned diffusion).

Comment: 28 pages
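A finite-dimensional toy analogue of the Kullback-Leibler fitting step (a 1D target with an invented potential, Gauss-Hermite quadrature, and a grid search over the Gaussian's mean and standard deviation; the paper's infinite-dimensional algorithm is of course more sophisticated):

```python
import numpy as np

# Best Gaussian fit q = N(μ, s²) to an unnormalised target p(x) ∝ exp(-V(x)),
# found by minimising KL(q || p) up to the unknown log-normaliser.
# E_q[·] is computed with probabilists' Gauss-Hermite quadrature.
def kl_up_to_const(mu, s, V, n=40):
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)  # weight exp(-x²/2)
    x = mu + s * nodes
    logq = -0.5 * np.log(2 * np.pi * s**2) - 0.5 * nodes**2
    # KL(q||p) + log Z = E_q[log q + V]
    return np.sum(weights * (logq + V(x))) / np.sqrt(2 * np.pi)

V = lambda x: 0.25 * x**4 + 0.5 * (x - 1.0) ** 2   # illustrative potential
grid_mu = np.linspace(-2.0, 2.0, 81)
grid_s = np.linspace(0.2, 2.0, 61)
vals = [(kl_up_to_const(m, s, V), m, s) for m in grid_mu for s in grid_s]
best_kl, best_mu, best_s = min(vals)
```

The additive log-normalising constant does not depend on (μ, s), so minimising the quantity above is equivalent to minimising the true KL divergence.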
Calculation of aggregate loss distributions
Estimation of the operational risk capital under the Loss Distribution
Approach requires evaluation of aggregate (compound) loss distributions which
is one of the classic problems in risk theory. Closed-form solutions are not
available for the distributions typically used in operational risk. However
with modern computer processing power, these distributions can be calculated
virtually exactly using numerical methods. This paper reviews numerical
algorithms that can be successfully used to calculate the aggregate loss
distributions. In particular, Monte Carlo, Panjer recursion and Fourier
transformation methods are presented and compared. Also, several closed-form
approximations based on moment matching and asymptotic results for heavy-tailed
distributions are reviewed.
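The Monte Carlo approach the abstract mentions can be sketched as follows, for a compound Poisson loss with lognormal severities (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

# Monte Carlo evaluation of an aggregate (compound) loss S = X_1 + ... + X_N,
# with Poisson(λ) claim frequency and lognormal(μ, σ) severities.
rng = np.random.default_rng(0)

def simulate_aggregate(lam, mu, sigma, n_sims=200_000):
    counts = rng.poisson(lam, size=n_sims)             # claim counts N per scenario
    sev = rng.lognormal(mu, sigma, size=counts.sum())  # all severities X_i at once
    owner = np.repeat(np.arange(n_sims), counts)       # which scenario owns each X_i
    return np.bincount(owner, weights=sev, minlength=n_sims)

lam, mu, sigma = 3.0, 0.0, 0.5
S = simulate_aggregate(lam, mu, sigma)
mean_exact = lam * np.exp(mu + 0.5 * sigma**2)  # E[S] = E[N]·E[X], a sanity check
q99 = np.quantile(S, 0.99)                      # e.g. a high quantile for capital
```

Panjer recursion and Fourier (characteristic-function) methods trade this sampling error for discretisation error and are often preferable in the far tail.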
Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness
Polynomial approximations to boolean functions have led to many positive
results in computer science. In particular, polynomial approximations to the
sign function underlie algorithms for agnostically learning halfspaces, as well
as pseudorandom generators for halfspaces. In this work, we investigate the
limits of these techniques by proving inapproximability results for the sign
function.
Firstly, the polynomial regression algorithm of Kalai et al. (SIAM J. Comput.
2008) shows that halfspaces can be learned with respect to log-concave
distributions on R^n in the challenging agnostic learning model. The
power of this algorithm relies on the fact that under log-concave
distributions, halfspaces can be approximated arbitrarily well by low-degree
polynomials. We ask whether this technique can be extended beyond log-concave
distributions, and establish a negative result. We show that polynomials of any
degree cannot approximate the sign function to within arbitrarily low error for
a large class of non-log-concave distributions on the real line, including
certain explicit densities.
Secondly, we investigate the derandomization of Chernoff-type concentration
inequalities. Chernoff-type tail bounds on sums of independent random variables
have pervasive applications in theoretical computer science. Schmidt et al.
(SIAM J. Discrete Math. 1995) showed that these inequalities can be established
for sums of random variables with only O(log(1/δ))-wise independence,
for a tail probability of δ. We show that their results are tight up to
constant factors.
These results rely on techniques from weighted approximation theory, which
studies how well functions on the real line can be approximated by polynomials
under various distributions. We believe that these techniques will have further
applications in other areas of computer science.

Comment: 22 pages
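The central object of the abstract, the best polynomial approximation to the sign function under a distribution, can be estimated empirically from samples (a sketch in the log-concave Gaussian case, where good approximations do exist; sample sizes and degrees are illustrative):

```python
import numpy as np

# Weighted least-squares polynomial approximation of sign(x) under a
# distribution μ, estimated from samples.  Under a Gaussian (log-concave)
# the achievable error shrinks with degree; the paper's negative results
# rule this out for certain heavy-tailed distributions.
rng = np.random.default_rng(1)

def mean_abs_error(samples, degree):
    y = np.sign(samples)
    coeffs = np.polynomial.polynomial.polyfit(samples, y, degree)
    fit = np.polynomial.polynomial.polyval(samples, coeffs)
    return np.mean(np.abs(fit - y))  # empirical E_μ |p(x) - sign(x)|

x = rng.standard_normal(50_000)  # Gaussian samples: the learnable case
errs = [mean_abs_error(x, d) for d in (1, 3, 7, 13)]
```

Swapping the Gaussian samples for draws from a suitable heavy-tailed density on the real line is exactly the regime where, by the results above, no degree suffices to drive this error arbitrarily low.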