Ensemble Transport Adaptive Importance Sampling
Markov chain Monte Carlo methods are a powerful and commonly used family of
numerical methods for sampling from complex probability distributions. As
applications of these methods increase in size and complexity, the need for
efficient methods increases. In this paper, we present a particle ensemble
algorithm. At each iteration, an importance sampling proposal distribution is
formed using an ensemble of particles. A stratified sample is taken from this
distribution and weighted under the posterior; a state-of-the-art ensemble
transport resampling method is then used to create an evenly weighted sample
ready for the next iteration. We demonstrate that this ensemble transport
adaptive importance sampling (ETAIS) method outperforms MCMC methods with
equivalent proposal distributions for low-dimensional problems, and in fact
shows better-than-linear improvements in convergence rates with respect to the
number of ensemble members. We also introduce a new resampling strategy,
multinomial transformation (MT), which, while not as accurate as the ensemble
transport resampler, is substantially less costly for large ensemble sizes, and
can then be used in conjunction with ETAIS for complex problems. We also focus
on how algorithmic parameters regarding the mixture proposal can be quickly
tuned to optimise performance. In particular, we demonstrate this methodology's
superior sampling for multimodal problems, such as those arising from inference
for mixture models, and for problems with expensive likelihoods requiring the
solution of a differential equation, for which speed-ups of orders of magnitude
are demonstrated. Likelihood evaluations of the ensemble could be computed in a
distributed manner, suggesting that this methodology is a good candidate for
parallel Bayesian computations.
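The iteration described in this abstract can be pictured with a minimal stand-in, assuming a toy one-dimensional bimodal target; plain multinomial resampling replaces the ensemble transport resampler, and an i.i.d. draw from the mixture replaces the stratified sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy bimodal target: equal mixture of N(-2, 0.5^2) and N(2, 0.5^2)
    return np.logaddexp(-0.5 * ((x + 2) / 0.5) ** 2,
                        -0.5 * ((x - 2) / 0.5) ** 2)

N = 200          # ensemble size
h = 0.5          # proposal kernel bandwidth
ensemble = rng.normal(0.0, 3.0, size=N)   # initial particle ensemble

for _ in range(50):
    # Mixture proposal: equal-weight Gaussian kernels on the current particles
    # (i.i.d. component draws here, in place of the paper's stratified sample)
    centres = ensemble[rng.integers(0, N, size=N)]
    proposals = centres + h * rng.standard_normal(N)
    # Evaluate the mixture proposal density at each proposed point
    diffs = (proposals[:, None] - ensemble[None, :]) / h
    log_q = np.log(np.exp(-0.5 * diffs ** 2).mean(axis=1)
                   / (h * np.sqrt(2 * np.pi)))
    # Weight under the (unnormalised) target and self-normalise
    log_w = log_target(proposals) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Multinomial resampling stands in for the ensemble transport resampler,
    # producing an evenly weighted ensemble for the next iteration
    ensemble = proposals[rng.choice(N, size=N, p=w)]

print(ensemble.mean())  # roughly 0 for this symmetric bimodal target
```

Because the importance weights correct for whichever mode the proposal mixture currently over-represents, the resampled ensemble tends to keep both modes populated, which is the mechanism the multimodal experiments in the abstract rely on.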
Quantile estimation with adaptive importance sampling
We introduce new quantile estimators with adaptive importance sampling. The
adaptive estimators are based on weighted samples that are neither independent
nor identically distributed. Using a new law of iterated logarithm for
martingales, we prove the convergence of the adaptive quantile estimators for
general distributions with nonunique quantiles, thereby extending the work of
Feldman and Tucker [Ann. Math. Statist. 37 (1966) 451--457]. We illustrate the
algorithm with an example from credit portfolio risk analysis.
Comment: Published at http://dx.doi.org/10.1214/09-AOS745 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
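The kind of weighted-sample quantile estimator these adaptive estimators build on can be sketched as follows; the function name and the Gaussian tail example are illustrative, not from the paper:

```python
import numpy as np

def weighted_quantile(x, w, p):
    """Quantile estimate from weighted (e.g. importance-weighted) samples.

    Sort the samples, accumulate the normalised weights, and return the
    first sample whose cumulative weight reaches probability level p.
    """
    order = np.argsort(x)
    x, w = np.asarray(x)[order], np.asarray(w)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return x[np.searchsorted(cdf, p)]

rng = np.random.default_rng(1)
# Importance sampling aimed at an upper tail: propose from a shifted Gaussian
z = rng.normal(2.0, 1.0, size=100_000)                  # proposal N(2, 1)
w = np.exp(-0.5 * z**2) / np.exp(-0.5 * (z - 2.0)**2)   # target N(0,1) / proposal
print(weighted_quantile(z, w, 0.99))  # close to the N(0,1) 99% quantile (about 2.33)
```

Shifting the proposal toward the tail puts many samples near the quantile of interest, which is why importance sampling helps for tail quantiles such as those in credit portfolio risk.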
Faster Coordinate Descent via Adaptive Importance Sampling
Coordinate descent methods employ random partial updates of decision
variables in order to solve huge-scale convex optimization problems. In this
work, we introduce new adaptive rules for the random selection of their
updates. By adaptive, we mean that our selection rules are based on the dual
residual or the primal-dual gap estimates and can change at each iteration. We
theoretically characterize the performance of our selection rules and
demonstrate improvements over the state-of-the-art, and extend our theory and
algorithms to general convex objectives. Numerical evidence with hinge-loss
support vector machines and Lasso confirms that the practice follows the theory.
Comment: appearing at AISTATS 201
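One hedged way to picture such adaptive selection rules is the sketch below, where a per-coordinate optimality violation stands in for the paper's dual-residual scores; the Lasso instance, `lam`, and iteration budget are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, lam = 100, 20, 5.0
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
x = np.zeros(d)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

col_sq = (A ** 2).sum(axis=0)  # coordinate-wise curvature ||A_j||^2

for _ in range(2000):
    grad = A.T @ (A @ x - b)
    # Per-coordinate optimality violation (proximal-gradient residual):
    # a simple stand-in for the dual-residual scores used in the paper
    viol = np.abs(x - soft_threshold(x - grad / col_sq, lam / col_sq))
    if viol.sum() < 1e-10:
        break
    # Adaptive rule: sample a coordinate in proportion to its violation
    j = rng.choice(d, p=viol / viol.sum())
    # Exact minimisation of 0.5*||Ax - b||^2 + lam*||x||_1 in coordinate j
    r = b - A @ x + A[:, j] * x[j]
    x[j] = soft_threshold(A[:, j] @ r, lam) / col_sq[j]
```

Coordinates that already satisfy their optimality condition get zero selection probability, so effort concentrates on the coordinates that still matter; the selection distribution changes at each iteration, exactly the "adaptive" property described above.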
Adaptive Importance Sampling in General Mixture Classes
In this paper, we propose an adaptive algorithm that iteratively updates both
the weights and component parameters of a mixture importance sampling density
so as to optimise importance sampling performance, as measured by an
entropy criterion. The method is shown to be applicable to a wide class of
importance sampling densities, which includes in particular mixtures of
multivariate Student t distributions. The performance of the proposed scheme
is studied on both artificial and real examples, highlighting in particular
the benefit of a novel Rao-Blackwellisation device which can be easily
incorporated in the updating scheme.
Comment: Removed misleading comment in Section
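The mixture-update idea can be sketched as follows, using Gaussian rather than Student t components and holding the scales fixed for brevity; the responsibility-weighted update of the mixture weights and means reflects the Rao-Blackwellisation device in spirit, with all details of the example (target, initial values, iteration count) being illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def target_pdf(x):
    # Example bimodal target: 0.3 N(-3,1) + 0.7 N(3,1)
    return (0.3 * np.exp(-0.5 * (x + 3) ** 2) +
            0.7 * np.exp(-0.5 * (x - 3) ** 2)) / np.sqrt(2 * np.pi)

K, N = 2, 5000
alpha = np.full(K, 1.0 / K)      # mixture weights
mu = np.array([-1.0, 1.0])       # component means
sigma = np.array([2.0, 2.0])     # component scales (held fixed here)

for _ in range(20):
    # Draw from the current mixture proposal
    comp = rng.choice(K, size=N, p=alpha)
    x = mu[comp] + sigma[comp] * rng.standard_normal(N)
    # Component densities and the mixture density at each sample
    dens = (np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
            / (sigma * np.sqrt(2 * np.pi)))
    q = dens @ alpha
    w = target_pdf(x) / q
    w /= w.sum()
    # Rao-Blackwellised responsibilities: use all components, not just the
    # sampled labels, when re-estimating the mixture parameters
    resp = dens * alpha / q[:, None]          # P(component | x)
    alpha = (w[:, None] * resp).sum(axis=0)
    mu = (w[:, None] * resp * x[:, None]).sum(axis=0) / alpha

print(alpha, mu)  # weights and means should drift toward (0.3, 0.7) and (-3, 3)
```

Averaging over component memberships rather than conditioning on the sampled labels removes a source of Monte Carlo noise from the update, which is the benefit the abstract attributes to the Rao-Blackwellisation device.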
Robust Covariance Adaptation in Adaptive Importance Sampling
Importance sampling (IS) is a Monte Carlo methodology that allows for
approximation of a target distribution using weighted samples generated from
another proposal distribution. Adaptive importance sampling (AIS) implements an
iterative version of IS which adapts the parameters of the proposal
distribution in order to improve estimation of the target. While the adaptation
of the location (mean) of the proposals has been largely studied, an important
challenge of AIS relates to the difficulty of adapting the scale parameter
(covariance matrix). In the case of weight degeneracy, adapting the covariance
matrix using the empirical covariance results in a singular matrix, which leads
to poor performance in subsequent iterations of the algorithm. In this paper,
we propose a novel scheme which exploits recent advances in the IS literature
to prevent the so-called weight degeneracy. The method efficiently adapts the
covariance matrix of a population of proposal distributions and achieves a
significant performance improvement in high-dimensional scenarios. We validate
the new method through computer simulations.
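A minimal sketch of the failure mode and one guarded estimator: here clipping the largest log-weights (a nonlinear weight transformation) and adding a small diagonal ridge are stand-ins for the paper's scheme, and all names and constants are illustrative:

```python
import numpy as np

def robust_weighted_cov(x, logw, clip_frac=0.1, eps=1e-3):
    """Weighted covariance estimate guarded against weight degeneracy.

    Two stand-ins for a robust adaptation rule: clip the top fraction of
    log-weights to a common cap, so no single sample dominates, and add a
    small diagonal ridge so the matrix stays positive definite.
    """
    n, d = x.shape
    k = max(1, int(clip_frac * n))
    cap = np.sort(logw)[-k]            # cap at the k-th largest log-weight
    w = np.exp(np.minimum(logw, cap) - cap)
    w /= w.sum()
    mean = w @ x
    xc = x - mean
    cov = (w[:, None] * xc).T @ xc
    return cov + eps * np.eye(d)

# Degenerate weights: one sample carries essentially all the raw weight,
# so the naive weighted covariance would be (numerically) rank one
rng = np.random.default_rng(4)
x = rng.standard_normal((500, 5))
logw = np.full(500, -10.0)
logw[0] = 50.0
cov = robust_weighted_cov(x, logw)
print(np.linalg.matrix_rank(cov))  # full rank (5) despite the degenerate weights
```

Without the clipping and the ridge, the empirical covariance here collapses onto a single sample, exactly the singular-matrix failure the abstract describes for subsequent iterations of AIS.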