
    Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions

    Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited as an automatic algorithm for generating points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of density evaluations increases slowly with dimension.
    Series: Preprint Series / Department of Applied Statistics and Data Processing
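
    The combination described above can be sketched generically: Hit-and-Run moves are made inside the (mode-shifted) Ratio-of-Uniforms region, whose membership test only needs the density oracle. The sketch below is not the authors' implementation; it uses a slice-sampling-style shrinking bracket along each random chord, and the names rou_indicator, hit_and_run_rou, and the bracket width r0 are illustrative assumptions.

```python
import numpy as np

def rou_indicator(f, mode, z):
    """Membership test for the mode-shifted Ratio-of-Uniforms region:
    z = (u, v_1, ..., v_d) lies inside iff 0 < u <= f(mode + v/u)^(1/(d+1))."""
    u, v = z[0], z[1:]
    if u <= 0.0:
        return False
    return u <= f(mode + v / u) ** (1.0 / (v.size + 1))

def hit_and_run_rou(f, mode, n_samples, r0=1.0, rng=None):
    """Hit-and-Run over the RoU region; returns samples x = mode + v/u.
    Only a density oracle f (not necessarily normalized) and an
    approximate mode are needed."""
    rng = np.random.default_rng() if rng is None else rng
    mode = np.asarray(mode, dtype=float)
    d = mode.size
    z = np.zeros(d + 1)
    z[0] = 0.5 * f(mode) ** (1.0 / (d + 1))   # interior start (v = 0)
    samples = []
    for _ in range(n_samples):
        direction = rng.normal(size=d + 1)
        direction /= np.linalg.norm(direction)
        # randomly positioned bracket of width r0, shrunk on rejection
        lo = -rng.uniform(0.0, r0)
        hi = lo + r0
        while True:
            t = rng.uniform(lo, hi)
            if rou_indicator(f, mode, z + t * direction):
                z = z + t * direction
                break
            if t < 0.0:
                lo = t
            else:
                hi = t
        samples.append(mode + z[1:] / z[0])
    return np.array(samples)
```

    For instance, with f = lambda x: np.exp(-0.5 * x @ x) and mode = np.zeros(3), the returned points should approximate a standard trivariate normal.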

    Uniform sampling of steady states in metabolic networks: heterogeneous scales and rounding

    The uniform sampling of convex polytopes is an interesting computational problem with many applications in inference from linear constraints, but the performance of sampling algorithms can be affected by ill-conditioning. This is the case when inferring the feasible steady states in models of metabolic networks, since these can show heterogeneous time scales. In this work we focus on rounding procedures based on building an ellipsoid that closely matches the sampling space, which can then be used to define an efficient hit-and-run (HR) Markov chain Monte Carlo sampler. In this way the uniformity of the sampling of the convex space of interest is rigorously guaranteed, in contrast with non-Markovian methods. We analyze and compare three rounding methods in order to sample the feasible steady states of three metabolic network models of growing size, up to genome scale. The first is based on principal component analysis (PCA), the second on linear programming (LP), and the third on the Lovász ellipsoid method (LEM). Our results show that a rounding procedure is mandatory for the application of HR to these inference problems and suggest that a combination of LEM or LP with a subsequent PCA performs best. We finally compare the distributions obtained with HR against those of two heuristics based on artificially centered hit-and-run (ACHR), gpSampler and optGpSampler: they show good agreement with the HR results for the small network, but present inconsistencies on genome-scale models.
    Comment: Replacement with major revision
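
    A rounded hit-and-run step over a polytope {x : Ax <= b} can be written compactly, since the chord through the current point follows exactly from the linear constraints. The sketch below is a generic illustration, not the paper's code: x0 is assumed to be a strictly interior point (e.g. a Chebyshev center from an LP), the rounding transform T stands in for whatever PCA, LP, or Lovász-ellipsoid factor is used, and for a metabolic network the polytope would first be obtained by restricting the flux bounds to the null space of the stoichiometric matrix.

```python
import numpy as np

def hit_and_run_polytope(A, b, x0, n_samples, T=None, rng=None):
    """Uniform Hit-and-Run inside the bounded polytope {x : A x <= b}.
    T is an optional rounding transform (e.g. a Cholesky factor of a
    sample covariance or of a fitted ellipsoid); the walk runs in the
    rounded coordinates y, with x = x0 + T y."""
    rng = np.random.default_rng() if rng is None else rng
    x0 = np.asarray(x0, dtype=float)
    n = A.shape[1]
    T = np.eye(n) if T is None else T
    A_t = A @ T                   # constraints in the rounded coordinates
    b_t = b - A @ x0              # y = 0 corresponds to the interior point x0
    y = np.zeros(n)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=n)
        d /= np.linalg.norm(d)
        ad = A_t @ d
        slack = b_t - A_t @ y     # non-negative while y stays feasible
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = slack / ad
        t_max = np.min(ratios[ad > 1e-12], initial=np.inf)   # finite if bounded
        t_min = np.max(ratios[ad < -1e-12], initial=-np.inf)
        y = y + rng.uniform(t_min, t_max) * d
        samples.append(x0 + T @ y)
    return np.array(samples)
```

    Taking T = np.linalg.cholesky(np.cov(warmup_samples.T)) from a short warm-up run would be a PCA-style choice of rounding transform, in the spirit of the combinations compared in the abstract.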

    Variable Metric Random Pursuit

    We consider unconstrained randomized optimization of smooth convex objective functions in the gradient-free setting. We analyze Random Pursuit (RP) algorithms with fixed (F-RP) and variable metric (V-RP). The algorithms only use zeroth-order information about the objective function and compute an approximate solution by repeated optimization over randomly chosen one-dimensional subspaces. The distribution of search directions is dictated by the chosen metric. Variable Metric RP uses novel variants of a randomized zeroth-order Hessian approximation scheme recently introduced by Leventhal and Lewis (D. Leventhal and A. S. Lewis, Optimization 60(3), 329--345, 2011). Here we present (i) a refined analysis of the expected single-step progress of RP algorithms and their global convergence on (strictly) convex functions and (ii) novel convergence bounds for V-RP on strongly convex functions. We also quantify how well the employed metric needs to match the local geometry of the function in order for the RP algorithms to converge with the best possible rate. Our theoretical results are accompanied by numerical experiments comparing V-RP with the derivative-free schemes CMA-ES, Implicit Filtering, Nelder-Mead, NEWUOA, Pattern-Search and Nesterov's gradient-free algorithms.
    Comment: 42 pages, 6 figures, 15 tables, submitted to journal, Version 3: majorly revised second part, i.e. Section 5 and Appendix
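
    The core RP iteration is short enough to sketch: draw a direction from the current metric and move to an approximate one-dimensional minimizer along it, using function values only. The snippet below is a sketch of the fixed-metric idea under those assumptions, not the authors' code; the variable-metric variant would additionally update the metric from a zeroth-order Hessian estimate, which is omitted here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def random_pursuit(f, x0, n_iters, metric=None, rng=None):
    """Fixed-metric Random Pursuit: repeated derivative-free 1-D
    minimization along directions drawn from a (possibly anisotropic)
    metric B, i.e. u = B^(1/2) g with g ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    L = np.eye(x.size) if metric is None else np.linalg.cholesky(metric)
    for _ in range(n_iters):
        u = L @ rng.normal(size=x.size)
        u /= np.linalg.norm(u)
        # zeroth-order line search along u (Brent's method uses only f-values)
        res = minimize_scalar(lambda t: f(x + t * u))
        x = x + res.x * u
    return x

# Example: an ill-conditioned quadratic, isotropic metric vs. a matched one.
H = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ H @ x
x_iso = random_pursuit(f, [3.0, 3.0], 300)
x_met = random_pursuit(f, [3.0, 3.0], 300, metric=np.linalg.inv(H))
```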

    Radiative transfer on hierarchical grids

    We present new methods for radiative transfer on hierarchical grids. We develop a new method for calculating the scattered flux that employs the grid structure to speed up the computation, and we describe a novel subiteration algorithm that can be used to accelerate calculations with strong dust temperature self-coupling. We compute two test models, a molecular cloud and a circumstellar disc, and compare the accuracy and speed of the new algorithms against existing methods. An adaptive model of the molecular cloud with less than 8% of the cells in the uniform grid produced results in good agreement with the full-resolution model. The relative RMS error of the surface brightness was <4% at all wavelengths, and in regions of high column density the relative RMS error was only 10^{-4}. Computation with the adaptive model was faster by a factor of ~5. The new method for calculating the scattered flux is faster by a factor of ~4 in large models with a deep hierarchy structure, when images of the scattered light are computed towards several observing directions. The efficiency of the subiteration algorithm is highly dependent on the details of the model. In the circumstellar disc test the speed-up was a factor of two, but much larger gains are possible. The algorithm is expected to be most beneficial in models where a large number of small, dense regions are embedded in an environment with a lower mean density.
    Comment: Accepted to A&A; 13 pages, 8 figures (v2: minor typos corrected)

    Determination of the chemical potential using energy-biased sampling

    An energy-biased method to evaluate ensemble averages requiring test-particle insertion is presented. The method is based on biasing the sampling within the subdomains of the test-particle configurational space whose energies are smaller than a freely assigned value. These energy wells are located via unbiased random insertion over the whole configurational space and are sampled using the so-called Hit&Run algorithm, which uniformly samples compact regions of any shape immersed in a space of arbitrary dimension. Because the bias is defined in terms of the energy landscape, it can be exactly corrected to obtain the unbiased distribution. The test-particle energy distribution is then combined with the Bennett relation for the evaluation of the chemical potential. We apply this protocol to a system with a relatively small probability of low-energy test-particle insertion, liquid argon at high density and low temperature, and show that the energy-biased Bennett method is around five times more efficient than the standard Bennett method. A similar performance gain is observed in the reconstruction of the energy distribution.
    Comment: 10 pages, 4 figures
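
    The final step described above, combining insertion and removal energy samples through the Bennett relation, can be sketched independently of the energy-biased sampling itself. The snippet below is a generic Bennett acceptance-ratio solver, assuming equal numbers of forward (test-particle insertion) and reverse (particle removal) samples; the function name bennett_delta_f and the bracketing width are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def bennett_delta_f(w_forward, w_reverse, beta):
    """Bennett acceptance ratio (BAR) estimate of a free-energy difference.
    w_forward: test-particle insertion energies sampled in the N-particle
    system; w_reverse: minus the removal energies of a tagged particle,
    sampled in the (N+1)-particle system.  With equal sample sizes the
    returned Delta F is the excess chemical potential."""
    w_f = np.asarray(w_forward, dtype=float)
    w_r = np.asarray(w_reverse, dtype=float)

    def fermi(x):
        return 1.0 / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))

    def imbalance(df):
        # Bennett self-consistency: forward and reverse Fermi sums balance.
        return (fermi(beta * (w_f - df)).sum()
                - fermi(beta * (w_r + df)).sum())

    lo = min(w_f.min(), -w_r.max()) - 50.0 / beta
    hi = max(w_f.max(), -w_r.min()) + 50.0 / beta
    return brentq(imbalance, lo, hi)
```

    brentq is applicable here because the imbalance is monotonically increasing in df, so any bracket wide enough to change sign contains the unique root.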