Non-Reversible Parallel Tempering: a Scalable Highly Parallel MCMC Scheme
Parallel tempering (PT) methods are a popular class of Markov chain Monte
Carlo schemes used to sample complex high-dimensional probability
distributions. They rely on a collection of interacting auxiliary chains
targeting tempered versions of the target distribution to improve the
exploration of the state-space. We provide here a new perspective on these
highly parallel algorithms and their tuning by identifying and formalizing a
sharp divide in the behaviour and performance of reversible versus
non-reversible PT schemes. We show theoretically and empirically that a class
of non-reversible PT methods dominates its reversible counterparts and identify
distinct scaling limits for the non-reversible and reversible schemes, the
former being a piecewise-deterministic Markov process and the latter a
diffusion. These results are exploited to identify the optimal annealing
schedule for non-reversible PT and to develop an iterative scheme approximating
this schedule. We provide a wide range of numerical examples supporting our
theoretical and methodological contributions. The proposed methodology is
applicable to sampling from a distribution $\pi$ admitting a density with
respect to a reference distribution $\pi_0$ and to computing the normalizing
constant. A typical use case is when $\pi_0$ is a prior distribution, $L$ a
likelihood function and $\pi \propto \pi_0 L$ the corresponding posterior.

Comment: 74 pages, 30 figures. The method is implemented in an open source
probabilistic programming language available at
https://github.com/UBC-Stat-ML/blangSD
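As a concrete illustration of the non-reversible ingredient, the sketch below implements parallel tempering with a deterministic even-odd (DEO) swap schedule on a toy bimodal target. The target, the linear annealing schedule, and the random-walk local moves are illustrative assumptions, not the authors' implementation or the Blang code linked above.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Negative log density of a toy bimodal target:
    # an equal mixture of N(-4, 1) and N(4, 1).
    return -np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

betas = np.linspace(0.05, 1.0, 8)     # assumed linear annealing schedule
xs = rng.normal(size=betas.size)      # one chain per temperature

def local_move(x, beta, step=1.0):
    # Random-walk Metropolis step targeting exp(-beta * energy(x)).
    prop = x + step * rng.normal()
    if np.log(rng.uniform()) < beta * (energy(x) - energy(prop)):
        return prop
    return x

for it in range(5_000):
    xs = np.array([local_move(x, b) for x, b in zip(xs, betas)])
    # Non-reversible ingredient: deterministically alternate between
    # even swaps (0,1),(2,3),... and odd swaps (1,2),(3,4),...
    for i in range(it % 2, betas.size - 1, 2):
        log_ratio = (betas[i + 1] - betas[i]) * (energy(xs[i + 1]) - energy(xs[i]))
        if np.log(rng.uniform()) < log_ratio:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]

print("final state of the target-temperature chain:", xs[-1])
```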
Controlled Sequential Monte Carlo
Sequential Monte Carlo methods, also known as particle methods, are a popular
set of techniques for approximating high-dimensional probability distributions
and their normalizing constants. These methods have found numerous applications
in statistics and related fields; e.g. for inference in non-linear non-Gaussian
state space models, and in complex static models. Like many Monte Carlo
sampling schemes, they rely on proposal distributions which crucially impact
their performance. We introduce here a class of controlled sequential Monte
Carlo algorithms, where the proposal distributions are determined by
approximating the solution to an associated optimal control problem using an
iterative scheme. This method builds upon a number of existing algorithms in
econometrics, physics, and statistics for inference in state space models, and
generalizes these methods so as to accommodate complex static models. We
provide a theoretical analysis concerning the fluctuation and stability of this
methodology that also provides insight into the properties of related
algorithms. We demonstrate significant gains over state-of-the-art methods at a
fixed computational complexity on a variety of applications.
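For context, the sketch below implements plain (bootstrap) sequential Monte Carlo for an assumed linear-Gaussian state space model, showing the proposal distribution and the normalizing-constant estimate that controlled SMC is designed to improve; it is a baseline under stated assumptions, not the authors' controlled algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed model: x_t = 0.9 x_{t-1} + N(0, 1),  y_t = x_t + N(0, 0.5^2).
T, N = 50, 500
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = x_true + 0.5 * rng.normal(size=T)

particles = rng.normal(size=N)   # initial particles from the prior
log_Z = 0.0                      # running log normalizing-constant estimate
for t in range(T):
    # Bootstrap proposal: sample from the transition density.  Controlled
    # SMC would instead iteratively twist this proposal toward the data.
    if t > 0:
        particles = 0.9 * particles + rng.normal(size=N)
    log_w = -0.5 * ((y[t] - particles) / 0.5) ** 2  # Gaussian log likelihood, up to a constant
    m = log_w.max()
    w = np.exp(log_w - m)
    log_Z += m + np.log(w.mean())                   # incremental evidence estimate
    idx = rng.choice(N, size=N, p=w / w.sum())      # multinomial resampling
    particles = particles[idx]

print("log normalizing constant (up to an additive constant):", log_Z)
```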
A Survey on Compiler Autotuning using Machine Learning
Since the mid-1990s, researchers have been trying to use machine
learning-based approaches to solve a number of different compiler optimization problems.
These techniques primarily enhance the quality of the obtained results and,
more importantly, make it feasible to tackle two main compiler optimization
problems: optimization selection (choosing which optimizations to apply) and
phase-ordering (choosing the order of applying optimizations). The compiler
optimization space continues to grow due to the advancement of applications,
increasing number of compiler optimizations, and new target architectures.
Generic optimization passes in compilers cannot fully leverage newly introduced
optimizations and, therefore, cannot keep up with the pace of increasing
options. This survey summarizes and classifies the recent advances in using
machine learning for the compiler optimization field, particularly on the two
major problems of (1) selecting the best optimizations and (2) the
phase-ordering of optimizations. The survey highlights the approaches taken so
far, the obtained results, the fine-grain classification among different
approaches and, finally, the influential papers of the field.

Comment: version 5.0 (updated September 2018). Preprint version of the
article accepted at ACM CSUR 2018 (42 pages). This survey will be updated
quarterly here (send me your newly published papers to be added in a
subsequent version). History: Received November 2016; Revised August 2017;
Revised February 2018; Accepted March 2018.
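As a hypothetical illustration of optimization selection as supervised learning, in the spirit of the approaches the survey classifies, the sketch below trains a classifier mapping program features to a preferred optimization flag. The feature names, synthetic labels, and random-forest choice are assumptions, not taken from any specific surveyed paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Toy static features per program: (num_loops, avg_loop_depth, num_branches).
X = rng.integers(0, 20, size=(200, 3)).astype(float)
# Synthetic "best flag" label: pretend loop-heavy programs prefer -O3.
y = np.where(X[:, 0] * X[:, 1] > 50, "-O3", "-O2")

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_program = np.array([[12.0, 6.0, 3.0]])   # features of an unseen program
print("predicted flag:", model.predict(new_program)[0])
```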
Optimization of Discrete-parameter Multiprocessor Systems using a Novel Ergodic Interpolation Technique
Modern multi-core systems have a large number of design parameters, most of
which are discrete-valued, and this number is likely to keep increasing as chip
complexity rises. Further, the accurate evaluation of a potential design choice
is computationally expensive because it requires detailed cycle-accurate system
simulation. If the discrete parameter space can be embedded into a larger
continuous parameter space, then continuous space techniques can, in principle,
be applied to the system optimization problem. Such continuous space techniques
often scale well with the number of parameters.
We propose a novel technique for embedding the discrete parameter space into
an extended continuous space so that continuous space techniques can be applied
to the embedded problem using cycle accurate simulation for evaluating the
objective function. This embedding is implemented using simulation-based
ergodic interpolation, which, unlike spatial interpolation, produces the
interpolated value within a single simulation run irrespective of the number of
parameters. We have implemented this interpolation scheme in a cycle-based
system simulator. In a characterization study, we observe that the interpolated
performance curves are continuous, piece-wise smooth, and have low statistical
error. We use the ergodic interpolation-based approach to solve a large
multi-core design optimization problem with 31 design parameters. Our results
indicate that continuous space optimization using ergodic interpolation-based
embedding can be a viable approach for large multi-core design optimization
problems.

Comment: A short version of this paper will be published in the proceedings
of the IEEE MASCOTS 2015 conference.
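One plausible minimal reading of the ergodic-interpolation idea is sketched below: a continuous parameter value p is realized within a single run by randomly switching between the neighbouring discrete values floor(p) and floor(p)+1, with dwell fractions given by the fractional part of p, so the run's ergodic average approximates an interpolated objective. The toy throughput "simulator" stands in for the paper's cycle-accurate system simulator and is purely an assumption.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def simulate(p_continuous, n_cycles=50_000):
    lo, frac = int(math.floor(p_continuous)), p_continuous % 1.0
    # Per cycle, realize discrete capacity lo or lo+1 with
    # probabilities (1 - frac, frac).
    capacity = lo + (rng.uniform(size=n_cycles) < frac)
    # Toy workload: ~3 requests per cycle; serve up to capacity.
    served = np.minimum(capacity, rng.poisson(3.0, size=n_cycles))
    return served.mean()   # ergodic average throughput over one run

# The objective is now defined at non-integer capacities, so continuous
# optimizers can be applied; e.g. compare capacities 2, 2.5, and 3:
for p in (2.0, 2.5, 3.0):
    print(p, round(simulate(p), 3))
```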
A statistical model for in vivo neuronal dynamics
Single neuron models have a long tradition in computational neuroscience.
Detailed biophysical models such as the Hodgkin-Huxley model as well as
simplified neuron models such as the class of integrate-and-fire models relate
the input current to the membrane potential of the neuron. These types of
models have been extensively fitted to in vitro data, where the input current
is controlled. They are, however, of little use for characterizing
intracellular in vivo recordings, since the input to the neuron is not known.
Here we propose a novel single neuron model that characterizes the
statistical properties of in vivo recordings. More specifically, we propose a
stochastic process where the subthreshold membrane potential follows a Gaussian
process and the spike emission intensity depends nonlinearly on the membrane
potential as well as the spiking history. We first show that the model has a
rich dynamical repertoire since it can capture arbitrary subthreshold
autocovariance functions, firing-rate adaptations as well as arbitrary shapes
of the action potential. We then show that this model can be efficiently fitted
to data without overfitting. Finally, we show that this model can be used to
characterize and therefore precisely compare various intracellular in vivo
recordings from different animals and experimental conditions.

Comment: 31 pages, 10 figures.
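A minimal generative sketch of the model described above: the subthreshold potential follows a Gaussian process (here an Ornstein-Uhlenbeck process, an assumed choice), and spikes are emitted with an intensity that depends nonlinearly on the potential and on the spike history through an adaptation term. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, T = 1e-3, 5.0                        # 1 ms time steps, 5 s of simulated time
n = int(T / dt)
tau, sigma = 0.02, 30.0                  # OU time constant (s) and noise scale
v_rest, theta, dv = -60.0, -57.0, 2.0    # resting level, soft threshold, sharpness (mV)
r0 = 20.0                                # base firing rate at threshold (Hz)

v = np.full(n, v_rest)
adapt = np.zeros(n)                      # spike-history adaptation (suppresses firing)
spikes = np.zeros(n, dtype=bool)
for t in range(1, n):
    # Gaussian subthreshold dynamics (an Ornstein-Uhlenbeck process).
    v[t] = v[t-1] + dt / tau * (v_rest - v[t-1]) + sigma * np.sqrt(dt) * rng.normal()
    # Exponentially decaying spike-history term (100 ms time constant).
    adapt[t] = adapt[t-1] * np.exp(-dt / 0.1)
    # Nonlinear intensity in the potential, reduced by the spike history.
    lam = r0 * np.exp((v[t] - theta) / dv - adapt[t])
    if rng.uniform() < 1.0 - np.exp(-lam * dt):   # Bernoulli spike in this bin
        spikes[t] = True
        adapt[t] += 2.0

print("simulated firing rate (Hz):", spikes.sum() / T)
```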
High-ISO long-exposure image denoising based on quantitative blob characterization
Blob detection and image denoising are fundamental, and sometimes related, tasks in computer vision. In this paper, we present a computational method to quantitatively measure blob characteristics using normalized unilateral second-order Gaussian kernels. This method suppresses non-blob structures while yielding a quantitative measurement of the position, prominence, and scale of blobs, which can facilitate the tasks of blob reconstruction and blob reduction. Subsequently, we propose a denoising scheme to address high-ISO long-exposure noise, which often spatially exhibits a blob-like appearance, employing blob reduction as a cheap preprocessing step for conventional denoising methods. We apply the proposed denoising methods to real-world noisy images as well as standard images corrupted by real noise. The experimental results demonstrate the superiority of the proposed methods over state-of-the-art denoising methods.
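For orientation, the sketch below computes a scale-normalized second-order Gaussian (Laplacian-of-Gaussian) blob response, the standard bilateral operator that the paper's normalized unilateral kernels refine; the unilateral construction and the denoising pipeline themselves are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(5)

# Synthetic image: one bright Gaussian blob (scale ~4 px) plus noise.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
image = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 4.0 ** 2))
image += 0.05 * rng.normal(size=image.shape)

# Scale-normalized response: sigma^2 * LoG has comparable magnitude
# across scales, so its extremum estimates blob position and scale.
best = None
for sigma in (2.0, 4.0, 8.0):
    response = -sigma ** 2 * gaussian_laplace(image, sigma)  # bright blobs -> positive
    peak = np.unravel_index(np.argmax(response), response.shape)
    if best is None or response[peak] > best[0]:
        best = (response[peak], peak, sigma)

print("blob prominence %.3f at %s, scale sigma=%.1f" % best)
```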