937 research outputs found
The Mathematics and Statistics of Quantitative Risk Management
[No abstract available]
A survey of the Schrödinger problem and some of its connections with optimal transport
This article is aimed at presenting the Schrödinger problem and some of its connections with optimal transport. We hope that it can be used as a basic user's guide to the Schrödinger problem. We also give a survey of the related literature. In addition, some new results are proved.
Comment: To appear in Discrete & Continuous Dynamical Systems - Series A, special issue on optimal transport.
The History of the Quantitative Methods in Finance Conference Series, 1992-2007
This report charts the history of the Quantitative Methods in Finance (QMF) conference from its beginning in 1993 to the 15th conference in 2007. It lists alphabetically the 1037 speakers who presented at all 15 conferences and the titles of their papers.
Self-Adversarially Learned Bayesian Sampling
Scalable Bayesian sampling plays an important role in modern machine learning, especially in fast-developing unsupervised (deep) learning models. While tremendous progress has been made with scalable Bayesian samplers such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD), the generated samples are typically highly correlated. Moreover, their sample-generation processes are often criticized as inefficient. In this paper, we propose a novel self-adversarial learning framework that automatically learns a conditional generator to mimic the behavior of a Markov (transition) kernel. High-quality samples can be generated efficiently by direct forward passes through the learned generator. Most importantly, the learning process adopts a self-learning paradigm, requiring no information about existing Markov kernels, e.g., knowledge of how to draw samples from them. Specifically, our framework learns to use current samples, either from the generator or from pre-provided training data, to update the generator so that the generated samples progressively approach a target distribution; hence the name self-learning. Experiments on both synthetic and real datasets verify the advantages of our framework, which outperforms related methods in both sampling efficiency and sample quality.
Comment: AAAI 201
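The SG-MCMC baseline mentioned in the abstract can be illustrated with a minimal stochastic gradient Langevin dynamics (SGLD) update; this is a standard sketch of that family of samplers, not the paper's self-adversarial method:

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One stochastic gradient Langevin dynamics (SGLD) update.

    grad_log_post is a (possibly minibatch) estimate of the gradient
    of the log posterior at theta; injected Gaussian noise makes the
    iterates approximate samples rather than a mode.
    """
    noise = rng.normal(size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + np.sqrt(step_size) * noise

# Toy example: standard Gaussian target, so grad log p(x) = -x.
rng = np.random.default_rng(0)
theta = np.zeros(1)
samples = []
for _ in range(5000):
    theta = sgld_step(theta, lambda t: -t, step_size=0.1, rng=rng)
    samples.append(theta[0])
print(float(np.mean(samples)))  # sample mean, close to 0
```

Consecutive SGLD iterates are highly correlated, which is exactly the inefficiency the abstract's learned generator aims to avoid by producing samples in independent forward passes.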
Exploring Probability Measures with Markov Processes
In many domains where mathematical modelling is applied, a deterministic description of the system at hand is insufficient, and so it is useful to model systems as being in some way stochastic. This is often achieved by modelling the state of the system as being drawn from a probability measure, which is usually given algebraically, i.e. as a formula. While this representation can be useful for deriving certain characteristics of the system, it is by now well-appreciated that many questions about stochastic systems are best answered by looking at samples from the associated probability measure. In this thesis, we seek to develop and analyse efficient techniques for generating samples from a given probability measure, with a focus on algorithms which simulate a Markov process with the desired invariant measure.
The first work presented in this thesis considers the use of Piecewise-Deterministic Markov Processes (PDMPs) for generating samples. In contrast to usual approaches, PDMPs are i) defined as continuous-time processes, and ii) are typically non-reversible with respect to their invariant measure. These distinctions pose computational and theoretical challenges for the design, analysis, and implementation of PDMP-based samplers. The key contribution of this work is to develop a transparent characterisation of how one can construct a PDMP (within the class of trajectorially-reversible processes) which admits the desired invariant measure, and to offer actionable recommendations on how these processes should be designed in practice.
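The flavour of PDMP-based sampling can be conveyed by the one-dimensional Zig-Zag process, a standard example from this literature (not the thesis's own construction): a particle drifts at unit speed and flips its velocity at random event times whose rate depends on the target, here a standard Gaussian with U(x) = x²/2 and event rate max(0, v·x).

```python
import numpy as np

def zigzag_1d(n_events, rng):
    """Zig-Zag sampler for a standard Gaussian target, U(x) = x^2 / 2.

    Between events the state moves deterministically, x(t) = x + v t;
    velocity flips occur at rate lambda(x, v) = max(0, v * x).  Event
    times are drawn by exact inversion of the integrated rate.
    """
    x, v = 0.0, 1.0
    path = []                       # (position, velocity, holding time)
    for _ in range(n_events):
        a = v * x
        e = rng.exponential()
        t = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)  # next flip time
        path.append((x, v, t))
        x += v * t                  # deterministic drift to the event
        v = -v                      # flip the velocity
    return path

rng = np.random.default_rng(1)
path = zigzag_1d(20000, rng)
# Time-average of x(s) along the continuous trajectory estimates E[x] = 0;
# each segment contributes the exact integral x*t + v*t^2/2.
num = sum(x * t + 0.5 * v * t * t for x, v, t in path)
den = sum(t for _, _, t in path)
print(num / den)  # close to 0
```

Note the two features highlighted above: the sampler is genuinely continuous-time (estimates are trajectory averages, not sums over discrete states), and the dynamics are non-reversible (the velocity gives the process a persistent direction of motion).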
The second work presented in this thesis considers the task of sampling from a probability measure on a discrete space. While work in recent years has made it possible to apply sampling algorithms to probability measures with differentiable densities on continuous spaces in a reasonably generic way, samplers on discrete spaces are still largely derived on a case-by-case basis. The contention of this work is that this is not necessary, and that one can in fact define quite generally applicable algorithms which sample efficiently from discrete probability measures. The contributions are then to propose a small collection of algorithms for this task and to verify their efficiency empirically. Building on the previous chapter's work, our samplers are again defined in continuous time and are non-reversible, each of which offers noticeable benefits in efficiency.
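To make the "generic discrete sampler" idea concrete, here is a minimal single-flip Metropolis sampler on {0,1}^d that works for any unnormalised log-probability. It is a reversible, discrete-time baseline, illustrating the generic interface rather than the thesis's continuous-time non-reversible schemes:

```python
import numpy as np

def discrete_metropolis(log_prob, x0, n_steps, rng):
    """Single-flip Metropolis sampler on {0,1}^d.

    `log_prob` is any unnormalised log-probability on binary vectors;
    nothing target-specific is hard-coded, which is the sense in which
    such samplers are generically applicable.
    """
    x = x0.copy()
    lp = log_prob(x)
    samples = []
    for _ in range(n_steps):
        i = rng.integers(len(x))        # propose flipping one coordinate
        x[i] ^= 1
        lp_new = log_prob(x)
        if np.log(rng.uniform()) < lp_new - lp:
            lp = lp_new                 # accept the flip
        else:
            x[i] ^= 1                   # reject: undo the flip
        samples.append(x.copy())
    return np.array(samples)

# Target: independent bits with P(x_i = 1) = 0.8.
p = 0.8
log_prob = lambda x: float(np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)))
rng = np.random.default_rng(2)
samples = discrete_metropolis(log_prob, np.zeros(4, dtype=int), 4000, rng)
print(float(samples[1000:].mean()))  # close to 0.8 after burn-in
```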
The third work presented in this thesis concerns a theoretical study of a particular class of Markov chain-based sampling algorithms which make use of parallel computing resources. The Markov chains produced by this algorithm are mathematically equivalent to a standard Metropolis-Hastings chain, but their real-time convergence properties are affected nontrivially by the application of parallelism. The contribution of this work is to analyse the convergence behaviour of these chains, and to use the ‘optimal scaling’ framework (as developed by Roberts, Rosenthal, and others) to make recommendations concerning the tuning of such algorithms in practice.
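The chain underlying this study, and the tuning question the optimal-scaling framework answers, can be sketched with a plain random-walk Metropolis sampler (the parallel variants are equivalent in law to this chain). The classical heuristic from the optimal-scaling literature is to choose the proposal scale so that the acceptance rate lands near 0.234 in high dimensions:

```python
import numpy as np

def random_walk_metropolis(log_prob, x0, n_steps, scale, rng):
    """Random-walk Metropolis with Gaussian proposals; returns the
    empirical acceptance rate, the quantity one tunes in practice."""
    x = np.asarray(x0, dtype=float)
    lp = log_prob(x)
    accepts = 0
    for _ in range(n_steps):
        prop = x + scale * rng.normal(size=x.shape)
        lp_prop = log_prob(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepts += 1
    return accepts / n_steps

# Optimal-scaling heuristic: per-coordinate proposal scale ~ 2.38 / sqrt(d)
# yields an acceptance rate near 0.234 for product-form targets.
rng = np.random.default_rng(3)
d = 50
log_prob = lambda x: -0.5 * float(np.dot(x, x))   # standard Gaussian in d dims
rate = random_walk_metropolis(log_prob, np.zeros(d), 5000, 2.38 / np.sqrt(d), rng)
print(rate)  # acceptance rate, roughly in the 0.2-0.3 range
```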
The introductory chapters provide a general overview of the task of generating samples from a probability measure, with particular focus on methods involving Markov processes. There is also an interlude on the relative benefits of i) continuous-time and ii) non-reversible Markov processes for sampling, which is intended to provide additional context for the reading of the first two works.
PhD studentship paid for by the Cantab Capital Institute for the Mathematics of Information.