937 research outputs found

    Data-Based Inference of Generators for Markov Jump Processes Using Convex Optimization

    Full text link

    A survey of the Schrödinger problem and some of its connections with optimal transport

    Full text link
    This article is aimed at presenting the Schrödinger problem and some of its connections with optimal transport. We hope that it can be used as a basic user's guide to the Schrödinger problem. We also give a survey of the related literature. In addition, some new results are proved.
    Comment: To appear in Discrete & Continuous Dynamical Systems - Series A, special issue on optimal transport
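
    For context, the static Schrödinger problem over discrete marginals is equivalent to entropically regularized optimal transport, which the classical Sinkhorn (iterative proportional fitting) algorithm solves. The sketch below is illustrative and not taken from the survey; the marginals mu and nu, the cost matrix C, and the regularization strength eps are all assumptions made for the example.

    # Minimal Sinkhorn sketch (illustrative; not from the survey above).
    # Solves min_P <P, C> - eps * H(P) subject to P 1 = mu, P^T 1 = nu,
    # whose solution is the static Schrodinger bridge between mu and nu
    # with Gibbs reference kernel K = exp(-C / eps).
    import numpy as np

    def sinkhorn(mu, nu, C, eps=0.1, n_iters=500):
        K = np.exp(-C / eps)                 # Gibbs kernel
        u = np.ones_like(mu)
        for _ in range(n_iters):             # alternating marginal fitting
            v = nu / (K.T @ u)               # match the second marginal
            u = mu / (K @ v)                 # match the first marginal
        return u[:, None] * K * v[None, :]   # optimal coupling P

    # Illustrative example: two discrete marginals on a 1-D grid.
    x = np.linspace(0.0, 1.0, 50)
    mu = np.exp(-(x - 0.3) ** 2 / 0.01); mu /= mu.sum()
    nu = np.exp(-(x - 0.7) ** 2 / 0.02); nu /= nu.sum()
    C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
    P = sinkhorn(mu, nu, C)
    print(P.sum(), np.abs(P.sum(axis=1) - mu).max())  # ~1.0, ~0.0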

    The History of the Quantitative Methods in Finance Conference Series. 1992-2007

    Get PDF
    This report charts the history of the Quantitative Methods in Finance (QMF) conference from its beginning in 1993 to the 15th conference in 2007. It lists alphabetically the 1037 speakers who presented across the 15 conferences, together with the titles of their papers.

    Self-Adversarially Learned Bayesian Sampling

    Full text link
    Scalable Bayesian sampling plays an important role in modern machine learning, especially in fast-developing unsupervised (deep) learning models. While tremendous progress has been achieved with scalable Bayesian samplers such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD), the generated samples are typically highly correlated, and the sample-generation process is often criticized as inefficient. In this paper, we propose a novel self-adversarial learning framework that automatically learns a conditional generator to mimic the behavior of a Markov (transition) kernel. High-quality samples can be generated efficiently by direct forward passes through the learned generator. Most importantly, the learning process adopts a self-learning paradigm, requiring no information about existing Markov kernels, e.g., knowledge of how to draw samples from them. Specifically, our framework learns to use current samples, drawn either from the generator or from pre-provided training data, to update the generator so that the generated samples progressively approach a target distribution; hence the name self-learning. Experiments on both synthetic and real datasets verify the advantages of our framework, which outperforms related methods in both sampling efficiency and sample quality.
    Comment: AAAI 201
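
    For context, here is a minimal sketch of SVGD, one of the scalable Bayesian samplers the abstract cites as background; it is not the paper's self-adversarial generator. The target density (a two-component Gaussian mixture), the kernel bandwidth, the step size, and the iteration count are illustrative assumptions.

    # Minimal SVGD sketch (illustrative baseline; not the paper's method).
    # Particles follow the kernelized Stein direction:
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    import numpy as np

    def grad_log_p(x):
        """Score of an assumed target: 0.5*N(-2, 1) + 0.5*N(2, 1)."""
        w = np.stack([np.exp(-(x + 2) ** 2 / 2), np.exp(-(x - 2) ** 2 / 2)])
        w /= w.sum(axis=0)                    # component responsibilities
        return -(x + 2) * w[0] - (x - 2) * w[1]

    def svgd_step(x, step=0.1, h=1.0):
        """One SVGD update with a fixed-bandwidth RBF kernel."""
        diff = x[:, None] - x[None, :]        # pairwise differences
        K = np.exp(-diff ** 2 / h)            # RBF kernel matrix
        gradK = -2.0 * diff / h * K           # gradK[a, b] = d k(x_a, x_b) / d x_a
        # Attractive term (kernel-smoothed score) plus repulsive term.
        phi = (K @ grad_log_p(x) + gradK.sum(axis=0)) / len(x)
        return x + step * phi

    x = np.random.randn(100)                  # initial particles
    for _ in range(500):
        x = svgd_step(x)
    print(x.mean(), x.std())                  # should approach ~0 and ~2.24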