575 research outputs found

    Controlled Sequential Monte Carlo

    Sequential Monte Carlo methods, also known as particle methods, are a popular set of techniques for approximating high-dimensional probability distributions and their normalizing constants. These methods have found numerous applications in statistics and related fields; e.g. for inference in non-linear non-Gaussian state space models, and in complex static models. Like many Monte Carlo sampling schemes, they rely on proposal distributions which crucially impact their performance. We introduce here a class of controlled sequential Monte Carlo algorithms, where the proposal distributions are determined by approximating the solution to an associated optimal control problem using an iterative scheme. This method builds upon a number of existing algorithms in econometrics, physics, and statistics for inference in state space models, and generalizes these methods so as to accommodate complex static models. We provide a theoretical analysis concerning the fluctuation and stability of this methodology that also provides insight into the properties of related algorithms. We demonstrate significant gains over state-of-the-art methods at a fixed computational complexity on a variety of applications.
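
    To make the role of the proposal concrete, here is a minimal bootstrap particle filter for a toy linear-Gaussian state space model, written as a hedged sketch rather than as the controlled-SMC algorithm itself: the particles are propagated with the prior dynamics (the simplest proposal), and the running estimate of the normalizing constant is exactly the quantity that a better, control-informed proposal would estimate with lower variance. The model, parameter values, and function names below are illustrative assumptions, not taken from the paper.

        import numpy as np

        def bootstrap_filter(y, n_particles=500, phi=0.9, sigma_x=1.0, sigma_y=1.0, seed=0):
            """Bootstrap particle filter; returns the log normalizing-constant estimate."""
            rng = np.random.default_rng(seed)
            x = rng.normal(0.0, sigma_x / np.sqrt(1 - phi**2), size=n_particles)  # stationary init
            log_Z = 0.0
            for t in range(len(y)):
                if t > 0:
                    x = phi * x + sigma_x * rng.normal(size=n_particles)  # propagate with the prior "proposal"
                log_w = -0.5 * ((y[t] - x) / sigma_y) ** 2 - np.log(sigma_y * np.sqrt(2 * np.pi))
                m = log_w.max()
                w = np.exp(log_w - m)
                log_Z += m + np.log(w.mean())  # accumulate the likelihood increment
                x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]  # multinomial resampling
            return log_Z

        # Simulate data from the same toy model and estimate its log-likelihood.
        rng = np.random.default_rng(1)
        states = np.zeros(100)
        for t in range(1, 100):
            states[t] = 0.9 * states[t - 1] + rng.normal()
        obs = states + rng.normal(size=100)
        print(bootstrap_filter(obs))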

    Training Winner-Take-All Simultaneous Recurrent Neural Networks

    The winner-take-all (WTA) network is useful in database management, very large scale integration (VLSI) design, and digital processing. The synthesis procedure for WTA on a single-layer, fully connected architecture with a sigmoid transfer function is still not fully explored. We discuss the use of simultaneous recurrent networks (SRNs) trained by Kalman filter algorithms for the task of finding the maximum among N numbers. The simulations demonstrate the effectiveness of our training approach under a shared-weight SRN architecture. A more general SRN also succeeds in solving a real classification application on car engine data.
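
    As a point of reference for the task of finding the maximum among N numbers, the sketch below implements the classic MAXNET winner-take-all competition (activations initialized at the inputs, then repeatedly reduced by uniform lateral inhibition with a hard threshold). This is not the sigmoid-activation, Kalman-filter-trained SRN described above; it only illustrates the computation such a network is trained to perform, and all parameter choices are assumptions.

        import numpy as np

        def maxnet_wta(inputs, eps=None, n_iters=100):
            """Classic MAXNET: mutual inhibition drives every unit except the winner to zero."""
            a = np.asarray(inputs, dtype=float)               # activations start at the (nonnegative) inputs
            if eps is None:
                eps = 1.0 / len(a)                            # any eps < 1/(N-1) works
            for _ in range(n_iters):
                a = np.maximum(0.0, a - eps * (a.sum() - a))  # subtract inhibition from all other units
                if np.count_nonzero(a) <= 1:
                    break
            return a

        scores = maxnet_wta([0.2, 0.9, 0.4, 0.85])
        print(scores, int(np.argmax(scores)))                 # only the unit fed with 0.9 stays positive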

    Hessian barrier algorithms for linearly constrained optimization problems

    In this paper, we propose an interior-point method for linearly constrained optimization problems (possibly nonconvex). The method - which we call the Hessian barrier algorithm (HBA) - combines a forward Euler discretization of Hessian Riemannian gradient flows with an Armijo backtracking step-size policy. In this way, HBA can be seen as an alternative to mirror descent (MD), and contains as special cases the affine scaling algorithm, regularized Newton processes, and several other iterative solution methods. Our main result is that, modulo a non-degeneracy condition, the algorithm converges to the problem's set of critical points; hence, in the convex case, the algorithm converges globally to the problem's minimum set. In the case of linearly constrained quadratic programs (not necessarily convex), we also show that the method's convergence rate is $\mathcal{O}(1/k^\rho)$ for some $\rho \in (0,1]$ that depends only on the choice of kernel function (i.e., not on the problem's primitives). These theoretical results are validated by numerical experiments on standard non-convex test functions and large-scale traffic assignment problems. Comment: 27 pages, 6 figures.
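
    As a rough illustration of the kind of update involved, the sketch below takes forward Euler steps along $-H(x)^{-1}\nabla f(x)$ on the positive orthant, where $H$ is the Hessian of the log-barrier $-\sum_i \log x_i$ (so $H(x)^{-1} = \mathrm{diag}(x_i^2)$, the affine-scaling special case mentioned in the abstract), with an Armijo backtracking step size that also enforces strict feasibility. The test problem, kernel choice, and parameter values are assumptions made for this sketch, not the paper's algorithm or experiments.

        import numpy as np

        def hba_orthant(f, grad_f, x0, n_iters=200, alpha0=1.0, shrink=0.5, c1=1e-4):
            """Hessian-barrier-style steps on {x > 0} with Armijo backtracking."""
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iters):
                g = grad_f(x)
                d = -(x ** 2) * g                             # direction H(x)^{-1} * (-grad), log-barrier kernel
                alpha, fx = alpha0, f(x)
                while not (np.all(x + alpha * d > 0) and f(x + alpha * d) <= fx + c1 * alpha * (g @ d)):
                    alpha *= shrink                           # backtrack until feasible and sufficiently decreasing
                    if alpha < 1e-16:
                        return x
                x = x + alpha * d
            return x

        # Toy convex quadratic over the positive orthant; the constrained minimizer is (0, 1).
        Q = np.array([[2.0, 0.5], [0.5, 1.0]])
        b = np.array([1.0, -1.0])
        f = lambda x: 0.5 * x @ Q @ x + b @ x
        grad_f = lambda x: Q @ x + b
        print(hba_orthant(f, grad_f, x0=np.array([1.0, 1.0])))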

    Sequential Kernel Herding: Frank-Wolfe Optimization for Particle Filtering

    Recently, the Frank-Wolfe optimization algorithm was suggested as a procedure to obtain adaptive quadrature rules for integrals of functions in a reproducing kernel Hilbert space (RKHS) with a potentially faster rate of convergence than Monte Carlo integration (and "kernel herding" was shown to be a special case of this procedure). In this paper, we propose to replace the random sampling step in a particle filter by Frank-Wolfe optimization. By optimizing the position of the particles, we can obtain better accuracy than random or quasi-Monte Carlo sampling. In applications where the evaluation of the emission probabilities is expensive (such as in robot localization), the additional computational cost to generate the particles through optimization can be justified. Experiments on standard synthetic examples as well as on a robot localization task indeed indicate an improvement in accuracy over random and quasi-Monte Carlo sampling. Comment: In the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2015, San Diego, United States; JMLR Workshop and Conference Proceedings, vol. 38.
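
    The optimization idea can be illustrated outside the filtering context with plain kernel herding, the special case mentioned in the abstract: points are chosen greedily so that their empirical mean embedding in an RKHS tracks the target's. The pool-based one-dimensional setup below, the Gaussian kernel bandwidth, and the mixture target are assumptions for the sketch; the paper's contribution of embedding this step inside a particle filter is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def k(a, b, bw=0.3):                              # Gaussian (RBF) kernel matrix
            return np.exp(-0.5 * ((a[:, None] - b[None, :]) / bw) ** 2)

        # Target: equal mixture of N(-1, 0.3^2) and N(1, 0.3^2), represented by a large reference sample.
        target = np.concatenate([rng.normal(-1, 0.3, 5000), rng.normal(1, 0.3, 5000)])
        pool = np.linspace(-3, 3, 601)                    # candidate locations to herd from
        mu_pool = k(pool, target).mean(axis=1)            # approximate mean embedding evaluated on the pool

        def herd(n_points):
            chosen = []
            for t in range(n_points):
                penalty = k(pool, np.array(chosen)).mean(axis=1) if chosen else 0.0
                scores = mu_pool - (t / (t + 1.0)) * penalty   # herding objective
                chosen.append(pool[int(np.argmax(scores))])
            return np.array(chosen)

        # Compare integration error for E[X^2] against i.i.d. samples of the same size;
        # the herded points are typically (not always) more accurate.
        pts, iid = herd(50), target[rng.choice(target.size, 50, replace=False)]
        truth = (target ** 2).mean()
        print(abs((pts ** 2).mean() - truth), abs((iid ** 2).mean() - truth))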

    Selecting time-series hyperparameters with the artificial jackknife

    This article proposes a generalisation of the delete-$d$ jackknife to solve hyperparameter selection problems for time series. This novel technique is compatible with dependent data since it substitutes the jackknife removal step with a fictitious deletion, wherein observed datapoints are replaced with artificial missing values. In order to emphasise this point, I call this methodology the artificial delete-$d$ jackknife. As an illustration, it is used to regularise vector autoregressions with an elastic-net penalty on the coefficients. A software implementation, ElasticNetVAR.jl, is available on GitHub.
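
    A hedged sketch of the fictitious-deletion idea, using a deliberately simple model: replace d randomly chosen observations with missing values, fit on what remains, score on the artificially missing points, and repeat across candidate penalty values. The ridge-penalised AR(1), the penalty grid, and the scoring details below are stand-ins chosen for brevity; the article's application is an elastic-net VAR via ElasticNetVAR.jl.

        import numpy as np

        rng = np.random.default_rng(0)

        def fit_ar1_ridge(y, mask, lam):
            """Ridge-penalised AR(1) slope from pairs (y[t-1], y[t]) that are both observed."""
            ok = mask[1:] & mask[:-1]
            x_prev, x_next = y[:-1][ok], y[1:][ok]
            return (x_prev @ x_next) / (x_prev @ x_prev + lam)

        def artificial_jackknife_score(y, lam, d=20, n_draws=50):
            T, errs = len(y), []
            for _ in range(n_draws):
                mask = np.ones(T, dtype=bool)
                mask[rng.choice(T - 1, size=d, replace=False) + 1] = False   # fictitious deletion
                phi = fit_ar1_ridge(y, mask, lam)
                held = np.flatnonzero(~mask)
                # For simplicity the one-step forecast uses the true previous value,
                # even when that value was itself artificially removed.
                errs.append(np.mean((y[held] - phi * y[held - 1]) ** 2))
            return np.mean(errs)

        # Simulate an AR(1) series and pick the penalty with the best artificial-jackknife score.
        y = np.zeros(300)
        for t in range(1, 300):
            y[t] = 0.7 * y[t - 1] + rng.normal()
        grid = [0.0, 1.0, 10.0, 100.0]
        scores = {lam: artificial_jackknife_score(y, lam) for lam in grid}
        print(min(scores, key=scores.get), scores)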

    Economics and the Complexity Vision: Chimerical Partners or Elysian Adventurers?

    This work began as a review article of two volumes: "Complexity and the History of Economic Thought", edited by David Colander, Routledge, London, UK, 2000, and "The Complexity Vision and the Teaching of Economics", edited by David Colander, Edward Elgar, Cheltenham, UK, 2000. It has, in the writing, developed into my own vision of complexity economics.

    Learning Program Specifications from Sample Runs

    With the science fiction of yore recently becoming reality in the form of self-driving cars, wearable computers, and autonomous robots, software reliability is growing increasingly important. A critical prerequisite to ensuring that the software controlling such systems is correct is the availability of precise specifications that describe a program's intended behaviors. Generating these specifications manually is a challenging, often unsuccessful, exercise; unfortunately, existing static analysis techniques often produce poor-quality specifications that are ineffective in aiding program verification tasks. In this dissertation, we present a recent line of work on automated synthesis of specifications that overcomes many of the deficiencies that plague existing specification inference methods. Our main contribution is a formulation of the problem as a sample-driven one, in which specifications, represented as terms in a decidable refinement type representation, are discovered from observing a program's sample runs in terms of either program execution paths or input-output values, and are automatically verified through the use of expressive refinement type systems. Our approach is realized as a series of inductive synthesis frameworks, which use various logic-based or classification-based learning algorithms to provide sound and precise machine-checked specifications. Experimental results indicate that the learning algorithms are both efficient and effective, capable of automatically producing sophisticated specifications in nontrivial hypothesis domains over a range of complex real-world programs, going well beyond the capabilities of existing solutions.
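
    A toy version of the sample-driven idea, heavily simplified: run the program under analysis on sample inputs, enumerate a small set of candidate predicates over input-output pairs, and keep the ones consistent with every observed run as conjectured (still unverified) specifications. The predicate set and the target function below are illustrative assumptions, not the refinement-type language or the learning algorithms developed in the dissertation.

        import random

        def program(xs):                        # the "unknown" program under analysis
            return sorted(xs)

        random.seed(0)
        runs = [(xs, program(xs)) for xs in
                (random.sample(range(-50, 50), random.randint(0, 8)) for _ in range(200))]

        candidates = {
            "len(out) == len(inp)":             lambda inp, out: len(out) == len(inp),
            "out is sorted":                    lambda inp, out: all(a <= b for a, b in zip(out, out[1:])),
            "out is a permutation of inp":      lambda inp, out: sorted(out) == sorted(inp),
            "every element of out is positive": lambda inp, out: all(v > 0 for v in out),
            "out == inp":                       lambda inp, out: out == inp,
        }

        # Predicates that no sample run falsifies become the conjectured postcondition;
        # a verifier (e.g. a refinement type checker) would then have to confirm them.
        spec = [name for name, pred in candidates.items()
                if all(pred(inp, out) for inp, out in runs)]
        print(spec)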