
    Universal Scalable Robust Solvers from Computational Information Games and fast eigenspace adapted Multiresolution Analysis

    We show how the discovery of robust scalable numerical solvers for arbitrary bounded linear operators can be automated as a Game Theory problem by reformulating the process of computing with partial information and limited resources as that of playing underlying hierarchies of adversarial information games. When the solution space is a Banach space $B$ endowed with a quadratic norm $\|\cdot\|$, the optimal measure (mixed strategy) for such games (e.g. the adversarial recovery of $u \in B$, given partial measurements $[\phi_i, u]$ with $\phi_i \in B^*$, using relative error in $\|\cdot\|$-norm as a loss) is a centered Gaussian field $\xi$ solely determined by the norm $\|\cdot\|$, whose conditioning (on measurements) produces optimal bets. When measurements are hierarchical, the process of conditioning this Gaussian field produces a hierarchy of elementary bets (gamblets). These gamblets generalize the notion of wavelets and Wannier functions in the sense that they are adapted to the norm $\|\cdot\|$ and induce a multi-resolution decomposition of $B$ that is adapted to the eigensubspaces of the operator defining the norm $\|\cdot\|$. When the operator is localized, we show that the resulting gamblets are localized both in space and frequency and introduce the Fast Gamblet Transform (FGT) with rigorous accuracy and (near-linear) complexity estimates. As the FFT can be used to solve and diagonalize arbitrary PDEs with constant coefficients, the FGT can be used to decompose a wide range of continuous linear operators (including arbitrary continuous linear bijections from $H^s_0$ to $H^{-s}$ or to $L^2$) into a sequence of independent linear systems with uniformly bounded condition numbers, and leads to $\mathcal{O}(N \operatorname{polylog} N)$ solvers and eigenspace adapted Multiresolution Analysis (resulting in near-linear complexity approximation of all eigensubspaces).
    Comment: 142 pages, 14 figures. Presented at AFOSR (Aug 2016), DARPA (Sep 2016), IPAM (Apr 3, 2017), Hausdorff (Apr 13, 2017) and ICERM (Jun 5, 2017).
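
    In finite dimensions the conditioning step described above reduces to a Gaussian conditional-mean formula, and the gamblets are the columns of the resulting interpolation operator. The sketch below is a minimal illustration under stated assumptions (a discretized SPD matrix A standing in for the operator defining the norm, coarse local averages standing in for the hierarchical measurements phi_i); it is not the paper's implementation of the Fast Gamblet Transform.

        import numpy as np

        # Minimal sketch: xi ~ N(0, A^{-1}) is the optimal mixed strategy, and
        # conditioning on the measurements Phi produces the optimal bets.
        def gamblets(A, Phi):
            """Elementary bets: columns of K Phi^T (Phi K Phi^T)^{-1}, K = A^{-1}."""
            K = np.linalg.inv(A)                 # covariance of the Gaussian field
            G = Phi @ K @ Phi.T                  # Gram matrix of the measurements
            return K @ Phi.T @ np.linalg.inv(G)

        def optimal_bet(A, Phi, y):
            """Conditional mean E[xi | Phi xi = y]: the minimax recovery of u."""
            return gamblets(A, Phi) @ y

        # Toy usage: a 1D Laplacian-like SPD operator with 8 coarse local averages.
        n = 64
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        Phi = np.kron(np.eye(8), np.ones((1, 8)) / 8)
        u = np.random.default_rng(0).standard_normal(n)
        u_hat = optimal_bet(A, Phi, Phi @ u)     # recovers u up to fine-scale detail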

    Large Data and Zero Noise Limits of Graph-Based Semi-Supervised Learning Algorithms

    Scalings in which the graph Laplacian approaches a differential operator in the large graph limit are used to develop understanding of a number of algorithms for semi-supervised learning; in particular, the extensions to this graph setting of the probit algorithm and of level set and kriging methods are studied. Both optimization and Bayesian approaches are considered, based around a regularizing quadratic form found from an affine transformation of the Laplacian, raised to a possibly fractional exponent. Conditions on the parameters defining this quadratic form are identified under which well-defined limiting continuum analogues of the optimization and Bayesian semi-supervised learning problems may be found, thereby shedding light on the design of algorithms in the large graph setting. The large graph limits of the optimization formulations are tackled through $\Gamma$-convergence, using the recently introduced $TL^p$ metric. The small labelling noise limits of the Bayesian formulations are also identified, and contrasted with pre-existing harmonic function approaches to the problem.
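
    A hedged sketch of the regularizing quadratic form described above: the unnormalized graph Laplacian, shifted and raised to a (possibly fractional) exponent, used for a kriging-style posterior mean. The parameter names (tau, alpha, noise) are illustrative assumptions, not the paper's notation.

        import numpy as np

        def fractional_precision(W, tau=1.0, alpha=2.0):
            """P = (L + tau^2 I)^alpha via the eigendecomposition of the Laplacian L."""
            L = np.diag(W.sum(axis=1)) - W
            lam, V = np.linalg.eigh(L)
            return V @ np.diag((lam + tau**2) ** alpha) @ V.T

        def krige(W, labelled, y, tau=1.0, alpha=2.0, noise=1e-2):
            """Posterior mean of a Gaussian with covariance P^{-1}, observed at `labelled`."""
            C = np.linalg.inv(fractional_precision(W, tau, alpha))
            Cll = C[np.ix_(labelled, labelled)] + noise * np.eye(len(labelled))
            return C[:, labelled] @ np.linalg.solve(Cll, np.asarray(y))

        # Toy usage: a path graph labelled at its two endpoints.
        n = 20
        W = np.eye(n, k=1) + np.eye(n, k=-1)
        u = krige(W, labelled=[0, n - 1], y=[-1.0, 1.0])  # interpolates between labels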

    Eigenvalue Bounds on Restrictions of Reversible Nearly Uncoupled Markov Chains

    In this paper we analyze decompositions of reversible nearly uncoupled Markov chains into rapidly mixing subchains. We state upper bounds on the second eigenvalue for restriction and stochastic complementation chains of reversible Markov chains, as well as a relation between them. We illustrate the obtained bounds analytically for bunkbed graphs, and furthermore apply them to restricted Markov chains that arise when analyzing the conformation dynamics of a small biomolecule.
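
    The two subchains compared above can be written down directly for a reversible transition matrix P and a subset S of states; a minimal sketch (standard constructions, illustrative variable names) follows.

        import numpy as np

        def restriction_chain(P, S):
            """Moves leaving S are rejected: their mass is added to the diagonal."""
            Q = P[np.ix_(S, S)].copy()
            Q[np.diag_indices_from(Q)] += 1.0 - Q.sum(axis=1)
            return Q

        def stochastic_complement(P, S):
            """Time outside S is censored: P_SS + P_SC (I - P_CC)^{-1} P_CS."""
            C = [i for i in range(P.shape[0]) if i not in S]
            inner = np.linalg.inv(np.eye(len(C)) - P[np.ix_(C, C)])
            return P[np.ix_(S, S)] + P[np.ix_(S, C)] @ inner @ P[np.ix_(C, S)]

        def second_eigenvalue(Q):
            """Second-largest eigenvalue, the quantity bounded in the paper."""
            return np.sort(np.linalg.eigvals(Q).real)[-2]

        # Toy usage: two rapidly mixing 2-state blocks, weakly coupled through eps.
        eps = 0.01
        P = np.array([[0.5 - eps, 0.5, eps, 0.0],
                      [0.5, 0.5 - eps, 0.0, eps],
                      [eps, 0.0, 0.5 - eps, 0.5],
                      [0.0, eps, 0.5, 0.5 - eps]])
        print(second_eigenvalue(restriction_chain(P, [0, 1])),
              second_eigenvalue(stochastic_complement(P, [0, 1])))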

    Manifold structured prediction

    Structured prediction provides a general framework to deal with supervised problems where the outputs have semantically rich structure. While classical approaches consider finite, albeit potentially huge, output spaces, in this paper we discuss how structured prediction can be extended to a continuous scenario. Specifically, we study a structured prediction approach to manifold-valued regression. We characterize a class of problems for which the considered approach is statistically consistent and study how geometric optimization can be used to compute the corresponding estimator. Promising experimental results on both simulated and real data complete our study.
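
    A minimal sketch of the kind of estimator this abstract describes, assuming outputs on the unit sphere S^2: kernel-ridge weights alpha_i(x) combined in a weighted Fréchet objective, minimized by Riemannian gradient descent via the sphere's exp/log maps. The names (rbf, predict, lam) and the warm start are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def rbf(X, Z, gamma=1.0):
            """Gaussian kernel matrix between rows of X and rows of Z."""
            d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def log_map(p, q):
            """Tangent vector at p pointing along the geodesic to q."""
            c = np.clip(p @ q, -1.0, 1.0)
            v = q - c * p
            nv = np.linalg.norm(v)
            return np.zeros_like(p) if nv < 1e-12 else np.arccos(c) * v / nv

        def exp_map(p, v):
            """Geodesic step from p along the tangent vector v."""
            nv = np.linalg.norm(v)
            return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

        def predict(X_train, Y_train, x, lam=1e-2, steps=100, lr=0.5):
            """argmin_{y in S^2} sum_i alpha_i(x) d(y, y_i)^2 (geometric optimization)."""
            n = len(X_train)
            alpha = np.linalg.solve(rbf(X_train, X_train) + n * lam * np.eye(n),
                                    rbf(X_train, x[None, :])).ravel()
            y = Y_train[np.argmax(alpha)]      # warm start at the heaviest output
            for _ in range(steps):
                logs = np.array([log_map(y, yi) for yi in Y_train])
                grad = -2.0 * (alpha[:, None] * logs).sum(axis=0)
                y = exp_map(y, -lr * grad)
            return y

        # Toy usage with random inputs in R^2 and outputs on the sphere.
        rng = np.random.default_rng(0)
        X_train = rng.standard_normal((30, 2))
        Y_train = rng.standard_normal((30, 3))
        Y_train /= np.linalg.norm(Y_train, axis=1, keepdims=True)
        y_hat = predict(X_train, Y_train, x=np.zeros(2))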

    Learning non-Gaussian graphical models via Hessian scores and triangular transport

    Undirected probabilistic graphical models represent the conditional dependencies, or Markov properties, of a collection of random variables. Knowing the sparsity of such a graphical model is valuable for modeling multivariate distributions and for efficiently performing inference. While the problem of learning graph structure from data has been studied extensively for certain parametric families of distributions, most existing methods fail to consistently recover the graph structure for non-Gaussian data. Here we propose an algorithm for learning the Markov structure of continuous and non-Gaussian distributions. To characterize conditional independence, we introduce a score based on integrated Hessian information from the joint log-density, and we prove that this score upper bounds the conditional mutual information for a general class of distributions. To compute the score, our algorithm SING estimates the density using a deterministic coupling, induced by a triangular transport map, and iteratively exploits sparse structure in the map to reveal sparsity in the graph. For certain non-Gaussian datasets, we show that our algorithm recovers the graph structure even with a biased approximation to the density. Among other examples, we apply SING to learn the dependencies between the states of a chaotic dynamical system with local interactions.
    Comment: 40 pages, 12 figures.
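
    A hedged sketch of the integrated Hessian score: Omega[j, k] estimates E[(d^2 log pi / dx_j dx_k)^2] by Monte Carlo with central finite differences. Assuming direct access to log_pi is an illustrative simplification; SING itself estimates the density through a triangular transport map.

        import numpy as np

        def hessian_score(log_pi, samples, h=1e-3):
            """Monte Carlo estimate of the integrated-Hessian score matrix."""
            n, d = samples.shape
            omega = np.zeros((d, d))
            for x in samples:
                for j in range(d):
                    for k in range(d):
                        ej, ek = np.eye(d)[j] * h, np.eye(d)[k] * h
                        hjk = (log_pi(x + ej + ek) - log_pi(x + ej - ek)
                               - log_pi(x - ej + ek) + log_pi(x - ej - ek)) / (4 * h * h)
                        omega[j, k] += hjk ** 2
            # Entries near zero suggest j and k are conditionally independent.
            return omega / n

        # Toy usage: a Gaussian with tridiagonal precision, i.e. a chain graph.
        prec = 2 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)
        log_pi = lambda x: -0.5 * x @ prec @ x
        rng = np.random.default_rng(0)
        samples = rng.multivariate_normal(np.zeros(4), np.linalg.inv(prec), size=50)
        print(np.round(hessian_score(log_pi, samples), 2))   # off-chain entries ~ 0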