Universal Scalable Robust Solvers from Computational Information Games and fast eigenspace adapted Multiresolution Analysis
We show how the discovery of robust scalable numerical solvers for arbitrary
bounded linear operators can be automated as a Game Theory problem by
reformulating the process of computing with partial information and limited
resources as that of playing underlying hierarchies of adversarial information
games. When the solution space is a Banach space B endowed with a quadratic
norm ‖·‖, the optimal measure (mixed strategy) for such games (e.g. the
adversarial recovery of u ∈ B, given partial measurements [φ_i, u] with
φ_i ∈ B*, using relative error in ‖·‖-norm as a loss) is a
centered Gaussian field ξ solely determined by the norm ‖·‖, whose
conditioning (on measurements) produces optimal bets. When measurements are
hierarchical, the process of conditioning this Gaussian field produces a
hierarchy of elementary bets (gamblets). These gamblets generalize the notion
of Wavelets and Wannier functions in the sense that they are adapted to the
norm ‖·‖ and induce a multi-resolution decomposition of B that is
adapted to the eigensubspaces of the operator defining the norm ‖·‖.
When the operator is localized, we show that the resulting gamblets are
localized both in space and frequency and introduce the Fast Gamblet Transform
(FGT) with rigorous accuracy and (near-linear) complexity estimates. As the FFT
can be used to solve and diagonalize arbitrary PDEs with constant coefficients,
the FGT can be used to decompose a wide range of continuous linear operators
(including arbitrary continuous linear bijections from H^s_0(Ω) to H^{-s}(Ω)
or to L^2(Ω)) into a sequence of independent linear systems with uniformly
bounded condition numbers and leads to O(N polylog(N)) solvers and eigenspace
adapted Multiresolution Analysis (resulting in near-linear complexity
approximation of all eigensubspaces).
Comment: 142 pages. 14 Figures. Presented at AFOSR (Aug 2016), DARPA (Sep
2016), IPAM (Apr 3, 2017), Hausdorff (April 13, 2017) and ICERM (June 5,
2017).
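The conditioning step the abstract describes can be sketched in a few lines. The toy below is an assumption-laden illustration, not the paper's FGT: a 1-D discrete Dirichlet Laplacian A stands in for the operator defining the norm, the Gaussian prior has covariance K = A^{-1}, and block averages stand in for one level of hierarchical measurements Φ. Conditioning the Gaussian field on Φu then gives the optimal bet E[u | Φu = y] = Ψy, whose columns are (one level of) gamblets.

```python
import numpy as np

# Hypothetical 1-D setup: discrete Dirichlet Laplacian A on 63 interior nodes;
# the quadratic norm is ||u||^2 = u^T A u and the prior covariance is K = A^{-1}.
n = 63
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
K = np.linalg.inv(A)  # covariance of the centered Gaussian prior

# One level of "hierarchical" measurements: local averages over 7 blocks of 9 nodes.
m = 7
Phi = np.zeros((m, n))
for i in range(m):
    Phi[i, 9 * i: 9 * (i + 1)] = 1.0 / 9.0

# Conditioning the Gaussian field on [phi_i, u] yields the optimal bet
# E[u | Phi u = y] = Psi @ y; the columns of Psi are the gamblets.
Psi = K @ Phi.T @ np.linalg.inv(Phi @ K @ Phi.T)
```

By construction Φ Ψ = I, so each gamblet reproduces its own measurement exactly; the (near-)locality of the columns of Ψ for localized operators is what the paper's FGT exploits.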
Large Data and Zero Noise Limits of Graph-Based Semi-Supervised Learning Algorithms
Scalings in which the graph Laplacian approaches a differential operator in the large graph limit are used to develop understanding of a number of algorithms for semi-supervised learning; in particular, the extensions, to this graph setting, of the probit algorithm, level set and kriging methods are studied. Both optimization and Bayesian approaches are considered, based around a regularizing quadratic form found from an affine transformation of the Laplacian, raised to a, possibly fractional, exponent. Conditions on the parameters defining this quadratic form are identified under which well-defined limiting continuum analogues of the optimization and Bayesian semi-supervised learning problems may be found, thereby shedding light on the design of algorithms in the large graph setting. The large graph limits of the optimization formulations are tackled through Γ-convergence, using the recently introduced TL^p metric. The small labelling noise limits of the Bayesian formulations are also identified, and contrasted with pre-existing harmonic function approaches to the problem.
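The regularizing quadratic form described above can be sketched concretely. The following is a minimal illustration, not the paper's analysis: an unnormalized graph Laplacian L is built from Gaussian weights, the quadratic form uses C = (L + τ²I)^α via the spectral decomposition (τ and α are illustrative parameter names), and the labeled values are interpolated kriging-style by minimizing u^T C u subject to hard label constraints.

```python
import numpy as np

# Toy graph from random 2-D points with Gaussian edge weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
W = np.exp(-np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W  # unnormalized graph Laplacian

# Regularizer C = (L + tau^2 I)^alpha, with a possibly fractional exponent,
# computed through the spectral decomposition of the symmetric L.
tau, alpha = 1.0, 1.0
lam, V = np.linalg.eigh(L)
C = V @ np.diag((lam + tau**2) ** alpha) @ V.T

# Two labeled nodes; minimize u^T C u subject to u[labeled] = y
# by solving the first-order conditions for the unlabeled block.
labeled = np.array([0, 1])
y = np.array([1.0, -1.0])
unlabeled = np.setdiff1d(np.arange(40), labeled)
u = np.zeros(40)
u[labeled] = y
u[unlabeled] = np.linalg.solve(C[np.ix_(unlabeled, unlabeled)],
                               -C[np.ix_(unlabeled, labeled)] @ y)
```

Since τ > 0 makes C positive definite, the unlabeled block is invertible and the constrained minimizer is unique; varying α changes how aggressively label information propagates through the graph.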
Eigenvalue Bounds on Restrictions of Reversible Nearly Uncoupled Markov Chains
In this paper we analyze decompositions of reversible nearly uncoupled Markov chains into rapidly mixing subchains. We state upper bounds on the second eigenvalue for restriction and stochastic complementation chains of reversible Markov chains, as well as a relation between them. We illustrate the obtained bounds analytically for bunkbed graphs, and furthermore apply them to restricted Markov chains that arise when analyzing the conformation dynamics of a small biomolecule.
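The two subchain constructions compared above can be sketched on a toy example (this illustrates the objects, not the paper's bounds): a reversible nearly uncoupled chain on six states made of two weakly coupled uniform blocks, its restriction chain (leaked mass returned to the diagonal), and its stochastic complement.

```python
import numpy as np

# Reversible nearly uncoupled chain: two uniform 3-state blocks with
# coupling strength eps between them.
eps = 0.01
block = np.full((3, 3), 1.0 / 3.0)
P = np.block([[(1 - eps) * block, eps * block],
              [eps * block, (1 - eps) * block]])

S = np.arange(3)       # subset of states defining the subchain
Sc = np.arange(3, 6)   # its complement

# Restriction chain: keep transitions inside S, return the leaked
# probability mass to the diagonal so rows still sum to one.
P_res = P[np.ix_(S, S)].copy()
P_res += np.diag(1.0 - P_res.sum(axis=1))

# Stochastic complement: P_SS + P_S,Sc (I - P_Sc,Sc)^{-1} P_Sc,S.
P_sc = P[np.ix_(S, S)] + P[np.ix_(S, Sc)] @ np.linalg.solve(
    np.eye(3) - P[np.ix_(Sc, Sc)], P[np.ix_(Sc, S)])

def second_eigenvalue(M):
    """Second-largest (real) eigenvalue of a stochastic matrix."""
    return np.sort(np.linalg.eigvals(M).real)[::-1][1]

lam2_res = second_eigenvalue(P_res)
lam2_sc = second_eigenvalue(P_sc)
```

On this example both subchains are rapidly mixing: the restriction chain has second eigenvalue exactly eps, while the stochastic complement of a uniform block is uniform again, with second eigenvalue zero.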
Manifold structured prediction
Structured prediction provides a general framework to deal with supervised problems where the outputs have semantically rich structure. While classical approaches consider finite, albeit potentially huge, output spaces, in this paper we discuss how structured prediction can be extended to a continuous scenario. Specifically, we study a structured prediction approach to manifold-valued regression. We characterize a class of problems for which the considered approach is statistically consistent and study how geometric optimization can be used to compute the corresponding estimator. Promising experimental results on both simulated and real data complete our study.
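A common shape for such an estimator, sketched here under stated assumptions (the kernel, its parameters, and the gradient step are illustrative choices, not the paper's), is kernel ridge weights α(x) = (K + nλI)^{-1} k_x followed by geometric optimization of the weighted objective Σ_i α_i(x) d²(y, y_i) over the manifold, here the unit sphere S².

```python
import numpy as np

def log_map(p, q):
    """Riemannian log map on S^2: tangent vector at p pointing toward q."""
    v = q - np.dot(p, q) * p
    nv = np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros(3) if nv < 1e-12 else theta * v / nv

def exp_map(p, v):
    """Riemannian exp map on S^2 (returns a unit vector)."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

# Toy manifold-valued regression data: scalar inputs, outputs on S^2.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 1))
north = np.array([0.0, 0.0, 1.0])
Y = np.array([exp_map(north, np.array([x[0], 0.5 * x[0], 0.0])) for x in X])

gamma, lam, n = 5.0, 1e-3, len(X)
K = np.exp(-gamma * (X - X.T) ** 2)  # Gaussian kernel on inputs

def predict(x):
    # Kernel ridge weights alpha(x) = (K + n*lam*I)^{-1} k_x.
    k_x = np.exp(-gamma * (X[:, 0] - x) ** 2)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), k_x)
    # Geometric optimization of sum_i alpha_i * d^2(y, y_i) by
    # Riemannian gradient steps from the most-weighted training output.
    y = Y[np.argmax(alpha)]
    for _ in range(100):
        grad = sum(a * log_map(y, yi) for a, yi in zip(alpha, Y))
        y = exp_map(y, 0.2 * grad)
    return y

y_hat = predict(0.3)
```

Because every update goes through the exponential map, the prediction stays on the manifold by construction, which is the practical payoff of the geometric formulation over naive ambient-space regression.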
Learning non-Gaussian graphical models via Hessian scores and triangular transport
Undirected probabilistic graphical models represent the conditional
dependencies, or Markov properties, of a collection of random variables.
Knowing the sparsity of such a graphical model is valuable for modeling
multivariate distributions and for efficiently performing inference. While the
problem of learning graph structure from data has been studied extensively for
certain parametric families of distributions, most existing methods fail to
consistently recover the graph structure for non-Gaussian data. Here we propose
an algorithm for learning the Markov structure of continuous and non-Gaussian
distributions. To characterize conditional independence, we introduce a score
based on integrated Hessian information from the joint log-density, and we
prove that this score upper bounds the conditional mutual information for a
general class of distributions. To compute the score, our algorithm SING
estimates the density using a deterministic coupling, induced by a triangular
transport map, and iteratively exploits sparse structure in the map to reveal
sparsity in the graph. For certain non-Gaussian datasets, we show that our
algorithm recovers the graph structure even with a biased approximation to the
density. Among other examples, we apply SING to learn the dependencies between
the states of a chaotic dynamical system with local interactions.
Comment: 40 pages, 12 figures.
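The Hessian score at the heart of this approach can be illustrated directly when the log-density is known in closed form (a simplification relative to the paper, where the density is estimated via a triangular transport map): Ω_{jk} = E[(∂²_{jk} log π)²] is estimated by Monte Carlo, and zero entries flag conditional independence. The density below is an illustrative non-Gaussian chain x0 - x1 - x2, and standard normal draws stand in for samples from π.

```python
import numpy as np

a = 0.3  # coupling strength in the illustrative density

def log_pi_hess(x):
    """Analytic Hessian of log pi(x) = -|x|^2/2 - a*(x0*x1)^2 - a*(x1*x2)^2."""
    H = -np.eye(3)
    H[0, 0] += -2 * a * x[1] ** 2
    H[1, 1] += -2 * a * (x[0] ** 2 + x[2] ** 2)
    H[2, 2] += -2 * a * x[1] ** 2
    H[0, 1] = H[1, 0] = -4 * a * x[0] * x[1]
    H[1, 2] = H[2, 1] = -4 * a * x[1] * x[2]
    H[0, 2] = H[2, 0] = 0.0  # x0 and x2 interact only through x1
    return H

# Monte Carlo estimate of the score Omega[j,k] = E[(d^2 log pi / dx_j dx_k)^2];
# standard normal samples stand in for samples from pi in this sketch.
rng = np.random.default_rng(0)
samples = rng.normal(size=(2000, 3))
Omega = np.mean([log_pi_hess(x) ** 2 for x in samples], axis=0)
```

The zero pattern of Ω recovers the chain graph: the (0, 2) entry vanishes identically while both adjacent-pair entries are strictly positive, matching the claim that the score upper-bounds (and here certifies the vanishing of) conditional mutual information.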