    Computing Haar Measures

    According to Haar's Theorem, every compact group $G$ admits a unique (regular, right- and) left-invariant Borel probability measure $\mu_G$. Let the Haar integral (of $G$) denote the functional $\int_G:\mathcal{C}(G)\ni f\mapsto \int f\,d\mu_G$ integrating any continuous function $f:G\to\mathbb{R}$ with respect to $\mu_G$. This generalizes, and recovers for the additive group $G=[0;1)\bmod 1$, the usual Riemann integral: computable (cmp. Weihrauch 2000, Theorem 6.4.1), and of computational cost characterizing the complexity class $\#\mathrm{P}_1$ (cmp. Ko 1991, Theorem 5.32). We establish that in fact every computably compact computable metric group renders the Haar integral computable: once asserting computability using an elegant synthetic argument, exploiting uniqueness in a computably compact space of probability measures; and once presenting and analyzing an explicit, imperative algorithm based on 'maximum packings' with rigorous error bounds and guaranteed convergence. Regarding computational complexity, for the groups $\mathcal{SO}(3)$ and $\mathcal{SU}(2)$ we reduce the Haar integral to and from Euclidean/Riemann integration. In particular, both also characterize $\#\mathrm{P}_1$. Implementation and empirical evaluation using the iRRAM C++ library for exact real computation confirms the (thus necessary) exponential runtime.
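    The paper's rigorous algorithm is based on maximum packings inside exact real arithmetic (iRRAM). As a rough, non-rigorous illustration of the two settings named above (names and sample sizes are mine, not the paper's), the following Python sketch approximates the Haar integral of the circle group $[0;1)\bmod 1$ by an equispaced Riemann sum, and that of $\mathcal{SO}(3)$ by Monte Carlo averaging over Haar-distributed rotations obtained from uniformly random unit quaternions.

```python
import numpy as np

def haar_integral_circle(f, n=10_000):
    """Riemann-sum approximation of the Haar integral on ([0,1), + mod 1)."""
    xs = (np.arange(n) + 0.5) / n                      # equispaced sample points
    return float(np.mean(f(xs)))

def quaternion_to_matrix(q):
    """Rotation matrix of the unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def haar_integral_so3(f, n=20_000, seed=0):
    """Monte Carlo approximation of the Haar integral on SO(3): uniformly
    random unit quaternions yield Haar-distributed rotation matrices."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n):
        q = rng.normal(size=4)
        q /= np.linalg.norm(q)
        total += f(quaternion_to_matrix(q))
    return total / n

print(haar_integral_circle(lambda x: np.cos(2 * np.pi * x) ** 2))  # ~ 1/2
print(haar_integral_so3(lambda R: np.trace(R) ** 2))               # ~ 1 (character orthogonality)
```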

    Computability in basic quantum mechanics

    The basic notions of quantum mechanics are formulated in terms of a separable infinite-dimensional Hilbert space H. In terms of the Hilbert lattice L of closed linear subspaces of H, the notions of state and observable can be formulated as kinds of measures as in [21]. The aim of this paper is to show that there is a good notion of computability for these data structures in the sense of Weihrauch’s Type Two Effectivity (TTE) [26]. Instead of explicitly exhibiting admissible representations for the data types under consideration, we show that they live within the category QCB0, which is equivalent to the category AdmRep of admissible representations and continuously realizable maps between them. For this purpose, in the case of observables, we have to replace measures by valuations, which allows us to prove an effective version of von Neumann’s Spectral Theorem.
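    As a toy, finite-dimensional illustration of a state viewed as a measure on the lattice of closed subspaces (the paper itself works in infinite dimension and passes to valuations), the following sketch assigns to each orthogonal projection P the value tr(ρP); the function names and the example state are mine, not from the paper.

```python
import numpy as np

# A density matrix rho assigns to each closed subspace, represented by its
# orthogonal projection P, the value tr(rho P).  This map is 0 on the zero
# subspace, 1 on the whole space, and additive on orthogonal subspaces:
# a "measure on the Hilbert lattice".

def state_measure(rho, P):
    """Probability assigned by the state rho to the subspace with projection P."""
    return float(np.real(np.trace(rho @ P)))

def projector(vectors):
    """Orthogonal projection onto the span of the given vectors."""
    Q, _ = np.linalg.qr(np.column_stack(vectors))
    return Q @ Q.conj().T

# A qubit state and two orthogonal one-dimensional subspaces.
rho = np.array([[0.75, 0.0], [0.0, 0.25]])
P0 = projector([np.array([1.0, 0.0])])
P1 = projector([np.array([0.0, 1.0])])

print(state_measure(rho, P0))        # 0.75
print(state_measure(rho, P1))        # 0.25
print(state_measure(rho, P0 + P1))   # 1.0  (additivity on orthogonal subspaces)
```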

    Characterisation of the Set of Ground States of Uniformly Chaotic Finite-Range Lattice Models

    Chaotic dependence on temperature refers to the phenomenon of divergence of Gibbs measures as the temperature approaches a certain value. Models with chaotic behaviour near zero temperature have multiple ground states, none of which are stable. We study the class of uniformly chaotic models, that is, those in which, as the temperature goes to zero, every choice of Gibbs measures accumulates on the entire set of ground states. We characterise the possible sets of ground states of uniformly chaotic finite-range models up to computable homeomorphisms. Namely, we show that the set of ground states of every model with finite-range and rational-valued interactions is topologically closed and connected, and belongs to the class $\Pi_2$ of the arithmetical hierarchy. Conversely, every $\Pi_2$-computable, topologically closed and connected set of probability measures can be encoded (via a computable homeomorphism) as the set of ground states of a uniformly chaotic two-dimensional model with finite-range, rational-valued interactions. Comment: 46 pages, 12 figures.
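    For orientation only (a finite toy system cannot exhibit the chaotic behaviour studied in the paper), the sketch below shows the two basic objects in the abstract: the Gibbs measure at temperature T, which weights a configuration s by exp(-H(s)/T), and the ground states, on which that measure concentrates as T goes to 0. The energy values are arbitrary choices of mine.

```python
import numpy as np

def gibbs_measure(energies, T):
    """Gibbs distribution exp(-H/T) / Z over a finite set of configurations."""
    energies = np.asarray(energies, dtype=float)
    weights = np.exp(-(energies - energies.min()) / T)   # shift for numerical stability
    return weights / weights.sum()

H = [0.0, 0.0, 0.3, 1.0]          # two degenerate ground states (indices 0 and 1)
for T in [1.0, 0.1, 0.01]:
    print(T, np.round(gibbs_measure(H, T), 4))
# As T decreases, the mass concentrates on the set of ground states.
```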

    Probabilistic programming interfaces for random graphs: Markov categories, graphons, and nominal sets

    We study semantic models of probabilistic programming languages over graphs, and establish a connection to graphons from graph theory and combinatorics. We show that every well-behaved equational theory for our graph probabilistic programming language corresponds to a graphon, and conversely, every graphon arises in this way. We provide three constructions for showing that every graphon arises from an equational theory. The first is an abstract construction, using Markov categories and monoidal indeterminates. The second and third are more concrete. The second is in terms of traditional measure theoretic probability, which covers ‘black-and-white’ graphons. The third is in terms of probability monads on the nominal sets of Gabbay and Pitts. Specifically, we use a variation of nominal sets induced by the theory of graphs, which covers Erdős-Rényi graphons. In this way, we build new models of graph probabilistic programming from graphons.
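    A graphon is a symmetric measurable function W : [0,1]² → [0,1], and it determines a random-graph model. The sketch below is not the paper's categorical or nominal-sets construction; it only illustrates that sampling discipline (names and example graphons are mine): each vertex draws a latent uniform label, and each edge appears independently with probability given by W. A constant graphon recovers the Erdős–Rényi model, while a 'black-and-white' graphon takes only the values 0 and 1.

```python
import numpy as np

def sample_from_graphon(W, n, rng=None):
    """Sample the adjacency matrix of an n-vertex W-random graph."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)                          # latent vertex labels
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            adj[i, j] = adj[j, i] = rng.uniform() < W(u[i], u[j])
    return adj

erdos_renyi = lambda x, y: 0.3                       # constant graphon: G(n, 0.3)
two_block = lambda x, y: 0.9 if (x < 0.5) != (y < 0.5) else 0.05

print(sample_from_graphon(erdos_renyi, 6, np.random.default_rng(1)))
print(sample_from_graphon(two_block, 6, np.random.default_rng(2)))
```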

    Rigorous computations of dynamical quantities

    This thesis is concerned with rigorous computation of dynamical quantities. In particular, we provide rigorous computation of diffusion coefficients for uniformly expanding maps of the interval. Moreover, we provide a rigorous computational scheme for linear response, and we apply it in the case of uniformly expanding circle maps. Our results have been implemented successfully on a computer. Examples are included to illustrate the computer implementation.
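    A standard non-rigorous starting point for such computations is the Ulam method: discretise the transfer operator of an expanding map on n bins and take the leading eigenvector as an approximate invariant density. The sketch below illustrates this on a perturbed doubling map of my choosing; unlike the thesis, it carries no error bounds whatsoever.

```python
import numpy as np

def T(x):
    """A uniformly expanding circle map (perturbed doubling map, |T'| >= 1.5)."""
    return (2.0 * x + 0.5 * x * (1.0 - x)) % 1.0

def ulam_matrix(T, n=400, samples_per_bin=200, seed=0):
    """P[i, j] ~ fraction of bin j whose image under T lands in bin i."""
    rng = np.random.default_rng(seed)
    P = np.zeros((n, n))
    for j in range(n):
        xs = (j + rng.uniform(size=samples_per_bin)) / n
        idx = np.minimum((T(xs) * n).astype(int), n - 1)
        np.add.at(P[:, j], idx, 1.0 / samples_per_bin)
    return P

P = ulam_matrix(T)
vals, vecs = np.linalg.eig(P)
density = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
density *= len(density) / density.sum()       # normalise so the density integrates to 1
print(density[:10])                           # approximate invariant density on the first bins
```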

    Convex Optimization Techniques for Geometric Covering Problems

    The present thesis is a commencement of a generalization of covering results in specific settings, such as the Euclidean space or the sphere, to arbitrary compact metric spaces. In particular we consider coverings of compact metric spaces $(X,d)$ by balls of radius $r$. We are interested in the minimum number of such balls needed to cover $X$, denoted by $\mathcal{N}(X,r)$. For finite $X$ this problem coincides with an instance of the combinatorial set cover problem, which is $\mathrm{NP}$-complete. We illustrate approximation techniques based on the moment method of Lasserre for finite graphs and generalize these techniques to compact metric spaces $X$ to obtain upper and lower bounds for $\mathcal{N}(X,r)$.

    The upper bounds in this thesis follow from the application of a greedy algorithm on the space $X$. Its approximation quality is obtained by a generalization of the analysis of Chvátal's algorithm for the weighted case of set cover. We apply this greedy algorithm to the spherical case $X=S^n$ and retrieve the best non-asymptotic bound of Böröczky and Wintsche. Additionally, the algorithm can be used to determine coverings of Euclidean space with arbitrary measurable objects having non-empty interior. The quality of these coverings slightly improves a bound of Naszódi.

    For the lower bounds we develop a sequence of bounds $\mathcal{N}^t(X,r)$ that converge after finitely many (say $\alpha\in\mathbb{N}$) steps: $\mathcal{N}^1(X,r)\leq \ldots \leq \mathcal{N}^\alpha(X,r)=\mathcal{N}(X,r)$. The drawback of this sequence is that the bounds $\mathcal{N}^t(X,r)$ are increasingly difficult to compute, since they are the objective values of infinite-dimensional conic programs whose number of constraints and dimension of underlying cones grow according to $t$. We show that these programs satisfy strong duality and derive a finite-dimensional semidefinite program to approximate $\mathcal{N}^2(S^2,r)$ to arbitrary precision. Our results rely in part on the moment methods developed by de Laat and Vallentin for the packing problem on topological packing graphs. However, in the covering problem we have to deal with two types of constraints instead of one type as in packing problems, and consequently additional work is required.
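    A minimal sketch of the greedy covering idea mentioned above, specialised to a finite point set (the thesis analyses it for general compact metric spaces and weighted set cover; the point cloud and radius below are arbitrary choices of mine): repeatedly pick the ball of radius r that covers the largest number of still-uncovered points.

```python
import numpy as np

def greedy_ball_cover(points, r):
    """Return indices of centers whose radius-r balls cover all points (greedy heuristic)."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    covers = dists <= r                      # covers[c, p]: ball centered at c contains p
    uncovered = np.ones(len(points), dtype=bool)
    centers = []
    while uncovered.any():
        gains = (covers & uncovered).sum(axis=1)   # newly covered points per candidate center
        c = int(np.argmax(gains))
        centers.append(c)
        uncovered &= ~covers[c]
    return centers

rng = np.random.default_rng(0)
pts = rng.uniform(size=(200, 2))             # 200 random points in the unit square
centers = greedy_ball_cover(pts, r=0.2)
print(len(centers), "balls of radius 0.2 suffice (greedy upper bound)")
```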

    Computation and Consistent Estimation of Stationary Optimal Transport Plans

    Informally, the optimal transport (OT) problem is to align, or couple, two distributions of interest as best as possible with respect to some prespecified cost. A coupling that achieves the minimum cost among all couplings is referred to as an OT plan; the cost of the OT plan is referred to as the OT cost. Researchers in statistics and machine learning have expended a great deal of effort to understand the properties of OT plans and costs. The motivation for this work stems partly from the fact that, unlike many other divergence measures and metrics between distributions, OT plans and costs describe relationships between distributions in a manner that respects the geometry of the underlying space (by way of the specified cost). However, this advantage does not necessarily carry over when standard OT techniques are applied to distributions with specific structure. In the case that the two distributions describe stationary stochastic processes, the OT problem may ignore the differences in the sequential dependence of either process. One must find a way to make the OT problem account for the stationary dependence of the marginal processes. In this thesis, we study OT for stationary processes, a field that we refer to as stationary optimal transport. Through example and theory, we argue that when applying OT to stationary processes, one should incorporate the stationarity into the problem directly -- constraining the set of allowed transport plans to those that are stationary themselves. In this way, we only consider transport plans that respect the dependence structure of the marginal processes. We study this constrained OT problem from statistical and computational perspectives, with an eye toward applications in machine learning and data science. In particular, we develop algorithms for computing stationary OT plans of Markov chains, extend these tools for Markov OT to the alignment and comparison of weighted graphs, and propose estimates of stationary OT plans based on finite sequences of observations. We build upon existing techniques in OT as well as draw from a variety of fields including Markov decision processes, graph theory, and ergodic theory. In doing this, we uncover new perspectives on OT and pave the way for additional applications and approaches in future work.
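    For reference, the classical (unconstrained) discrete OT problem the abstract starts from is a linear program: given marginals p, q and a cost matrix C, find a coupling π minimising ⟨C, π⟩. The sketch below solves that baseline problem with scipy's LP solver; it does not impose the stationarity constraint that is the subject of the thesis, and the function name and toy data are mine.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_transport_plan(p, q, C):
    """Solve min_pi <C, pi>  s.t.  pi 1 = p,  pi^T 1 = q,  pi >= 0."""
    m, n = C.shape
    A_eq, b_eq = [], []
    for i in range(m):                      # row-marginal constraints
        row = np.zeros((m, n)); row[i, :] = 1
        A_eq.append(row.ravel()); b_eq.append(p[i])
    for j in range(n):                      # column-marginal constraints
        col = np.zeros((m, n)); col[:, j] = 1
        A_eq.append(col.ravel()); b_eq.append(q[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.x.reshape(m, n), res.fun     # OT plan and OT cost

p = np.array([0.5, 0.5])
q = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])                  # zero cost for staying in place
plan, cost = optimal_transport_plan(p, q, C)
print(plan)     # ~ [[0.25, 0.25], [0.0, 0.5]]
print(cost)     # ~ 0.25
```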

    Bayesian probabilistic numerical methods

    The increasing complexity of computer models used to solve contemporary inference problems has been set against a decreasing rate of improvement in processor speed in recent years. As a result, in many of these problems numerical error is a challenge for practitioners. However, while there has been a recent push towards rigorous quantification of uncertainty in inference problems based upon computer models, numerical error is still largely required to be driven down to a level at which its impact on inferences is negligible. Probabilistic numerical methods have been proposed to alleviate this; these are a class of numerical methods that return probabilistic uncertainty quantification for their numerical error. The attraction of such methods is clear: if numerical error in the computer model and uncertainty in an inference problem are quantified in a unified framework, then careful tuning of numerical methods to mitigate the impact of numerical error on inferences could become unnecessary. In this thesis we introduce the class of Bayesian probabilistic numerical methods, whose uncertainty has a strict and rigorous Bayesian interpretation. A number of examples of conjugate Bayesian probabilistic numerical methods are presented before we present analysis and algorithms for the general case, in which the posterior distribution does not possess a closed form. We conclude by studying how these methods can be rigorously composed to yield Bayesian pipelines of computation. Throughout we present applications of the developed methods to real-world inference problems, and indicate that the uncertainty quantification provided by these methods can be of significant practical use.
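    One of the simplest conjugate examples of this class is Bayesian quadrature: place a Gaussian-process prior on the integrand, condition on finitely many evaluations, and read off a Gaussian posterior over the value of the integral, i.e. an estimate together with quantified numerical uncertainty. The sketch below uses a Brownian-motion covariance on [0,1] purely because its kernel integrals have closed forms; the kernel, nodes, and function names are my choices, not taken from the thesis.

```python
import numpy as np

def bayesian_quadrature(f, nodes, jitter=1e-10):
    """Gaussian posterior over I = integral_0^1 f(x) dx under a zero-mean GP prior
    on f with Brownian-motion covariance k(x, x') = min(x, x')."""
    nodes = np.asarray(nodes, dtype=float)
    y = f(nodes)
    K = np.minimum.outer(nodes, nodes) + jitter * np.eye(len(nodes))
    z = nodes - nodes**2 / 2                 # z_i = integral_0^1 min(x, x_i) dx
    w = np.linalg.solve(K, z)                # quadrature weights K^{-1} z
    mean = w @ y                             # posterior mean of the integral
    var = 1.0 / 3.0 - w @ z                  # prior term: double integral of min(x, x') is 1/3
    return mean, var

f = lambda x: np.sin(3.0 * x)
mean, var = bayesian_quadrature(f, np.linspace(0.05, 0.95, 10))
print(mean, np.sqrt(max(var, 0.0)))          # estimate of integral (~0.6633) with posterior sd
```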