
    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize results, and point to relevant references in the literature.
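
    As a concrete instance of what a continuous time analog model computes, consider Shannon's General Purpose Analog Computer, one of the models surveyed: it computes exactly the trajectories of polynomial ODE systems. The minimal Python sketch below (ours, not from the survey; the system and step size are illustrative assumptions) approximates such a trajectory by forward Euler integration.

        import math

        def simulate(f, y0, t_end, dt=1e-4):
            """Forward-Euler integration of the autonomous system y' = f(y).

            Continuous time analog models (e.g. the GPAC) compute such ODE
            trajectories directly; a digital machine can only approximate
            them by discretization, as done here.
            """
            y, t = list(y0), 0.0
            while t < t_end:
                dy = f(y)
                y = [yi + dt * dyi for yi, dyi in zip(y, dy)]
                t += dt
            return y

        # (sin t, cos t) solves y1' = y2, y2' = -y1: a classic GPAC-computable pair.
        s, c = simulate(lambda y: [y[1], -y[0]], [0.0, 1.0], math.pi / 2)
        print(round(s, 3), round(math.sin(math.pi / 2), 3))  # both ~1.0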

    Using Isabelle/HOL to verify first-order relativity theory

    Logicians at the Rényi Mathematical Institute in Budapest have spent several years developing versions of relativity theory (special, general, and other variants) based wholly on first-order logic, and have argued in favour of the physical decidability, via exploitation of cosmological phenomena, of formally unsolvable questions such as the Halting Problem and the consistency of set theory. As part of a joint project, researchers at Sheffield have recently started generating rigorous machine-verified versions of the Hungarian proofs, so as to demonstrate the soundness of their work. In this paper, we explain the background to the project and demonstrate a first-order proof in Isabelle/HOL of the theorem “no inertial observer can travel faster than light”. This approach to physical theories and physical computability has several pay-offs, because the precision with which physical theories need to be formalised within automated proof systems forces us to recognise subtly hidden assumptions.
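
    The theorem itself is genuinely first-order expressible. Roughly, in our notation (not the paper's Isabelle/HOL source), with IOb the inertial observers, wline_m(k) the worldline of observer k in m's coordinates, and space/time the spatial and temporal separations of two coordinate points:

        \[
        \forall m\,k \in \mathrm{IOb}\;\;
        \forall \bar{x}\,\bar{y} \in \mathrm{wline}_m(k)\;
        \bigl(\, \bar{x} \neq \bar{y} \;\rightarrow\;
        \mathrm{space}(\bar{x},\bar{y}) < c \cdot \mathrm{time}(\bar{x},\bar{y}) \,\bigr)
        \]

    That is, any two distinct events on an inertial observer's worldline are separated by less space than light would cover in the elapsed time.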

    Computing Haar Measures

    According to Haar's Theorem, every compact group $G$ admits a unique (regular, right- and) left-invariant Borel probability measure $\mu_G$. Let the Haar integral (of $G$) denote the functional $\int_G : \mathcal{C}(G) \ni f \mapsto \int f\,d\mu_G$ integrating any continuous function $f : G \to \mathbb{R}$ with respect to $\mu_G$. This generalizes, and recovers for the additive group $G = [0;1) \bmod 1$, the usual Riemann integral: computable (cmp. Weihrauch 2000, Theorem 6.4.1), and of computational cost characterizing the complexity class $\#\mathsf{P}_1$ (cmp. Ko 1991, Theorem 5.32). We establish that in fact every computably compact computable metric group renders the Haar integral computable: once asserting computability using an elegant synthetic argument, exploiting uniqueness in a computably compact space of probability measures; and once presenting and analyzing an explicit, imperative algorithm based on 'maximum packings' with rigorous error bounds and guaranteed convergence. Regarding computational complexity, for the groups $\mathcal{SO}(3)$ and $\mathcal{SU}(2)$ we reduce the Haar integral to and from Euclidean/Riemann integration; in particular, both also characterize $\#\mathsf{P}_1$. Implementation and empirical evaluation using the iRRAM C++ library for exact real computation confirms the (thus necessary) exponential runtime.
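
    For intuition (our toy example, not the paper's algorithm): on the circle group SO(2) the Haar probability measure is uniform in the rotation angle, so the Haar integral collapses to an ordinary Riemann sum. The Python sketch below stands in for the paper's 'maximum packing' method, which instead covers a general computably compact group by finitely many balls with rigorous error bounds.

        import math

        def haar_integral_so2(f, n=10_000):
            """Approximate the Haar integral over SO(2) of f(theta).

            Haar measure on SO(2) is the uniform distribution on the angle,
            so an equidistant average converges to the invariant integral.
            """
            return sum(f(2 * math.pi * k / n) for k in range(n)) / n

        # Haar average of the matrix entry cos(theta) of a rotation is exactly 0.
        print(haar_integral_so2(math.cos))  # ~0.0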

    PCC '06 / 5th International Workshop on Proof, Computation, Complexity, Ilmenau, July 24–25, 2006.


    Lower bounds on the redundancy in computations from random oracles via betting strategies with restricted wagers

    The Kučera–Gács theorem is a landmark result in algorithmic randomness asserting that every real is computable from a Martin-Löf random real. If the computation of the first $n$ bits of a sequence requires $n + h(n)$ bits of the random oracle, then $h$ is the redundancy of the computation. Kučera implicitly achieved redundancy $n \log n$, while Gács used a more elaborate coding procedure which achieves redundancy $\sqrt{n} \log n$. A similar bound is implicit in the later proof by Merkle and Mihailović. In this paper we obtain optimal strict lower bounds on the redundancy in computations from Martin-Löf random oracles. We show that any nondecreasing computable function $g$ such that $\sum_n 2^{-g(n)} = \infty$ is not a general upper bound on the redundancy in computations from Martin-Löf random oracles. In fact, there exists a real $X$ such that the redundancy $g$ of any computation of $X$ from a Martin-Löf random oracle satisfies $\sum_n 2^{-g(n)} < \infty$. Moreover, the class of such reals is comeager and includes a $\Delta^0_2$ real as well as all weakly 2-generic reals. On the other hand, it has recently been shown that any real is computable from a Martin-Löf random oracle with redundancy $g$, provided that $g$ is a computable nondecreasing function such that $\sum_n 2^{-g(n)} < \infty$. Hence our lower bound is optimal, and excludes many slow-growing functions such as $\log n$ from bounding the redundancy in computations from random oracles for a large class of reals. Our results are obtained as an application of a theory of effective betting strategies with restricted wagers which we develop.
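
    To make the series criterion concrete (a worked instance, taking logarithms base 2): $g(n) = \log n$ fails the summability condition and is therefore excluded as a redundancy bound, whereas $g(n) = 2 \log n$ satisfies it and hence always suffices:

        \[
        \sum_{n} 2^{-\log_2 n} = \sum_{n} \frac{1}{n} = \infty,
        \qquad
        \sum_{n} 2^{-2\log_2 n} = \sum_{n} \frac{1}{n^2} < \infty.
        \]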

    Coinductive Formal Reasoning in Exact Real Arithmetic

    In this article we present a method for formally proving the correctness of the lazy algorithms for computing homographic and quadratic transformations -- of which field operations are special cases -- on a representation of real numbers by coinductive streams. The algorithms work on coinductive streams of Möbius maps and form the basis of the Edalat–Potts exact real arithmetic. We use the machinery of the Coq proof assistant for coinductive types to present the formalisation. The formalised algorithms are only partially productive, i.e., they do not output provably infinite streams for all possible inputs. We show how to deal with this partiality in the presence of syntactic restrictions posed by the constructive type theory of Coq. Furthermore we show that the type theoretic techniques that we develop are compatible with the semantics of the algorithms as continuous maps on real numbers. The resulting Coq formalisation is available for public download. Comment: 40 pages.
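
    The core data structure is easy to sketch outside Coq: a Möbius map is a 2x2 integer matrix acting by x -> (ax+b)/(cx+d), composition is matrix multiplication, and a digit can be emitted at the head of the output stream once the current map's image fits inside that digit's image ('absorption'). The Python below is our illustrative sketch of this test, assuming maps are monotone on the base interval [0, 1]; it is not the formalised Coq development.

        from fractions import Fraction

        def apply_mobius(M, x):
            """Apply M = (a, b, c, d) to x: the homographic map (a*x + b)/(c*x + d)."""
            a, b, c, d = M
            return (a * x + b) / (c * x + d)

        def compose(M, N):
            """Composition of Moebius maps = 2x2 integer matrix multiplication."""
            a, b, c, d = M
            e, f, g, h = N
            return (a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h)

        def can_emit(M, D, lo=Fraction(0), hi=Fraction(1)):
            """True when M's image of [lo, hi] lies inside D's image.

            Then the head digit D of the output stream is determined and the
            algorithm continues lazily with D^(-1) o M as its new state; inputs
            on which this test never succeeds are where partial productivity bites.
            """
            img = sorted([apply_mobius(M, lo), apply_mobius(M, hi)])
            dig = sorted([apply_mobius(D, lo), apply_mobius(D, hi)])
            return dig[0] <= img[0] and img[1] <= dig[1]

        D0 = (1, 0, 0, 2)                               # x -> x/2, a 'left half' digit
        print(can_emit(compose(D0, (1, 0, 0, 1)), D0))  # True: image is [0, 1/2]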

    Dimension Extractors and Optimal Decompression

    A *dimension extractor* is an algorithm designed to increase the effective dimension -- i.e., the amount of computational randomness -- of an infinite binary sequence, in order to turn a "partially random" sequence into a "more random" sequence. Extractors are exhibited for various effective dimensions, including constructive, computable, space-bounded, time-bounded, and finite-state dimension. Using similar techniques, the Kučera–Gács theorem is examined from the perspective of decompression, by showing that every infinite sequence S is Turing reducible to a Martin-Löf random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S, which is shown to be the optimal ratio of query bits to computed bits achievable with Turing reductions. The extractors and decompressors that are developed lead directly to new characterizations of some effective dimensions in terms of optimal decompression by Turing reductions. Comment: This report was combined with a different conference paper, "Every Sequence is Decompressible from a Random One" (cs.IT/0511074, at http://dx.doi.org/10.1007/11780342_17), and both titles were changed, with the conference paper incorporated as Section 5 of this new combined paper. The combined paper was accepted to the journal Theory of Computing Systems, as part of a special issue of invited papers from the second conference on Computability in Europe, 2006.
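
    For reference (a standard fact, not restated in the abstract): by Mayordomo's characterization, the constructive dimension of a sequence S can be written in terms of prefix-free Kolmogorov complexity K, with S restricted to its first n bits; this is the quantity that the optimal oracle-use ratio above converges to:

        \[
        \dim(S) \;=\; \liminf_{n \to \infty} \frac{K(S \upharpoonright n)}{n}
        \]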