
    A foundation for synthesising programming language semantics

    Get PDF
    Programming or scripting languages used in real-world systems are seldom designed with a formal semantics in mind from the outset. Therefore, the first step for developing well-founded analysis tools for these systems is to reverse-engineer a formal semantics. This can take months or years of effort. Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging, as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning desugaring translation rules, mapping the language whose semantics we seek to a simplified, core version, whose semantics are much easier to write. The present thesis contains an analysis of their challenge, as well as the first steps towards a solution. Scaling methods with the size of the language is very difficult due to state space explosion, so this thesis proposes an incremental approach to learning the translation rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al., and re-formulates the problem, shifting the focus to the conditions for incremental learning. The central definition of the new formalisation is the desugaring extension problem, i.e. extending a set of established translation rules by synthesising new ones. In a synthesis algorithm, the choice of search space is important and non-trivial, as it needs to strike a good balance between expressiveness and efficiency. The rest of the thesis focuses on defining search spaces for translation rules via typing rules. Two prerequisites are required for comparing search spaces. The first is a series of benchmarks, a set of source and target languages equipped with intended translation rules between them. The second is an enumerative synthesis algorithm for efficiently enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties expected from a type system for ensuring that typed programs be efficiently enumerable. The thesis presents and empirically evaluates two search spaces. A baseline search space yields the first practical solution to the challenge. The second search space is based on a natural heuristic for translation rules, limiting the usage of variables so that they are used exactly once. I present a linear type system designed to efficiently enumerate translation rules, where this heuristic is enforced. Through informal analysis and empirical comparison to the baseline, I then show that using linear types can speed up the synthesis of translation rules by an order of magnitude.
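    The linearity heuristic described at the end of the abstract, that every metavariable of a candidate translation rule be used exactly once, can be made concrete with a toy enumerator. Everything below (the constructor signature, term representation, and depth bound) is an illustrative assumption rather than the thesis's actual search space; the sketch only shows how a linear constraint prunes the set of right-hand-side candidates.

```python
# Hypothetical target-language constructors: name -> arity.
CONSTRUCTORS = {"if0": 3, "add": 2, "neg": 1, "zero": 0}

def terms(depth, vars_avail):
    """Yield (term, leftover_vars): terms of bounded depth that use a
    subset of vars_avail, each at most once (affine). Callers can then
    demand that no variables are left over, i.e. linear usage."""
    # A variable consumes itself.
    for i, v in enumerate(vars_avail):
        yield v, vars_avail[:i] + vars_avail[i + 1:]
    if depth == 0:
        return
    for name, arity in CONSTRUCTORS.items():
        if arity == 0:
            yield (name,), vars_avail
            continue
        # Distribute the remaining variables across the arguments.
        def go(k, avail):
            if k == arity:
                yield (), avail
                return
            for arg, rest in terms(depth - 1, avail):
                for more, rest2 in go(k + 1, rest):
                    yield (arg,) + more, rest2
        for args, rest in go(0, vars_avail):
            yield (name,) + args, rest

def linear_candidates(depth, metavars):
    """Right-hand-side candidates using every metavariable exactly once."""
    return [t for t, leftover in terms(depth, list(metavars)) if not leftover]

# Example: candidates for a hypothetical rule with metavariables e1, e2.
cands = linear_candidates(2, ("e1", "e2"))
print(len(cands), cands[:5])
```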

    Classical and quantum algorithms for scaling problems

    Get PDF
    This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature. For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and computing the sum of a list of numbers. We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
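    For the commutative matrix scaling problem mentioned in the abstract, the standard classical baseline is Sinkhorn's alternating normalization. The sketch below is a generic textbook version of that iteration (the tolerance and iteration cap are arbitrary choices), not the thesis's interior-point framework or its quantum algorithms.

```python
import numpy as np

def sinkhorn_scale(A, iters=1000, tol=1e-9):
    """Approximately scale a strictly positive matrix A towards a doubly
    stochastic one: find positive vectors x, y such that diag(x) A diag(y)
    has unit row and column sums (classical Sinkhorn iteration)."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    x, y = np.ones(n), np.ones(m)
    S = A
    for _ in range(iters):
        x = 1.0 / (A @ y)        # fix row sums
        y = 1.0 / (A.T @ x)      # fix column sums
        S = np.diag(x) @ A @ np.diag(y)
        err = max(np.abs(S.sum(1) - 1).max(), np.abs(S.sum(0) - 1).max())
        if err < tol:
            break
    return x, y, S

# Example: scale a random strictly positive 4x4 matrix.
rng = np.random.default_rng(0)
x, y, S = sinkhorn_scale(rng.random((4, 4)) + 0.1)
print(S.sum(axis=1), S.sum(axis=0))
```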

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    The Diophantine problem in Chevalley groups

    Full text link
    In this paper we study the Diophantine problem in Chevalley groups $G_\pi(\Phi, R)$, where $\Phi$ is an indecomposable root system of rank $> 1$ and $R$ is an arbitrary commutative ring with $1$. We establish a variant of the double centralizer theorem for elementary unipotents $x_\alpha(1)$. This theorem is valid for arbitrary commutative rings with $1$. The result is key to showing that any one-parametric subgroup $X_\alpha$, $\alpha \in \Phi$, is Diophantine in $G$. Then we prove that the Diophantine problem in $G_\pi(\Phi, R)$ is polynomial-time equivalent (more precisely, Karp equivalent) to the Diophantine problem in $R$. This fact gives rise to a number of model-theoretic corollaries for specific types of rings. Comment: 44 pages

    Efficient Model Checking: The Power of Randomness

    Get PDF

    Strong Invariants Are Hard: On the Hardness of Strongest Polynomial Invariants for (Probabilistic) Programs

    Full text link
    We show that computing the strongest polynomial invariant for single-path loops with polynomial assignments is at least as hard as the Skolem problem, a famous problem whose decidability has been open for almost a century. While the strongest polynomial invariants are computable for affine loops, for polynomial loops the problem remained wide open. As an intermediate result of independent interest, we prove that reachability for discrete polynomial dynamical systems is Skolem-hard as well. Furthermore, we generalize the notion of invariant ideals and introduce moment invariant ideals for probabilistic programs. With this tool, we further show that the strongest polynomial moment invariant is (i) uncomputable, for probabilistic loops with branching statements, and (ii) Skolem-hard to compute for polynomial probabilistic loops without branching statements. Finally, we identify a class of probabilistic loops for which the strongest polynomial moment invariant is computable and provide an algorithm for it.
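    For background, the Skolem problem that these hardness results reduce from can be stated as follows; this is the standard folklore formulation, not text taken from the paper.

```latex
% Standard statement of the Skolem problem (background, not from the paper).
\textbf{Skolem problem.} Given integers $a_1, \dots, a_k$ and initial values
$u_0, \dots, u_{k-1} \in \mathbb{Z}$, consider the linear recurrence sequence
\[
  u_n \;=\; a_1 u_{n-1} + a_2 u_{n-2} + \cdots + a_k u_{n-k} \qquad (n \ge k).
\]
Decide whether $u_n = 0$ for some index $n$. Decidability is known only for
recurrences of small order (up to four); the general problem has been open
since the 1930s, which is why a reduction from it is regarded as strong
evidence of hardness.
```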

    Algorithmic aspects of immersibility and embeddability

    Full text link
    We analyze an algorithmic question about immersion theory: for which $m$, $n$, and $CAT = \mathbf{Diff}$ or $\mathbf{PL}$ is the question of whether an $m$-dimensional $CAT$-manifold is immersible in $\mathbb{R}^n$ decidable? As a corollary, we show that the smooth embeddability of an $m$-manifold with boundary in $\mathbb{R}^n$ is undecidable when $n - m$ is even and $11m \geq 10n + 1$. Comment: 20 pages, 1 figure. Revised in response to comments by several referees, no major changes in mathematical content

    A rewriting coherence theorem with applications in homotopy type theory

    Get PDF
    Higher-dimensional rewriting systems are tools to analyse the structure of formally reducing terms to normal forms, as well as to compare the different reduction paths that lead to those normal forms. This higher structure can be captured by finding a homotopy basis for the rewriting system. We show that the basic notions of confluence and wellfoundedness are sufficient to recursively build such a homotopy basis, with a construction reminiscent of an argument by Craig C. Squier. We then go on to translate this construction to the setting of homotopy type theory, where managing equalities between paths is important in order to construct functions which are coherent with respect to higher dimensions. Eventually, we apply the result to approximate a series of open questions in homotopy type theory, such as the characterisation of the homotopy groups of the free group on a set and the pushout of 1-types. This paper expands on our previous conference contribution Coherence via Wellfoundedness by laying out the construction in the language of higher-dimensional rewriting.
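    The role of confluence and wellfoundedness can be previewed with a toy Newman-style computation on a string rewriting system: for a terminating, confluent system, any two one-step choices rejoin at a common normal form, loosely analogous to the recursion the abstract describes. The rules and words below are purely illustrative and are not the paper's polygraphs or homotopy-basis construction.

```python
def one_steps(word, rules):
    """All words reachable from `word` by one application of a rule l -> r."""
    out = []
    for lhs, rhs in rules:
        i = word.find(lhs)
        while i != -1:
            out.append(word[:i] + rhs + word[i + len(lhs):])
            i = word.find(lhs, i + 1)
    return out

def normal_form(word, rules):
    """Rewrite until no rule applies (assumes the system terminates)."""
    succ = one_steps(word, rules)
    while succ:
        word = succ[0]
        succ = one_steps(word, rules)
    return word

# Toy terminating, confluent string rewriting system (Klein four group).
RULES = [("ba", "ab"), ("aa", ""), ("bb", "")]

w = "babab"
peaks = one_steps(w, RULES)
# Newman-style observation: for a confluent, wellfounded system every
# divergent one-step choice rejoins, so all peaks share one normal form.
print(peaks, {normal_form(p, RULES) for p in peaks})  # both reduce to 'b'
```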

    G\"odel-Dummett linear temporal logic

    Full text link
    We investigate a version of linear temporal logic whose propositional fragment is Gödel-Dummett logic (which is well known both as a superintuitionistic logic and a t-norm fuzzy logic). We define the logic using two natural semantics: first a real-valued semantics, where statements have a degree of truth in the real unit interval, and second a 'bi-relational' semantics. We then show that these two semantics indeed define one and the same logic: the statements that are valid for the real-valued semantics are the same as those that are valid for the bi-relational semantics. This Gödel temporal logic does not have any form of the finite model property for these two semantics: there are non-valid statements that can only be falsified on an infinite model. However, by using the technical notion of a quasimodel, we show that every falsifiable statement is falsifiable on a finite quasimodel, yielding an algorithm for deciding if a statement is valid or not. Later, we strengthen this decidability result by giving an algorithm that uses only a polynomial amount of memory, proving that Gödel temporal logic is PSPACE-complete. We also provide a deductive calculus for Gödel temporal logic, and show this calculus to be sound and complete for the above-mentioned semantics, so that all (and only) the valid statements can be proved with this calculus. Comment: arXiv admin note: substantial text overlap with arXiv:2205.00574, arXiv:2205.0518
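    The real-valued semantics of the propositional (Gödel-Dummett) fragment is easy to make concrete; the evaluator below implements the standard Gödel truth functions on [0, 1] as an illustration only, leaving out the temporal modalities and the paper's bi-relational and quasimodel machinery.

```python
def evaluate(formula, valuation):
    """Evaluate a propositional formula under Goedel-Dummett real-valued
    semantics: truth values live in [0, 1], conjunction is min, disjunction
    is max, and implication is 1 when the antecedent does not exceed the
    consequent and the value of the consequent otherwise."""
    op, *args = formula
    if op == "var":
        return valuation[args[0]]
    if op == "bot":
        return 0.0
    if op == "and":
        return min(evaluate(args[0], valuation), evaluate(args[1], valuation))
    if op == "or":
        return max(evaluate(args[0], valuation), evaluate(args[1], valuation))
    if op == "imp":
        a, b = evaluate(args[0], valuation), evaluate(args[1], valuation)
        return 1.0 if a <= b else b
    if op == "not":  # negation defined as implication into falsum
        return 1.0 if evaluate(args[0], valuation) == 0.0 else 0.0
    raise ValueError(f"unknown connective {op!r}")

# Excluded middle p OR NOT p is not valid (value 0.5 below), while the
# Dummett axiom (p -> q) OR (q -> p) always evaluates to 1.
p_or_not_p = ("or", ("var", "p"), ("not", ("var", "p")))
dummett = ("or", ("imp", ("var", "p"), ("var", "q")),
                 ("imp", ("var", "q"), ("var", "p")))
print(evaluate(p_or_not_p, {"p": 0.5}))         # 0.5
print(evaluate(dummett, {"p": 0.3, "q": 0.8}))  # 1.0
```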

    Prefix monoids of groups and right units of special inverse monoids

    Full text link
    A prefix monoid is a finitely generated submonoid of a finitely presented group generated by the prefixes of its defining relators. Important results of Guba (1997), and of Ivanov, Margolis and Meakin (2001), show how the word problem for certain one-relator monoids, and inverse monoids, can be reduced to solving the membership problem in prefix monoids of certain one-relator groups. Motivated by this, in this paper we study the class of prefix monoids of finitely presented groups. We obtain a complete description of this class of monoids. All monoids in this family are finitely generated, recursively presented and group-embeddable. Our results show that not every finitely generated recursively presented group-embeddable monoid is a prefix monoid, but for every such monoid if we take a free product with a suitably chosen free monoid of finite rank, then we do obtain a prefix monoid. Conversely, we prove that every prefix monoid arises in this way. Also, we show that the groups that arise as groups of units of prefix monoids are precisely the finitely generated recursively presented groups, while the groups that arise as Schützenberger groups of prefix monoids are exactly the recursively enumerable subgroups of finitely presented groups. We obtain an analogous result classifying the Schützenberger groups of monoids of right units of special inverse monoids. We also give some examples of right cancellative monoids arising as monoids of right units of finitely presented special inverse monoids, and show that not all right cancellative recursively presented monoids belong to this class. Comment: 22 pages
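    The opening definition is easy to make concrete: given a finite presentation, the generators of the prefix monoid are just the prefixes of the relator words. The helper below lists them for a toy presentation (the Klein bottle group, chosen here only as an illustration, not an example from the paper).

```python
def prefix_generators(relators):
    """Return all prefixes of the defining relators, written as tuples of
    letters; these generate the prefix monoid inside the presented group."""
    prefixes = set()
    for word in relators:
        for i in range(1, len(word) + 1):
            prefixes.add(tuple(word[:i]))
    return prefixes

# Toy example: the Klein bottle group < a, b | a b a b^-1 >,
# with "A"/"B" standing for the inverses of a/b.
print(sorted(prefix_generators([["a", "b", "a", "B"]])))
# [('a',), ('a', 'b'), ('a', 'b', 'a'), ('a', 'b', 'a', 'B')]
```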