
    Denotational Fixed-Point Semantics for Constructive Scheduling of Synchronous Concurrency

    The synchronous model of concurrent computation (SMoCC) is well established for programming languages in the domain of safety-critical reactive and embedded systems. Translated into mainstream C/Java programming, the SMoCC corresponds to a cyclic execution model in which concurrent threads are synchronised on a logical clock that cuts system computation into a sequence of macro-steps. A causality analysis verifies the existence of a schedule on memory accesses to ensure each macro-step is deadlock-free and determinate. We introduce an abstract semantic domain I(D, P) and an associated denotational fixed-point semantics for reasoning about concurrent and sequential variable accesses within a synchronous cycle-based model of computation. We use this domain for a new and extended behavioural definition of Berry's causality analysis in terms of approximation intervals. The domain I(D, P) extends the domain I(D) from our previous work and fixes a mistake in the treatment of initialisations. Based on this fixed-point semantics, the notion of Input Berry-constructiveness (IBC) for synchronous programs is proposed. This new IBC class lies properly between strong (SBC) and normal Berry-constructiveness (BC) defined in previous work. SBC and BC are two ways to interpret the standard constructive semantics of synchronous programming, as exemplified by imperative SMoCC languages such as Esterel or Quartz. SBC is often too restrictive as it requires all variables to be initialised by the program. BC can be too permissive because it initialises all variables to a fixed value, by default. When the initialisation happens through the memory, e.g., when carrying values from one synchronous tick to the next, IBC is more appropriate. IBC links two levels of execution, the macro-step level and the micro-step level. We prove that the denotational fixed-point analysis for IBC, and hence Berry's causality analysis, is sound with respect to operational micro-level scheduling. The denotational model can thus be viewed as a compositional presentation of a synchronous scheduling strategy that ensures reactiveness and determinacy for imperative concurrent programming.
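
    The causality analysis described above is, at bottom, a least fixed point computed over a domain of partial information. Below is a minimal sketch of that idea, assuming a flat three-valued signal domain rather than the paper's I(D, P) interval domain; the program, signal names, and helper functions are invented for illustration.

```python
# A toy constructive evaluation (not the paper's I(D, P) domain): signal
# equations are iterated from "undecided" over the flat domain
# {BOT, ABSENT, PRESENT} until they stabilise (Kleene iteration).

BOT, ABSENT, PRESENT = "?", "0", "1"

def const(v):
    return lambda env: v

def read(sig):
    return lambda env: env[sig]

def ite(cond, then_, else_):
    """Branch only once the condition is decided; stay BOT otherwise."""
    def run(env):
        c = cond(env)
        if c == BOT:
            return BOT
        return then_(env) if c == PRESENT else else_(env)
    return run

def fixpoint(eqs):
    """Iterate all signal equations from BOT until nothing changes."""
    env = {sig: BOT for sig in eqs}
    while True:
        new = {sig: f(env) for sig, f in eqs.items()}
        if new == env:
            return env
        env = new

# "emit A"  in parallel with  "present A then emit B else emit C":
eqs = {
    "A": const(PRESENT),
    "B": ite(read("A"), const(PRESENT), const(ABSENT)),
    "C": ite(read("A"), const(ABSENT), const(PRESENT)),
}
print(fixpoint(eqs))   # {'A': '1', 'B': '1', 'C': '0'} -- all signals decided
```

    In this simplified setting, a program is causally constructive exactly when the fixed point leaves no signal undecided.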

    Foundations of Software Science and Computation Structures

    This open access book constitutes the proceedings of the 22nd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The 29 papers presented in this volume were carefully reviewed and selected from 85 submissions. They deal with foundational research with a clear significance for software science.

    Approximate State Reduction of Fuzzy Finite Automata

    In this paper we introduce a new type of approximate state reduction where the behaviors of the reduced and the original automaton do not have to be identical, but must match on all words of length less than or equal to some given natural number. We provide four methods for performing such reductions.
    Comment: In Proceedings AFL 2023, arXiv:2309.0112
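
    As a toy illustration of the acceptance criterion used here (not of the paper's four reduction methods themselves), the sketch below models a fuzzy automaton over the max-min algebra as an initial vector, one transition matrix per letter, and a final vector, and compares two automata by brute force on all words of length at most k. The data structures and example automata are invented.

```python
from itertools import product

def maxmin_vecmat(v, M):
    """Propagate state weights v through one max-min transition matrix M."""
    return [max(min(v[i], M[i][j]) for i in range(len(v)))
            for j in range(len(M[0]))]

def weight(aut, word):
    """Degree to which the fuzzy automaton aut accepts the given word."""
    init, delta, final = aut
    v = init
    for letter in word:
        v = maxmin_vecmat(v, delta[letter])
    return max(min(v[i], final[i]) for i in range(len(v)))

def k_equivalent(a1, a2, alphabet, k):
    """Check that both automata agree on every word of length <= k."""
    return all(weight(a1, w) == weight(a2, w)
               for n in range(k + 1)
               for w in product(alphabet, repeat=n))

# Two duplicate states merged into one; these behaviours happen to agree
# on all lengths, whereas approximate reductions only promise lengths <= k.
A1 = ([1.0, 0.0], {"a": [[0.5, 0.5], [0.5, 0.5]]}, [0.8, 0.8])
A2 = ([1.0], {"a": [[0.5]]}, [0.8])
print(k_equivalent(A1, A2, "a", 3))   # True
```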

    Presburger Arithmetic: From Automata to Formulas

    Presburger arithmetic is the first-order theory of the integers with addition and ordering, but without multiplication. This theory is decidable and the sets it defines admit several different representations, including formulas, generators, and finite automata, the latter being the focus of this thesis. Finite-automata representations of Presburger sets work by encoding numbers as words and representing sets by automata-defined languages. With this representation, set operations are easily computable as automata operations, and minimized deterministic automata are a canonical representation of Presburger sets. However, automata-based representations are somewhat opaque and do not allow all operations to be performed efficiently. An ideal situation would be to be able to move easily between formula-based and automata-based representations but, while building an automaton from a formula is a well-understood process, moving the other way is a much more difficult problem that has only attracted attention fairly recently. The main results of this thesis are new algorithms for extracting information about Presburger-definable sets represented by finite automata. More precisely, we present algorithms that take as input a finite automaton representing a Presburger-definable set S and compute in polynomial time the affine hull over Q or over Z of the set S, i.e., the smallest set defined by a conjunction of linear equations (and congruence relations in Z) which includes S. Also, we present an algorithm that takes as input a deterministic finite automaton representing the integer elements of a polyhedron P and computes a quantifier-free formula corresponding to this set. The algorithms rely on a very detailed analysis of the scheme used for encoding integer vectors, and this analysis sheds light on some structural properties of finite automata representing Presburger-definable sets. The algorithms presented have been implemented and the results are encouraging: automata with more than 100,000 states are handled in seconds.
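
    A small worked instance of the encoding, under simplifying assumptions: a single natural number with bits read most-significant first (actual Presburger tools encode integer vectors, typically least-significant-bit first). The minimal DFA for the Presburger-definable set of multiples of 3 keeps the residue of the prefix read so far as its state.

```python
def accepts_mult3(bits):
    """Run the 3-state DFA for { x in N : x = 0 (mod 3) } on MSB-first bits."""
    state = 0                        # residue mod 3 of the prefix read so far
    for b in bits:
        state = (2 * state + b) % 3  # appending a bit doubles the value, adds b
    return state == 0

def encode(x):
    """MSB-first binary encoding of a natural number as a list of bits."""
    return [int(c) for c in bin(x)[2:]]

assert all(accepts_mult3(encode(x)) == (x % 3 == 0) for x in range(200))
```

    Minimizing such automata yields the canonical representation mentioned above; the thesis's algorithms run in the opposite direction, recovering formulas and affine hulls from automata of this kind.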

    Model theory of partially random structures

    Many interactions between mathematical objects, e.g., the interaction between the set of primes and the additive structure of N, can be usefully thought of as random modulo some obvious obstructions. In the first part of this thesis, we document several such situations, show that the randomness in these interactions can be captured using first-order logic, and deduce in consequence many model-theoretic properties of the corresponding structures. The second part of this thesis develops a framework to study the aforementioned situations uniformly, shows that many examples of interest in model theory fit into this framework, and recovers many known model-theoretic phenomena about these examples from our results.

    Automatic program analysis using Max-SMT

    This thesis addresses the development of techniques to build fully automatic tools for analyzing sequential programs written in imperative languages like C or C++. To reason about programs, the approach taken in this thesis follows the constraint-based method used in program analysis. The idea of the constraint-based method is to consider a template for candidate invariant properties, e.g., linear conjunctions of inequalities. These templates involve both program variables and parameters whose values are initially unknown and have to be determined so as to ensure invariance. To this end, the conditions on inductive invariants are expressed by means of constraints (hence the name of the approach) on the unknowns. Any solution to these constraints then yields an invariant. In particular, if linear inequalities are taken as target invariants, the conditions can be transformed into arithmetic constraints over the unknowns by means of Farkas' Lemma. In the general case, a Satisfiability Modulo Theories (SMT) problem over non-linear arithmetic is obtained, for which effective SMT solvers exist. One of the novelties of this thesis is the presentation of an optimization version of the SMT problems generated by the constraint-based method in such a way that, even when they turn out to be unsatisfiable, some useful information can be obtained for refining the program analysis. In particular, we show in this work how our approach can be exploited for proving termination of sequential programs, disproving termination of non-deterministic programs, and performing compositional safety verification. Besides, an extension of the constraint-based method to generate universally quantified array invariants is also presented. Since the development of practical methods is a priority in this thesis, all the techniques have been implemented and tested with examples coming from academic and industrial environments. The main contributions of this thesis are summarized as follows:
    1. A new constraint-based method for the generation of universally quantified invariants of array programs. We also provide extensions of the approach for sorted arrays.
    2. A novel Max-SMT-based technique for proving termination. By expressing the generation of a ranking function as a Max-SMT optimization problem in which constraints are assigned different weights, quasi-ranking functions (functions that almost satisfy all conditions for ensuring well-foundedness) are produced when no ranking function exists. Moreover, Max-SMT makes it easy to combine the process of building the termination argument with the usually necessary task of generating supporting invariants.
    3. A Max-SMT constraint-based approach for proving that programs do not terminate. The key notion of the approach is that of a quasi-invariant, which is a property such that if it holds at a location during execution once, then it continues to hold at that location from then onwards. Our technique considers for analysis strongly connected subgraphs of a program's control flow graph and thus produces more generic witnesses of non-termination than existing methods. Furthermore, it can handle programs with unbounded non-determinism.
    4. An automated compositional program verification technique for safety properties based on quasi-invariants.
    For a given program part (e.g., a single loop) and a postcondition, we show how, using a Max-SMT solver, an inductive invariant together with a precondition can be synthesized so that the precondition ensures the validity of the invariant and the invariant implies the postcondition. From this, we build a bottom-up program verification framework that propagates preconditions of small program parts as postconditions for preceding program parts. The method recovers from failures to prove the validity of a precondition, using the obtained intermediate results to restrict the search space for further proof attempts.
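
    The following is a minimal sketch of the constraint-based method on a toy loop, not the thesis's Max-SMT machinery: a template invariant is synthesized by turning each universally quantified verification condition into coefficient-matching constraints via Farkas' Lemma and handing the resulting quantifier-free (nonlinear) problem to an SMT solver. It assumes the z3-solver Python package; all variable names are illustrative.

```python
# A toy instance of the constraint-based method (illustrative only):
#   x := 0;  while (x < 100) x := x + 1;   prove at exit: x <= 100
# Template invariant: a*x + b >= 0.  Farkas' Lemma turns each universally
# quantified verification condition into coefficient-matching constraints,
# leaving a quantifier-free nonlinear SMT problem.  Needs `pip install z3-solver`.
from z3 import Reals, Solver, sat

a, b = Reals("a b")
l0, l1, l2 = Reals("l0 l1 l2")   # Farkas multipliers for consecution
m0, m1, m2 = Reals("m0 m1 m2")   # Farkas multipliers for exit safety

s = Solver()
s.add(l0 >= 0, l1 >= 0, l2 >= 0, m0 >= 0, m1 >= 0, m2 >= 0)

# Initiation: the invariant holds at x = 0, i.e. a*0 + b >= 0.
s.add(b >= 0)

# Consecution: a*x + b >= 0 and 99 - x >= 0 imply a*(x+1) + b >= 0.
# Match a*x + (a + b) against l0 + l1*(a*x + b) + l2*(99 - x):
s.add(a == l1 * a - l2)                   # coefficients of x
s.add(a + b == l0 + l1 * b + l2 * 99)     # constant terms

# Exit: a*x + b >= 0 and x - 100 >= 0 imply 100 - x >= 0.
# Match -x + 100 against m0 + m1*(a*x + b) + m2*(x - 100):
s.add(m1 * a + m2 == -1)                  # coefficients of x
s.add(m0 + m1 * b - m2 * 100 == 100)      # constant terms

if s.check() == sat:
    m = s.model()
    print("invariant:", m[a], "* x +", m[b], ">= 0")  # e.g. -1 * x + 100 >= 0
```

    Max-SMT enters when such constraints are made soft and weighted, so that even an unsatisfiable system yields a quasi-ranking function or quasi-invariant instead of outright failure; that optimization layer is not shown in the sketch.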

    Handlers of Algebraic Effects

    We present an algebraic treatment of exception handlers and, more generally, introduce handlers for other computational effects representable by an algebraic theory. These include nondeterminism, interactive input/output, concurrency, state, time, and their combinations; in all cases the computation monad is the free-model monad of the theory. Each such handler corresponds to a model of the theory for the effects at hand. The handling construct, which applies a handler to a computation, is based on the one introduced by Benton and Kennedy, and is interpreted using the homomorphism induced by the universal property of the free model. This general construct can be used to describe previously unrelated concepts from both theory and practice.
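
    A small sketch of the handlers-as-models idea in an invented encoding (not the paper's formalism): computations are trees whose nodes are algebraic operations, a handler supplies one clause per operation plus a return clause, and handling folds the tree homomorphically into the chosen model. Nondeterministic choice handled by the list-of-results model is the classic example.

```python
# A toy encoding of handlers (invented, not the paper's formalism):
# a computation is either a Return leaf or an operation node with one
# sub-computation per argument; a handler is a model of the theory,
# and handling folds the tree homomorphically into that model.

class Return:
    def __init__(self, value):
        self.value = value

class Op:
    def __init__(self, name, conts):
        self.name, self.conts = name, conts

def handle(comp, handler):
    """Fold comp into the model given by handler (one clause per operation)."""
    if isinstance(comp, Return):
        return handler["return"](comp.value)
    return handler[comp.name]([handle(k, handler) for k in comp.conts])

# Binary nondeterministic choice, handled by the list-of-results model:
prog = Op("choose", [Op("choose", [Return(1), Return(2)]), Return(3)])
list_handler = {
    "return": lambda v: [v],                # unit of the list model
    "choose": lambda rs: rs[0] + rs[1],     # union of both branches
}
print(handle(prog, list_handler))           # [1, 2, 3]
```

    An exception handler fits the same shape: a raising operation with no continuations, whose clause returns the result of a recovery computation instead.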

    On the Factor Refinement Principle and its Implementation on Multicore Architectures

    The factor refinement principle turns a partial factorization of integers (or polynomials) into a more complete factorization represented by basis elements and exponents, with basis elements that are pairwise coprime. There are many applications of this refinement technique, such as simplifying systems of polynomial inequations and, more generally, speeding up certain algebraic algorithms by eliminating redundant expressions that may occur during intermediate computations. Successive GCD computations and divisions are used to accomplish this task until all the basis elements are pairwise coprime. Moreover, square-free factorization (which is the first step of many factorization algorithms) is used to remove the repeated patterns from each input element. Differentiation, division and GCD calculation operations are required to complete this pre-processing step. Both factor refinement and square-free factorization often rely on plain (quadratic) algorithms for multiplication but can be substantially improved with asymptotically fast multiplication on sufficiently large input. In this work, we review the working principles and complexity estimates of factor refinement in the case of plain arithmetic as well as asymptotically fast arithmetic. Following this review, we design, analyze and implement parallel adaptations of these factor refinement algorithms. We consider several algorithm optimization techniques, such as data locality analysis and balancing of subproblems, to fully exploit modern multicore architectures. The Cilk++ implementation of our parallel algorithm, based on the augment refinement principle of Bach, Driscoll and Shallit, achieves linear speedup for input data of sufficiently large size.
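
    A plain (quadratic) sketch of the sequential refinement step described above, assuming integer input; the parallel Cilk++ algorithm of the thesis and the square-free preprocessing are not shown, and the helper below is invented.

```python
# Naive sequential factor refinement: repeatedly replace two basis elements
# sharing a nontrivial GCD by the GCD and their cofactors, with exponents,
# until all basis elements are pairwise coprime.
from math import gcd
from collections import Counter

def refine(nums):
    basis = Counter(n for n in nums if n > 1)   # element -> exponent
    changed = True
    while changed:
        changed = False
        items = list(basis.items())
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                (p, ep), (q, eq) = items[i], items[j]
                g = gcd(p, q)
                if g > 1:
                    # p**ep * q**eq == g**(ep+eq) * (p//g)**ep * (q//g)**eq
                    basis.subtract({p: ep, q: eq})
                    basis.update({g: ep + eq})
                    if p // g > 1:
                        basis.update({p // g: ep})
                    if q // g > 1:
                        basis.update({q // g: eq})
                    basis += Counter()          # drop zeroed-out entries
                    changed = True
                    break
            if changed:
                break
    return dict(basis)

print(refine([12, 18]))   # {2: 3, 3: 3}, since 12 * 18 == 2**3 * 3**3
```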