23 research outputs found

    Logic Program Synthesis via Proof Planning

    We propose a novel approach to automating the synthesis of logic programs: logic programs are synthesized as a by-product of the planning of a verification proof. The approach is a two-level one: at the object level, we prove program verification conjectures in a sorted, first-order theory. The conjectures are of the form $\forall \vec{args}.\; prog(\vec{args}) \leftrightarrow spec(\vec{args})$. At the meta-level, we plan the object-level verification with an unspecified program definition. The definition is represented by a (second-order) meta-level variable, which becomes instantiated in the course of the planning. This technique is an application of the Clam proof planning system. Clam is currently powerful enough to plan verification proofs for given programs. We show that, if Clam's use of middle-out reasoning is extended, it will also be able to synthesize programs.
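
    As an illustration only (a hypothetical example, not taken from the paper), such a verification conjecture for a list-maximum predicate max/2 could read:

        $\forall l, x.\;\; max(l, x) \leftrightarrow \big( member(x, l) \wedge \forall y.\; member(y, l) \rightarrow y \leq x \big)$

    At the meta-level, the definition of max would start out as a second-order meta-variable, and middle-out reasoning during proof planning would instantiate it to an executable recursive definition as a by-product of verifying the conjecture.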

    Poly-controlled partial evaluation and its application to resource-aware program specialization

    Partial evaluation is an automatic technique for program optimization. Its main goal is to specialize a program with respect to part of its input data, known as the static data. The quality of the code generated by partial evaluation of logic programs depends to a large extent on the control strategy employed. Unfortunately, we are still far from having a control strategy sophisticated enough to behave optimally for every program. The main contribution of this thesis is the development of poly-controlled partial evaluation, a novel framework for the partial evaluation of logic programs, which is poly-controlled in the sense that it can take into account sets of global and local control rules, instead of using a single predetermined combination (as traditional partial evaluation does). This framework is more flexible than existing approaches, since it allows assigning different local and global control rules to different call patterns. In this way, it is possible to obtain specialized programs that cannot be generated by traditional partial evaluation. As a consequence, the poly-controlled partial evaluation framework can produce sets of specialized programs rather than a single program. Through self-tuning techniques, the approach can be made fully automatic. These techniques make it possible to measure the quality of the different specialized programs obtained. The framework is resource aware, in the sense that each of the solutions obtained through poly-controlled partial evaluation is assessed using fitness functions, which can take into account factors such as the size of the specialized programs or the amount of memory they consume, in addition to the speed of the specialized program, which is the factor usually considered in other partial evaluation frameworks. This poly-controlled partial evaluation framework has been implemented in the CiaoPP system and evaluated on numerous benchmark programs. The experimental results show that our proposal obtains, in many cases, better specializations than those generated by traditional partial evaluation, especially when the specialization is resource aware. Another main contribution of this thesis is a unified view of the problem of eliminating superfluous polyvariance in partial evaluation and in abstract multiple specialization, through the use of a minimization step that groups equivalent versions of predicates. This step can be applied to the specialization of any Prolog program, including those containing calls to built-in or external predicates. In addition, we offer the possibility of grouping versions that are not strictly equivalent, in order to obtain smaller programs.
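
    A minimal sketch of the selection idea, in Python rather than the CiaoPP implementation: candidate specializations are produced under every combination of global and local control rules and ranked by a resource-aware fitness function. All names, rules, and measurements below are invented stand-ins, not the actual CiaoPP interfaces.

        from itertools import product

        # Hypothetical control-rule names, used only to drive the search.
        GLOBAL_CONTROL = ["homeomorphic_embedding", "type_based"]
        LOCAL_CONTROL = ["deterministic_unfolding", "one_step_unfolding"]

        def specialize(program, entry_goal, global_rule, local_rule):
            """Toy stand-in for a partial evaluator: a real system would specialize
            `program` w.r.t. `entry_goal` under the chosen control rules. Here we
            only fabricate size/time attributes so the selection step is runnable."""
            size = len(program) + (3 if local_rule == "one_step_unfolding" else 1)
            time = 1.0 if global_rule == "type_based" else 1.5
            return {"global": global_rule, "local": local_rule,
                    "size": size, "time": time}

        def fitness(candidate, w_time=0.7, w_size=0.3):
            """Resource-aware fitness: weighs speed against code size
            (memory consumption could be added as a further term)."""
            return -(w_time * candidate["time"] + w_size * candidate["size"])

        program = ["p(X) :- q(X), r(X).", "q(a).", "r(a)."]
        candidates = [specialize(program, "p(X)", g, l)
                      for g, l in product(GLOBAL_CONTROL, LOCAL_CONTROL)]
        best = max(candidates, key=fitness)
        print(best["global"], best["local"])

    In the poly-controlled setting the framework can additionally assign different rule pairs to different call patterns, so the real search space is per call pattern rather than per program.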

    Source Code Analysis: A Road Map

    Seventh Biennial Report : June 2003 - March 2005

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. A total of 60 regular papers presented in these volumes were carefully reviewed and selected from 155 submissions. The papers are organized in topical sections as follows: Part I: Program Verification; SAT and SMT; Timed and Dynamical Systems; Verifying Concurrent Systems; Probabilistic Systems; Model Checking and Reachability; and Timed and Probabilistic Systems. Part II: Bisimulation; Verification and Efficiency; Logic and Proof; Tools and Case Studies; Games and Automata; and SV-COMP 2020.

    Twenty years of rewriting logic

    Rewriting logic is a simple computational logic that can naturally express both concurrent computation and logical deduction with great generality. This paper provides a gentle, intuitive introduction to its main ideas, as well as a survey of the work that many researchers have carried out over the last twenty years in advancing: (i) its foundations; (ii) its semantic framework and logical framework uses; (iii) its language implementations and its formal tools; and (iv) its many applications to automated deduction, software and hardware specification and verification, security, real-time and cyber-physical systems, probabilistic systems, bioinformatics and chemical systems.
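
    For readers new to the idea, a minimal illustrative sketch (in Python rather than a rewriting-logic language such as Maude, and not taken from the paper): states are terms, and computation proceeds by applying rewrite rules until no rule matches. The rules here define Peano addition; in rewriting logic the same mechanism, with possibly nondeterministic rule application, models concurrent transitions.

        # Terms are nested tuples: ('add', ('s', ('z',)), ('z',)) stands for add(s(z), z).
        RULES = [
            # add(z, N)    => N
            (lambda t: t[0] == 'add' and t[1] == ('z',), lambda t: t[2]),
            # add(s(M), N) => s(add(M, N))
            (lambda t: t[0] == 'add' and t[1][0] == 's', lambda t: ('s', ('add', t[1][1], t[2]))),
        ]

        def rewrite(term):
            """Rewrite to normal form, reducing subterms first (one possible strategy)."""
            if len(term) > 1:
                term = (term[0],) + tuple(rewrite(sub) for sub in term[1:])
            for matches, right_hand_side in RULES:
                if matches(term):
                    return rewrite(right_hand_side(term))
            return term

        one = ('s', ('z',))
        two = ('s', ('s', ('z',)))
        print(rewrite(('add', one, two)))   # ('s', ('s', ('s', ('z',)))), i.e. 1 + 2 = 3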

    SMT-based synthesis of distributed systems

    Synthesis of distributed systems

    This thesis offers a comprehensive solution of the distributed synthesis problem. It starts with the problem of solving parity games, which form an integral part of the automata-theoretic synthesis algorithms we use. We improve the known complexity bound for solving parity games with n positions and c colors approximately from O(n^(1/2*c)) to O(n^(1/3*c)), and introduce an accelerated strategy improvement technique that can consider all combinations of local improvements in every update step, selecting the globally optimal combination. We then demonstrate the decidability and finite model property of alternating-time specification languages, and determine the complexity of the satisfiability and synthesis problems for the alternating-time μ-calculus and the temporal logic ATL*. The impact of the architecture, that is, the set of system processes with known (white-box) and unknown (black-box) implementation and the communication structure between them, is determined. We introduce information forks, a simple but comprehensive criterion that characterizes all architectures for which the synthesis problem is undecidable. The information fork criterion takes the impact of nondeterminism, the communication topology, and the specification language into account. For decidable architectures, we present an automata-based synthesis algorithm. We introduce bounded synthesis, which deviates from general synthesis by considering only implementations up to a predefined size, and thus avoids the expensive representation of all solutions. We develop a SAT-based approach to bounded synthesis, which is nondeterministically quasilinear in the minimal implementation instead of nonelementary in the system specification. We determine the complexity of open synthesis under the assumption of probabilistic or reactive environments. Our automata-based approach allows for a seamless integration of the new environment models into the uniform synthesis algorithm. Finally, we study the synthesis problem for asynchronous systems. We show that distributed synthesis remains decidable only for architectures with a single black-box process, and determine the complexity of the synthesis problem for different scheduler types. Furthermore, we combine the undecidability results and synthesis procedures for synchronous and asynchronous systems: systems that are globally asynchronous and locally synchronous are decidable if all black-box components are contained in a single fork-free synchronized component.

    This dissertation solves the synthesis problem for distributed systems. It begins with improved algorithms for solving parity games, which form an integral part of automata-based synthesis. The known complexity bound for solving parity games with n positions and c colors is improved from approximately O(n^(1/2*c)) to approximately O(n^(1/3*c)), and an accelerated strategy improvement method is developed that, in every step, finds the optimal combination of all local improvements. The decidability of alternating-time logics is shown, and the complexity of the satisfiability and synthesis problems for the alternating-time µ-calculus (EXPTIME-complete) and the temporal logic ATL* (2EXPTIME-complete) is determined. The influence of the system architecture, the specification language and, connected with this, the implementation model (deterministic vs. nondeterministic) on the decidability and complexity of the synthesis problem is worked out. It is shown that the class of decidable architectures is completely characterized by the absence of information forks, a simple and easily checkable criterion on the communication architecture. For decidable architectures, a uniform automata-based synthesis procedure is developed. In addition, a SAT-based procedure is developed that avoids representing all solutions in an automaton; its complexity is nondeterministically quasilinear in the size of the minimal model, rather than nonelementary in the size of the specification. For probabilistic and reactive environments, the complexity of the open synthesis problem is determined, and in each case an automata-based synthesis procedure is developed that integrates seamlessly into the synthesis procedure for distributed systems. Furthermore, it is shown that distributed synthesis for asynchronous systems remains decidable only if the implementation of a single component is to be constructed. Finally, the decidability results and synthesis algorithms for synchronous and asynchronous models are combined: globally asynchronous, locally synchronous systems are decidable if all processes to be synthesized lie in the same synchronized component and this component contains no information forks.
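
    To give a feel for the bounded-synthesis idea mentioned above, here is a small, purely illustrative Python sketch. Instead of the SAT encoding developed in the thesis, it brute-forces deterministic Mealy machines of increasing size and returns the first one that satisfies a toy specification ("the current output must repeat the previous input"); the specification, encoding, and all names are invented for this example.

        from itertools import product

        INPUTS = (0, 1)

        def satisfies(delta, out):
            """Exact check of the toy spec 'output at step t equals input at step t-1'
            (the input before the first step is taken to be 0), by exploring all
            reachable (state, previous-input) pairs of the candidate machine."""
            seen, todo = set(), [(0, 0)]
            while todo:
                state, prev = todo.pop()
                if (state, prev) in seen:
                    continue
                seen.add((state, prev))
                for i in INPUTS:
                    if out[(state, i)] != prev:     # output must repeat the previous input
                        return False
                    todo.append((delta[(state, i)], i))
            return True

        def bounded_synthesis(max_states=3):
            """Try implementations of increasing size; return the first one that works."""
            for k in range(1, max_states + 1):
                keys = list(product(range(k), INPUTS))
                for d_vals in product(range(k), repeat=len(keys)):      # transition function
                    delta = dict(zip(keys, d_vals))
                    for o_vals in product(INPUTS, repeat=len(keys)):    # output function
                        out = dict(zip(keys, o_vals))
                        if satisfies(delta, out):
                            return k, delta, out
            return None  # unrealizable within the bound

        print(bounded_synthesis())   # finds a 2-state machine that remembers the last input

    The point the abstract makes is that bounding the implementation size turns synthesis into a search whose cost depends on the (often small) minimal implementation rather than on the nonelementary worst case in the specification; the thesis realizes this with a SAT encoding rather than the explicit enumeration sketched here.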