
    Incremental and Modular Context-sensitive Analysis

    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse previous analysis results as much as possible. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structures. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, when compared to existing non-modular, fine-grained incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch. Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming. v3 corresponds to the extended version of the ICLP2018 Technical Communication. v4 is the revised version submitted to Theory and Practice of Logic Programming. v5 (this one) is the final author version to be published in TPLP.
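    The inter-modular scheme described in the abstract can be pictured as a worklist fixpoint over per-module analysis summaries, where only modules whose imported summaries changed are reanalysed. The Haskell sketch below is a minimal, generic illustration of that idea, not the paper's algorithm; the Summary type, the analyse function, and the module-import map are placeholders invented for the example.

```haskell
-- A minimal sketch of an inter-modular worklist fixpoint, assuming a toy
-- Summary type and a monotone per-module analysis; not the paper's algorithm.
import qualified Data.Map as M
import qualified Data.Set as S

type ModName = String
type Summary = S.Set String                 -- toy abstract "exported facts"
type Imports = M.Map ModName [ModName]      -- imports of each module
type Analyse = ModName -> M.Map ModName Summary -> Summary

fixpoint :: Imports -> Analyse -> M.Map ModName Summary -> [ModName]
         -> M.Map ModName Summary
fixpoint imps analyse = go
  where
    -- modules that import m and must be revisited when m's summary changes
    importers m = [n | (n, is) <- M.toList imps, m `elem` is]
    go summaries []         = summaries
    go summaries (m : work)
      | new == old = go summaries work                    -- unchanged: reuse
      | otherwise  = go (M.insert m new summaries)
                        (work ++ importers m)             -- propagate change
      where
        old = M.findWithDefault S.empty m summaries
        new = analyse m summaries

-- Analysing from scratch seeds the worklist with every module; reanalysing
-- after an edit seeds it with just the changed modules, reusing the
-- summaries already computed for the rest.
```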

    From Outermost Reduction Semantics to Abstract Machine

    Reduction semantics is a popular format for small-step operational semantics of deterministic programming languages with computational effects. Each reduction semantics gives rise to a reduction-based normalization function where the reduction sequence is enumerated. Refocusing is a practical way to transform a reduction-based normalization function into a reduction-free one where the reduction sequence is not enumerated. This reduction-free normalization function takes the form of an abstract machine that navigates from one redex site to the next without systematically detouring via the root of the term to enumerate the reduction sequence, in contrast to the reduction-based normalization function. We have discovered that refocusing does not apply as readily to reduction semantics that use an outermost reduction strategy and have overlapping rules where a contractum can be a proper subpart of a redex. In this article, we consider such an outermost reduction semantics with backward-overlapping rules, and we investigate how to apply refocusing to still obtain a reduction-free normalization function in the form of an abstract machine.
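    For a flavor of what refocusing buys, here is a minimal Haskell sketch on a toy language of integer additions with an innermost strategy: the reduction-based normalizer re-decomposes from the root after every contraction, reconstructing each intermediate term, while the refocused version continues at the contraction site and is already in abstract-machine form. The outermost, backward-overlapping setting the article studies is precisely where this simple scheme stops being applicable; the toy language and function names here are illustrative only.

```haskell
-- Toy terms and inside-out (zipper-style) evaluation contexts.
data Term = Lit Int | Add Term Term deriving Show
data Ctx  = Hole | AddL Ctx Term | AddR Int Ctx

plug :: Ctx -> Term -> Term
plug Hole        t = t
plug (AddL c t2) t = plug c (Add t t2)
plug (AddR n c)  t = plug c (Add (Lit n) t)

-- Decompose a term from the root into a context and the next redex, if any.
decompose :: Term -> Either Int (Ctx, (Int, Int))
decompose t0 = search t0 Hole
  where
    search (Lit n)               _ = Left n
    search (Add (Lit m) (Lit n)) c = Right (c, (m, n))
    search (Add (Lit m) t2)      c = search t2 (AddR m c)
    search (Add t1 t2)           c = search t1 (AddL c t2)

-- Reduction-based: plug back and re-decompose from the root after each step.
normalize :: Term -> Int
normalize t = case decompose t of
  Left n            -> n
  Right (c, (m, n)) -> normalize (plug c (Lit (m + n)))

-- Refocused (reduction-free): continue at the contraction site; this is the
-- abstract machine obtained by deforesting the intermediate terms.
refocus :: Term -> Ctx -> Int
refocus (Lit n)     Hole        = n
refocus (Lit n)     (AddL c t2) = refocus t2 (AddR n c)
refocus (Lit n)     (AddR m c)  = refocus (Lit (m + n)) c
refocus (Add t1 t2) c           = refocus t1 (AddL c t2)

normalize' :: Term -> Int
normalize' t = refocus t Hole
```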

    On Computational Small Steps and Big Steps: Refocusing for Outermost Reduction

    We study the relationship between small-step semantics, big-step semantics, and abstract machines for programming languages that employ an outermost reduction strategy, i.e., languages where reductions near the root of the abstract syntax tree are performed before reductions near the leaves. In particular, we investigate how Biernacka and Danvy's syntactic correspondence and Reynolds's functional correspondence can be applied to inter-derive semantic specifications for such languages. The main contribution of this dissertation is threefold. First, we identify that backward-overlapping reduction rules in the small-step semantics cause the refocusing step of the syntactic correspondence to be inapplicable. Second, we propose two solutions to overcome this inapplicability: backtracking and rule generalization. Third, we show how these solutions affect the other transformations of the two correspondences. Other contributions include the application of the syntactic and functional correspondences to Boolean normalization. In particular, we show how to systematically derive a spectrum of normalization functions for negational and conjunctive normalization.
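    As an illustration of the Boolean normalization functions mentioned above, here is a minimal, directly recursive Haskell sketch of negational normalization (pushing negations down to the variables) followed by conjunctive normalization (distributing disjunctions over conjunctions). The dissertation derives machine-like counterparts of such functions systematically; this sketch does not attempt that, and the datatype and function names are invented for the example.

```haskell
data Formula
  = Var String
  | Not Formula
  | And Formula Formula
  | Or  Formula Formula
  deriving Show

-- Negational normal form: negations only guard variables, obtained with
-- De Morgan's laws and double-negation elimination.
nnf :: Formula -> Formula
nnf (Var x)         = Var x
nnf (Not (Var x))   = Not (Var x)
nnf (Not (Not f))   = nnf f
nnf (Not (And f g)) = Or  (nnf (Not f)) (nnf (Not g))
nnf (Not (Or  f g)) = And (nnf (Not f)) (nnf (Not g))
nnf (And f g)       = And (nnf f) (nnf g)
nnf (Or  f g)       = Or  (nnf f) (nnf g)

-- Conjunctive normal form: distribute disjunctions over conjunctions,
-- assuming the input is already in negational normal form.
cnf :: Formula -> Formula
cnf (And f g) = And (cnf f) (cnf g)
cnf (Or  f g) = distr (cnf f) (cnf g)
  where
    distr (And a b) h = And (distr a h) (distr b h)
    distr h (And a b) = And (distr h a) (distr h b)
    distr h k         = Or h k
cnf f         = f
```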

    Model Checking of Concurrent Systems from Their Semantics in Maude

    Model checking is an automatic technique for verifying whether a property holds in a concurrent system. Maude is a high-performance logical framework in which other systems can be specified, modeled, executed, and analyzed in a straightforward way. The environment also includes a model checker for verifying properties expressed in linear temporal logic. However, when a property fails to hold for a program written in a programming language modeled in Maude, the counterexample generated by the system is expressed in terms of Maude's own semantics, which makes it difficult to follow when trying to understand the result. In this report we present the Selene tool, a generic framework for handling asynchronous concurrent systems that lets the user obtain a simplified version of the counterexamples generated by the Maude model checker when analyzing programs written in other languages. To this end, it offers a kernel for handling memory and messages, the elements used in the final "report" derived from the counterexample. On top of this architecture, the user can specify the details of the semantics of the language being handled. Finally, we review the initial objectives, the results obtained, the problems encountered during development, and the proposals and future lines of work that would be desirable to improve the project.
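    The core service a model checker provides when a property fails is a counterexample trace; Selene's contribution is to re-express such traces in terms of the modeled language rather than Maude's own semantics. The Haskell sketch below only illustrates the underlying idea for a safety property over an explicit-state transition system; the state type, successor function, and property are placeholders, and none of this is Maude or Selene code.

```haskell
-- A toy safety checker: explore reachable states breadth-first and return
-- the trace leading to a violating state, i.e. the raw counterexample.
import qualified Data.Set as Set

counterexample :: Ord s => (s -> [s])   -- successor states
                        -> (s -> Bool)  -- safety property (True = ok)
                        -> s            -- initial state
                        -> Maybe [s]    -- violating trace, if any
counterexample next ok s0 = go Set.empty [[s0]]
  where
    go _    []                  = Nothing
    go seen (path@(s:_) : rest)
      | not (ok s)          = Just (reverse path)         -- found a violation
      | s `Set.member` seen = go seen rest                -- already explored
      | otherwise           = go (Set.insert s seen)
                                 (rest ++ [s' : path | s' <- next s])
    go seen ([] : rest)         = go seen rest

-- e.g. counterexample (\n -> [n + 1]) (< 3) 0 == Just [0,1,2,3]
```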

    Probabilistic program analysis


    Formal Methods for Constraint-Based Testing and Reversible Debugging in Erlang

    Thesis by compendium. Erlang is a message-passing concurrent, functional programming language based on the actor model. These and other features make it especially appropriate for distributed, soft real-time applications. In recent years, Erlang's popularity has increased due to the demand for concurrent services. However, developing error-free systems in Erlang is quite a challenge. Although Erlang avoids many problems by design (e.g., deadlocks), some other problems may appear. Here, testing and debugging techniques based on formal methods may be helpful to detect, locate, and fix programming errors in Erlang. In this thesis we propose several methods for testing and debugging in Erlang. In particular, these methods are based on semantic models for concolic testing, property-based testing, causal-consistent reversible debugging, and causal-consistent replay debugging of Erlang programs. We formally prove the main properties of our proposals and design open-source tools that implement these methods. Palacios Corella, A. (2020). Formal Methods for Constraint-Based Testing and Reversible Debugging in Erlang [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/139076
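    A rough intuition for reversible debugging is that every forward step records enough information to be undone exactly; in a causal-consistent concurrent setting, a step may additionally only be undone after the steps that causally depend on it have been undone. The toy Haskell sketch below shows only the forward/backward pairing for a single sequential process with invented instructions; it is not the thesis's Erlang semantics and omits the causal-consistency machinery entirely.

```haskell
-- Each forward step pushes onto a history the data needed to reverse it.
data Instr = Incr | Send Int deriving Show
data State = State { counter :: Int, mailbox :: [Int] } deriving Show
data Hist  = DidIncr | DidSend deriving Show

stepF :: Instr -> (State, [Hist]) -> (State, [Hist])
stepF Incr     (st, h) = (st { counter = counter st + 1 },      DidIncr : h)
stepF (Send n) (st, h) = (st { mailbox = mailbox st ++ [n] },   DidSend : h)

-- Backward step: pop one history item and undo exactly that effect.
stepB :: (State, [Hist]) -> (State, [Hist])
stepB (st, DidIncr : h) = (st { counter = counter st - 1 }, h)
stepB (st, DidSend : h) = (st { mailbox = init (mailbox st) }, h)
stepB cfg               = cfg   -- empty history: nothing to undo

-- Running forward and then all the way backward restores the initial state.
example :: (State, [Hist])
example = stepB (stepB (foldl (flip stepF) (State 0 [], []) [Incr, Send 7]))
```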

    Faculty Publications and Creative Works 2005

    Faculty Publications & Creative Works is an annual compendium of scholarly and creative activities of University of New Mexico faculty during the noted calendar year. Published by the Office of the Vice President for Research and Economic Development, it serves to illustrate the robust and active intellectual pursuits conducted by the faculty in support of teaching and research at UNM. In 2005, UNM faculty produced over 1,887 works, including 1,887 scholarly papers and articles, 57 books, 127 book chapters, 58 reviews, 68 creative works and 4 patented works. We are proud of the accomplishments of our faculty which are in part reflected in this book, which illustrates the diversity of intellectual pursuits in support of research and education at the University of New Mexico

    Normalization and Partial Evaluation of Functional Logic Programs

    This thesis deals with the development of a normalization scheme and a partial evaluator for the functional logic programming language Curry. The functional logic programming paradigm combines the two most important fields of declarative programming, namely functional and logic programming. While functional languages provide concepts such as algebraic data types, higher-order functions or demand-driven evaluation, logic languages usually support a non-deterministic evaluation and a built-in search for results. Functional logic languages finally combine these two paradigms in an integrated way, hence providing multiple syntactic constructs and concepts to facilitate the concise notation of high-level programs. However, both the variety of syntactic constructs and the high degree of abstraction complicate the translation into efficient target programs. To reduce the syntactic complexity of functional logic languages, a typical compilation scheme incorporates a normalization phase to subsequently replace complex constructs by simpler ones until a minimal language subset is reached. While the individual transformations are usually simple, they also have to be correctly combined to make the syntactic constructs interact in the intended way. The efficiency of normalized programs can then be improved by means of different optimization techniques. A very powerful optimization technique is the partial evaluation of programs. Partial evaluation basically anticipates the execution of certain program fragments at compile time and computes a semantically equivalent program, which is usually more efficient at run time. Since partial evaluation is a fully automatic optimization technique, it can also be incorporated into the normal compilation scheme of programs. Nevertheless, this also requires termination of the optimization process, which establishes one of the main challenges for partial evaluation besides semantic equivalence. In this work we consider the language Curry as a representative of the functional logic programming paradigm. We develop a formal representation of the normalization process of Curry programs into a kernel language, while respecting the interference of different language constructs. We then define the dynamic semantics of this kernel language, before we subsequently develop a partial evaluation scheme and show its correctness and termination. Due to the previously described normalization process, this scheme is then directly applicable to arbitrary Curry programs. Furthermore, the implementation of a practical partial evaluator is sketched based on the partial evaluation scheme, and its applicability and usefulness are documented by a variety of typical partial evaluation examples.
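    To make the partial-evaluation idea concrete, the sketch below specializes expressions of a tiny first-order language with respect to a set of statically known variables, computing constant subexpressions at "compile time" and leaving the rest as a residual expression. It is a hedged illustration only: the language and names are invented for the example, and the unfolding, non-determinism, laziness, and termination issues the thesis handles for Curry are not modelled.

```haskell
import qualified Data.Map as M

data Expr
  = Lit Int
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

type StaticEnv = M.Map String Int   -- variables whose values are known statically

-- Partially evaluate: substitute known variables, fold constant operations,
-- and leave everything else as residual code.
peval :: StaticEnv -> Expr -> Expr
peval env (Var x)   = maybe (Var x) Lit (M.lookup x env)
peval _   (Lit n)   = Lit n
peval env (Add a b) = mkOp Add (+) (peval env a) (peval env b)
peval env (Mul a b) = mkOp Mul (*) (peval env a) (peval env b)

-- Fold the operation when both residual operands are literals.
mkOp :: (Expr -> Expr -> Expr) -> (Int -> Int -> Int) -> Expr -> Expr -> Expr
mkOp _  f (Lit m) (Lit n) = Lit (f m n)
mkOp op _ a       b       = op a b

-- Specializing x*x + y with x known to be 3 yields Add (Lit 9) (Var "y").
example :: Expr
example = peval (M.fromList [("x", 3)])
                (Add (Mul (Var "x") (Var "x")) (Var "y"))
```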