
    Verifying Recursive Active Documents with Positive Data Tree Rewriting

    This paper proposes a data tree-rewriting framework for modeling evolving documents. The framework is close to Guarded Active XML, a platform used for handling XML repositories evolving through web services. We focus on automatic verification of properties of evolving documents that can contain data from an infinite domain. We establish the boundaries of decidability, and show that verification of a "positive" fragment that can handle recursive service calls is decidable. We also consider bounded model-checking in our data tree-rewriting framework and show that it is NEXPTIME-complete.
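    As a rough illustration of the kind of object such a framework manipulates, the sketch below (all names invented; this is not the paper's formalism) builds a data tree whose nodes carry labels and data values from an infinite domain, and applies one rewrite step that models a pending service call being answered.

```python
# A minimal sketch, not the paper's formalism: nodes of a data tree carry a
# label from a finite alphabet and a data value from an infinite domain, and
# one rewrite step models a pending service call being answered in place.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    label: str                      # label from a finite alphabet
    data: Optional[str] = None      # data value from an infinite domain
    children: List["Node"] = field(default_factory=list)


def answer_call(tree: Node, result: Node) -> bool:
    """Rewrite the first node labeled 'call' into 'answered', attaching the
    result subtree; returns True if a rewrite step was applied."""
    if tree.label == "call":
        tree.label = "answered"
        tree.children.append(result)
        return True
    return any(answer_call(child, result) for child in tree.children)


doc = Node("doc", children=[Node("call", data="getPrice")])
answer_call(doc, Node("price", data="42"))
print(doc)
```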

    Tree Regular Model Checking for Lattice-Based Automata

    Tree Regular Model Checking (TRMC) is the name of a family of techniques for analyzing infinite-state systems in which states are represented by terms and sets of states by tree automata (TA). The central problem in TRMC is to decide whether a set of bad states is reachable. Computing a TA that represents (an over-approximation of) the set of reachable states is undecidable in general, but efficient solutions based on completion or on iteration of tree transducers exist. Unfortunately, the TRMC framework cannot efficiently capture both the complex structure of a system and some of its features. For example, for Java programs the term structure is mainly used to capture the structure of a system state, whereas the program's integers have to be encoded as Peano numbers, so that a single arithmetic operation may be represented by thousands of rewrite-rule applications. In this paper, we propose Lattice Tree Automata (LTAs), an extension of tree automata whose leaves are equipped with lattices. LTAs allow us to represent possibly infinite sets of interpreted terms, which can capture complex domains and their operations efficiently. We also extend the classical Boolean operations to LTAs. Finally, as a major contribution, we introduce a new completion-based algorithm for computing the possibly infinite set of reachable interpreted terms in a finite amount of time. Comment: Technical report.
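    The sketch below illustrates the underlying idea with an interval lattice at the leaves (purely illustrative; it does not follow the paper's definitions): one abstract leaf stands for an infinite set of integers, and an arithmetic operation becomes a single lattice computation instead of many Peano rewrite steps.

```python
# A minimal sketch, not the paper's definitions: leaves carry elements of an
# interval lattice rather than Peano-encoded integers, so a single leaf can
# stand for an infinite set of values and arithmetic is one lattice operation.
from dataclasses import dataclass
from typing import Tuple, Union

Interval = Tuple[float, float]      # (lo, hi); float('inf') marks "unbounded"


def join(a: Interval, b: Interval) -> Interval:
    """Least upper bound in the interval lattice."""
    return (min(a[0], b[0]), max(a[1], b[1]))


def add(a: Interval, b: Interval) -> Interval:
    """Abstract addition: one step instead of many Peano rewrite steps."""
    return (a[0] + b[0], a[1] + b[1])


@dataclass(frozen=True)
class Leaf:
    value: Interval                 # an interpreted leaf: a whole set of integers


@dataclass(frozen=True)
class Term:
    symbol: str
    args: Tuple[Union["Term", Leaf], ...] = ()


# f(x, y) where x abstracts {0, ..., 10} and y abstracts all values >= 0
t = Term("f", (Leaf((0.0, 10.0)), Leaf((0.0, float("inf")))))
print(t)
print(add((0.0, 10.0), (0.0, float("inf"))))    # (0.0, inf)
print(join((0.0, 10.0), (-5.0, 3.0)))           # (-5.0, 10.0)
```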

    12th International Workshop on Termination (WST 2012) : WST 2012, February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The workshop welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination analysis (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted, and each paper was assigned two reviewers. In addition to the 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Detecting Redundant CSS Rules in HTML5 Applications: A Tree-Rewriting Approach

    HTML5 applications normally have a large set of CSS (Cascading Style Sheets) rules for data display. Each CSS rule consists of a node selector (given in an XPath-like query language) and a declaration block (assigning values to selected nodes' display attributes). As web applications evolve, maintaining CSS files can easily become problematic. Some CSS rules will be replaced by new ones, but these obsolete (hence redundant) CSS rules often remain in the applications. Not only does this "bloat" the applications, but it also significantly increases web browsers' processing time. Most works on detecting redundant CSS rules in HTML5 applications do not consider the dynamic behaviors of HTML5 (specified in JavaScript); in fact, the only proposed method that takes these into account is dynamic analysis (a.k.a. testing), which cannot soundly prove redundancy of CSS rules. In this paper, we introduce an abstraction of HTML5 applications based on monotonic tree-rewriting and study its "redundancy problem". We establish the precise complexity of the problem and various subproblems of practical importance (ranging from P to EXP). In particular, our algorithm relies on an efficient reduction to an analysis of symbolic pushdown systems (for which highly optimised solvers are available), which yields a fast method for checking redundancy in practice. We implemented our algorithm and demonstrated its efficacy in detecting redundant CSS rules in HTML5 applications. Comment: 50 pages.
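    The sketch below conveys the flavour of the redundancy question under heavy simplifications (it is not the paper's algorithm): a rule is treated as a descendant selector over element tags and called redundant relative to a fixed set of document trees if it matches no node, whereas the paper reasons about an abstraction of all documents reachable through the application's JavaScript.

```python
# A heavily simplified sketch, not the paper's algorithm: a rule's selector is
# a descendant chain of element tags, and the rule counts as redundant for a
# given set of document trees if the selector matches no node in any of them.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Elem:
    tag: str
    children: List["Elem"] = field(default_factory=list)


def matches(node: Elem, selector: Tuple[str, ...], ancestors: Tuple[str, ...] = ()) -> bool:
    """True if the descendant selector (e.g. ('div', 'p')) matches this node
    or any node below it."""
    if node.tag == selector[-1]:
        rest, i = selector[:-1], 0
        for tag in ancestors:           # remaining components must appear in order
            if i < len(rest) and tag == rest[i]:
                i += 1
        if i == len(rest):
            return True
    return any(matches(c, selector, ancestors + (node.tag,)) for c in node.children)


def redundant(selector: Tuple[str, ...], docs: List[Elem]) -> bool:
    return not any(matches(d, selector) for d in docs)


doc = Elem("html", [Elem("body", [Elem("div", [Elem("p")])])])
print(redundant(("div", "p"), [doc]))     # False: the rule can still apply
print(redundant(("table", "p"), [doc]))   # True: no matching node exists
```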

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum to researchers in academia and industry for presenting and discussing groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design including verification, specification, synthesis, and testing.

    Rule-based Methodologies for the Specification and Analysis of Complex Computing Systems

    From the origins of hardware and software to the present day, the complexity of computing systems has been a problem that computer scientists, engineers, and programmers have had to confront. Important research areas have emerged and matured as a result of this effort. In this dissertation we address some of the current lines of research related to the analysis and verification of complex computing systems using formal methods and domain-specific languages. The thesis focuses on distributed systems, with particular interest in Web systems and biological systems. The first part of the thesis is devoted to security aspects and related techniques, specifically software certification. We first study resource access-control systems and propose a language for specifying access-control policies that are tightly coupled to knowledge bases and provide a semantics-aware description of the resources or elements being accessed. We have also developed a novel framework for Code-Carrying Theory, a software-certification methodology whose goal is to ensure the safe delivery of code in a distributed environment. Our framework is based on a system for transforming rewriting theories by means of fold/unfold operations. The second part of the thesis concentrates on the analysis and verification of Web systems and biological systems. We propose an information-filtering language that supports retrieving information from large data repositories; the language uses semantic information obtained from remote ontologies to refine the filtering process. We also study validation methods for checking the consistency of Web content with respect to syntactic and semantic properties. Another contribution is a language for defining and automatically checking semantic and syntactic constraints on the static content of a Web system. Finally, we also consider biological systems and focus on a formalism based on rewriting logic for modeling and analyzing quantitative aspects of biological processes. To evaluate the effectiveness of all the proposed methodologies, special attention has been paid to the development of prototypes, which have been implemented using rule-based languages. Baggi, M. (2010). Rule-based Methodologies for the Specification and Analysis of Complex Computing Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8964
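    As a purely illustrative aside (not the thesis' policy language), the sketch below shows the general idea of an access-control policy whose rules refer to concepts from a knowledge base, so that decisions depend on the semantics of the accessed resource rather than on its name alone.

```python
# A minimal sketch, purely illustrative and not the thesis' policy language:
# rules refer to concepts from a small knowledge base, so access decisions
# depend on how a resource is classified, not just on its identifier.
from typing import Dict, Set

# knowledge base: each resource is classified under one or more concepts
knowledge_base: Dict[str, Set[str]] = {
    "/reports/2010/budget.pdf": {"FinancialDocument", "InternalDocument"},
    "/public/press-release.html": {"PublicDocument"},
}

# policy: each role may access resources classified under these concepts
policy: Dict[str, Set[str]] = {
    "accountant": {"FinancialDocument", "PublicDocument"},
    "visitor": {"PublicDocument"},
}


def allowed(role: str, resource: str) -> bool:
    """Grant access if the resource's concepts intersect the role's concepts."""
    return bool(knowledge_base.get(resource, set()) & policy.get(role, set()))


print(allowed("accountant", "/reports/2010/budget.pdf"))   # True
print(allowed("visitor", "/reports/2010/budget.pdf"))       # False
```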

    Formal Models and Techniques for Analyzing Security Protocols: A Tutorial

    Security protocols are distributed programs that aim at securing communications by means of cryptography. They are for instance used to secure electronic payments, home banking and, more recently, electronic elections. Given the financial and societal impact in case of failure, and the long history of design flaws in such protocols, formal verification is a necessity. A major difference from other safety-critical systems is that the properties of security protocols must hold in the presence of an arbitrary adversary. The aim of this paper is to provide a tutorial on some modern approaches for formally modeling protocols and their goals, and for automatically verifying them.
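    The sketch below gives a taste of such symbolic models (illustrative only; it is not any specific tool's formalism): messages are Dolev-Yao style terms, and a small saturation procedure asks whether an adversary can derive a secret from the messages it has observed.

```python
# An illustrative sketch, not any specific tool's model: Dolev-Yao style
# symbolic messages and a tiny saturation procedure that asks whether an
# adversary can derive a secret from the messages it has observed.
from dataclasses import dataclass
from typing import FrozenSet, Union


@dataclass(frozen=True)
class Atom:
    name: str                      # a nonce, key, or identity


@dataclass(frozen=True)
class Pair:
    left: "Msg"
    right: "Msg"


@dataclass(frozen=True)
class Enc:
    body: "Msg"
    key: "Msg"                     # symmetric encryption, treated symbolically


Msg = Union[Atom, Pair, Enc]


def derivable(knowledge: FrozenSet[Msg], goal: Msg) -> bool:
    """Saturate the adversary's knowledge under projection and decryption."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(known):
            new = []
            if isinstance(m, Pair):
                new += [m.left, m.right]
            if isinstance(m, Enc) and m.key in known:
                new.append(m.body)
            for n in new:
                if n not in known:
                    known.add(n)
                    changed = True
    return goal in known


k, secret = Atom("k"), Atom("secret")
observed = frozenset({Enc(Pair(secret, Atom("bob")), k)})
print(derivable(observed, secret))           # False: the key is unknown
print(derivable(observed | {k}, secret))     # True: decrypt, then project
```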

    The bigwig Project

    We present the results of the <bigwig> project, which aims to design and implement a high-level domain-specific language for programming interactive Web services. A fundamental aspect of the development of the World Wide Web during the last decade is the gradual change from static to dynamic generation of Web pages. Generating Web pages dynamically in dialogue with the client has the advantage of providing up-to-date and tailor-made information. The development of systems for constructing such dynamic Web services has emerged as a whole new research area. The <bigwig> language is designed by analyzing its application domain and identifying fundamental aspects of Web services inspired by problems and solutions in existing Web service development languages. The core of the design consists of a session-centered service model together with a flexible template-based mechanism for dynamic Web page construction. Using specialized program analyses, certain Web specific properties are verified at compile-time, for instance that only valid HTML 4.01 is ever shown to the clients. In addition, the design provides high-level solutions to form field validation, caching of dynamic pages, and temporal-logic based concurrency control, and it proposes syntax macros for making highly domain-specific languages. The language is implemented via widely available Web technologies, such as Apache on the server-side and JavaScript and Java Applets on the client-side. We conclude with experience and evaluation of the project.
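    The sketch below hints at the template-with-gaps idea in ordinary Python (it is not <bigwig>'s syntax, and its naive tag-balance check only stands in for the compile-time HTML validation performed by the project's specialized program analyses).

```python
# An illustrative sketch, not <bigwig>'s syntax or analysis: templates with
# named gaps are plugged with text or other templates, and a naive tag-balance
# check stands in for the project's compile-time HTML validation.
import re
from html import escape

TEMPLATE = "<html><body><h1><[title]></h1><[items]></body></html>"
ITEM = "<li><[text]></li>"


def plug(template: str, **gaps: str) -> str:
    """Replace each gap <[name]> with the supplied fragment or escaped text."""
    def fill(match: re.Match) -> str:
        return gaps[match.group(1)]
    return re.sub(r"<\[(\w+)\]>", fill, template)


def balanced(document: str) -> bool:
    """Very rough stand-in for validation: opening and closing tags must nest."""
    stack = []
    for kind, tag in re.findall(r"<(/?)(\w+)[^>]*>", document):
        if kind == "":
            stack.append(tag)
        elif not stack or stack.pop() != tag:
            return False
    return not stack


page = plug(TEMPLATE,
            title=escape("Today's offers"),
            items="<ul>" + plug(ITEM, text=escape("coffee")) + "</ul>")
print(balanced(page))   # True
```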

    Implementing Compositional Analysis Using Intersection Types With Expansion Variables

    A program analysis is compositional when the analysis result for a particular program fragment is obtained solely from the results for its immediate subfragments via some composition operator. This means the subfragments can be analyzed independently in any order. Many commonly used program analysis techniques (in particular, most abstract interpretations and most uses of the Hindley/Milner type system) are not compositional and require the entire text of a program for sound and complete analysis. System I is a recent type system for the pure λ-calculus with intersection types and the new technology of expansion variables. System I supports compositional analysis because it has the principal typings property, and an algorithm based on the new technology of β-unification has been developed that finds these principal typings. In addition, for each natural number k, typability in the rank-k restriction of System I is decidable, so a complete and terminating analysis algorithm exists for the rank-k restriction. This paper presents new understanding that has been gained from working with multiple implementations of System I and of β-unification-based analysis algorithms. The previous literature on System I presented the type system in a way that helped in proving its more important theoretical properties, but was not as easy to follow as it could be. This paper provides a clearer presentation of many aspects of System I, as well as a discussion of important implementation issues.
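    The sketch below is an illustrative rendering (not System I's actual definitions) of a type AST with intersections and expansion variables, together with a table of per-fragment typings; the composition step that System I performs via β-unification is deliberately omitted.

```python
# An illustrative sketch, not System I's actual definitions: an AST for
# intersection types with expansion variables, and a table holding a separate
# typing per program fragment; composing typings (System I's beta-unification
# step) is deliberately omitted.
from dataclasses import dataclass
from typing import Dict, Tuple, Union


@dataclass(frozen=True)
class TVar:
    name: str                          # type variable


@dataclass(frozen=True)
class Arrow:
    dom: "Type"
    cod: "Type"


@dataclass(frozen=True)
class Inter:
    parts: Tuple["Type", ...]          # intersection of several types


@dataclass(frozen=True)
class EVar:
    index: int
    body: "Type"                       # expansion variable applied to a type


Type = Union[TVar, Arrow, Inter, EVar]

# A typing pairs an environment for a fragment's free variables with its
# result type, so each fragment can be analyzed independently of its context.
Typing = Tuple[Dict[str, Type], Type]

fragment_results: Dict[str, Typing] = {
    "lambda y. y": ({}, Arrow(TVar("a"), TVar("a"))),
    "x":           ({"x": TVar("b")}, TVar("b")),
    # the typing of applying the first fragment to the second would be obtained
    # by composing these two entries via beta-unification (omitted here)
}

# an intersection type under an expansion variable, as such types may appear
example = EVar(1, Inter((Arrow(TVar("a"), TVar("a")), TVar("a"))))
print(fragment_results["lambda y. y"], example)
```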