
    Splitting Proofs for Interpolation

    We study interpolant extraction from local first-order refutations. We present a new theoretical perspective on interpolation based on clearly separating the condition on the logical strength of the formula from the requirement on the common signature. This allows us to characterize the space of all interpolants that can be extracted from a refutation as a space of simple choices on how to split the refutation into two parts. We use this new insight to develop an algorithm for extracting interpolants that are linear in the size of the input refutation and can be further optimized using metrics such as the number of non-logical symbols or quantifiers. We implemented the new algorithm in the first-order theorem prover VAMPIRE and evaluated it on a large number of examples coming from the first-order proving community. Our experiments give practical evidence that our work improves the state of the art in first-order interpolation.
    Comment: 26th Conference on Automated Deduction, 2017
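
    For context, the two requirements the abstract separates are the standard (reverse) interpolation conditions; this is the textbook formulation, not notation taken from the paper itself. Given a refutation of $A \land B$, a formula $I$ is a reverse interpolant when

        $A \models I$, \qquad $B \land I \models \bot$, \qquad $\mathrm{sig}(I) \subseteq \mathrm{sig}(A) \cap \mathrm{sig}(B)$,

    where $\mathrm{sig}(\cdot)$ collects the non-logical symbols. The first two conditions constrain logical strength, the third the common signature; the paper's splitting view is precisely that these can be varied independently across ways of cutting the refutation in two.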

    An incremental modular technique for checking LTL-X properties on Petri nets

    Model-checking is a powerful and widespread technique for the verification of finite-state concurrent systems. However, the main hindrance to wider application of this technique is the well-known state explosion problem. Modular verification is a promising natural approach to tackle this problem. It is based on the "divide and conquer" principle and aims at deducing the properties of the system from those of its components analysed in isolation. Unfortunately, several issues make the use of modular verification techniques difficult in practice. First, deciding how to partition the system into components is not trivial and can have a significant impact on the resources needed for verification. Second, when model-checking a component in isolation, how should the environment of this component be described? In this paper, we address these problems in the framework of model-checking LTL\X action-based properties on Petri nets. We propose an incremental and modular verification approach where the system model is partitioned according to the actions occurring in the property to be verified and where the environment of a component is taken into account using the linear place invariants of the system.
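
    As background for the last sentence, a (linear) place invariant of a Petri net with incidence matrix $C$ and initial marking $M_0$ is a vector $y \ge 0$, $y \neq 0$, such that

        $y^{\mathsf{T}} C = 0$, \qquad and hence \qquad $y^{\mathsf{T}} M = y^{\mathsf{T}} M_0$ for every reachable marking $M$.

    This is the standard definition; how the invariants are turned into an environment abstraction for a component under analysis is the paper's own construction and is not reproduced here.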

    Verification and synthesis for stochastic systems with temporal logic specifications

    The objective of this thesis is, first, to provide a formal framework for the verification of discrete-time, continuous-space stochastic systems with complex temporal specifications. Second, the approach developed for verification is extended to the synthesis of controllers that aim to maximize or minimize the probability of occurrence of temporal behaviors in stochastic systems. As these problems are generally undecidable or intractable to solve, approximation methods are employed in the form of finite-state abstractions arising from a partition of the original system’s domain, for which analysis is greatly simplified. The abstractions of choice in this work are Interval-valued Markov Chains (IMCs) which, unlike conventional discrete-time Markov chains, allow for a non-deterministic range of probabilities of transition between states instead of a fixed probability. Techniques for constructing IMC abstractions for two classes of systems are presented. Due to their inherent structure that facilitates estimation of reachable sets, mixed monotone systems with additive disturbances are shown to be efficiently amenable to IMC abstraction. Then, an abstraction procedure for polynomial systems that uses stochastic barrier functions computed via Sum-of-Squares programming is derived. Next, an algorithm for computing satisfaction bounds in IMCs with respect to so-called omega-regular properties is detailed. As probabilistic specifications require finding the set of initial states whose probability of fulfilling some behavior is below or above a certain threshold, this method may yield a set of states whose satisfaction status is undecided. An iterative specification-guided partition refinement method is proposed to reduce conservatism in the abstraction until a precision threshold is met. Finally, similar interval-based finite abstractions are utilized to synthesize control policies for omega-regular objectives in systems with both a finite number of modes and a continuous set of available inputs. A notion of optimality for these policies is introduced, and a partition refinement scheme is presented to reach a desired level of optimality.
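
    A minimal sketch of the kind of computation such IMC analyses rest on: among all transition distributions compatible with the interval bounds, pick the one extremizing an expected value, and iterate to bound a reachability probability. The greedy order-based selection below is the standard approach for IMCs; the three-state example and all numbers are invented and do not come from the thesis.

        def extreme_distribution(lo, up, values, minimize=True):
            """Pick p with lo <= p <= up, sum(p) == 1, extremizing sum(p*values)."""
            order = sorted(range(len(values)), key=lambda s: values[s],
                           reverse=not minimize)
            p = list(lo)
            mass = 1.0 - sum(lo)               # probability mass left to place
            for s in order:                    # favour low-value states first
                extra = min(up[s] - lo[s], mass)
                p[s] += extra
                mass -= extra
            return p

        def lower_reach_bounds(lo, up, target, n_iter=100):
            """Iteratively tighten a lower bound on P(eventually reach target)."""
            v = [1.0 if t else 0.0 for t in target]
            for _ in range(n_iter):
                v = [1.0 if target[s] else
                     sum(p * w for p, w in
                         zip(extreme_distribution(lo[s], up[s], v), v))
                     for s in range(len(target))]
            return v

        # Toy 3-state IMC: s0 moves to the target s1 or the sink s2, each with
        # probability somewhere in [0.3, 0.7]; s1 and s2 are absorbing.
        lo = [[0.0, 0.3, 0.3], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        up = [[0.0, 0.7, 0.7], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        print(lower_reach_bounds(lo, up, target=[False, True, False]))
        # -> approximately [0.3, 1.0, 0.0]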

    Experimenting with Constraint Programming Techniques in Artificial Intelligence: Automated System Design and Verification of Neural Networks

    This thesis focuses on the application of Constraint Satisfaction and Optimization techniques in two Artificial Intelligence (AI) domains: automated design of elevator systems and verification of Neural Networks (NNs). The three main areas of interest for my work are (i) the languages for defining the constraints for the systems, (ii) the algorithms and encodings that enable solving the problems considered, and (iii) the tools that implement such algorithms. Given the expressivity of the domain description languages and the availability of effective tools, several problems in diverse application fields have been solved successfully using constraint satisfaction techniques. The two case studies presented here are no exception, even if they entail different challenges in the adoption of such techniques. Automated design of elevator systems not only requires encoding of feasibility (hard) constraints, but should also take into account design preferences, which can be expressed in terms of cost functions whose optimal or near-optimal value characterizes “good” design choices versus “poor” ones. Verification of NNs (and other machine-learned artifacts) requires solving large-scale constraint problems which may become the main bottleneck in the overall verification procedure. This thesis proposes some ideas for tackling such challenges, including encoding techniques for automated design problems and new algorithms for handling the optimization problems arising from the verification of NNs. The proposed algorithms and techniques are evaluated experimentally by developing tools that are made available to the research community for further evaluation and improvement.
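
    One concrete instance of the constraint encodings NN verification leans on is the standard big-M MILP encoding of a ReLU unit, given here as general background rather than as the specific encoding developed in the thesis. For $y = \max(x, 0)$ with known bounds $l \le x \le u$, $l < 0 < u$, and a binary variable $\delta$:

        $y \ge x$, \quad $y \ge 0$, \quad $y \le x - l(1 - \delta)$, \quad $y \le u\,\delta$, \quad $\delta \in \{0, 1\}$.

    Setting $\delta = 1$ forces $y = x$ (active phase), while $\delta = 0$ forces $y = 0$ (inactive phase); the tightness of the bounds $l$ and $u$ largely determines how hard the resulting optimization problems become at scale.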

    Data re-engineering using formal transformations

    This thesis presents and analyses a solution to the problem of formally re-engineering program data structures, allowing new representations of a program to be developed. The work is based around Ward's theory of program transformations, which uses a Wide Spectrum Language, WSL, whose semantics were specially developed for use in proofs of program transformations. The re-engineered code exhibits functionality equivalent to the original but differs in the degree of data abstraction and representation. Previous transformational re-engineering work has concentrated upon control-flow restructuring, which has highlighted a lack of support for data restructuring in the maintainer's tool-set. Problems have been encountered during program transformation due to the lack of support for data re-engineering. A lack of strict data semantics and manipulation capabilities has left the maintainer unable to produce optimally re-engineered solutions. It has also hindered the migration of programs into other languages, because it has not been possible to convert data structures into an appropriate form in the target language. The main contribution of the thesis is the Data Re-Engineering and Abstraction Mechanism (DREAM), which allows theories about type equivalence to be represented and used in a re-engineering environment. DREAM is based around the technique of "ghosting", a way of introducing different representations of data, which provides the theoretical underpinning of the changes applied to the program. A second major contribution is the introduction of data typing into the WSL language. This allows DREAM to be integrated into the existing transformation theories within WSL. These theoretical extensions of the original work have been shown to be practically viable by implementation within a prototype transformation tool, the Maintainer's Assistant. The extended tool has been used to re-engineer heavily modified, commercial legacy code. The results have shown that useful re-engineering work can be performed and that DREAM integrates well with existing control-flow transformations.
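
    DREAM itself operates on WSL, which is not reproduced here; the following invented Python miniature only illustrates the underlying "ghosting" idea in general terms: a second data representation is introduced alongside the original, kept consistent through a coupling invariant, and operations are migrated to it until the original representation can be removed.

        class Table:
            """Association table, originally stored as a list of (key, value) pairs."""

            def __init__(self):
                self.pairs = []          # original representation
                self.ghost = {}          # "ghost" representation introduced beside it

            def _coupled(self):
                # coupling invariant relating the two representations
                return dict(self.pairs) == self.ghost

            def put(self, key, value):
                self.pairs = [(k, v) for k, v in self.pairs if k != key] + [(key, value)]
                self.ghost[key] = value  # ghost kept in lock-step with the original
                assert self._coupled()

            def get(self, key):
                # this operation has already been migrated to the ghost; once all
                # operations are migrated, self.pairs can be deleted outright
                return self.ghost[key]

        t = Table()
        t.put("x", 1)
        t.put("x", 2)
        print(t.get("x"))                # -> 2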

    A specification and discovery environment for the reuse of software components in distributed software development

    Our work aims to develop an effective solution for the discovery and reuse of software components in existing and commonly used development environments. We propose an ontology for describing and discovering atomic software components. The description covers both the functional and the non-functional properties of software components, the latter expressed as QoS parameters. Our search process is based on a function that computes the semantic distance between a component's interface signature and the signature of a given query, thus achieving an appropriate comparison. We also use the notion of "subsumption" to compare the inputs and outputs of the query against those of the components. After the appropriate components are selected, the non-functional properties are used as a distinguishing factor to refine the search result. If no atomic component is found, we propose an approach for discovering composite components, based on the shared ontology. To integrate the resulting component into the project under development, we developed an integration ontology and the two services "input/output convertor" and "output Matching".
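
    A toy illustration of the subsumption test and semantic distance described above, with an invented four-concept ontology; the actual ontology, distance function and QoS handling are the paper's own and are not reproduced here.

        PARENT = {                      # child concept -> parent concept
            "Integer": "Number",
            "Float": "Number",
            "Number": "Thing",
            "String": "Thing",
        }

        def ancestors(concept):
            chain = []
            while concept in PARENT:
                concept = PARENT[concept]
                chain.append(concept)
            return chain

        def subsumes(general, specific):
            # 'general' subsumes 'specific' if it is the same concept or an ancestor
            return general == specific or general in ancestors(specific)

        def distance(a, b):
            # edge count from the more specific concept up to the more general
            # one; None for incomparable concepts
            if a == b:
                return 0
            if subsumes(a, b):
                return ancestors(b).index(a) + 1
            if subsumes(b, a):
                return ancestors(a).index(b) + 1
            return None

        print(subsumes("Number", "Integer"))   # True: an Integer output satisfies
        print(distance("Number", "Integer"))   # 1     a query asking for a Number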

    Model-Based Runtime Adaptation of Resource Constrained Devices

    Dynamic Software Product Line (DSPL) engineering represents a promising approach for planning and applying runtime reconfiguration scenarios to self-adaptive software systems. Reconfigurations at runtime allow those systems to continuously adapt themselves to ever-changing contextual requirements. With a systematic engineering approach such as DSPLs, a self-adaptive software system becomes more reliable and predictable. However, applying DSPLs in the vital domain of highly context-aware systems, e.g., mobile devices such as smartphones or tablets, is obstructed by their inherently limited resources: mobile devices are not capable of handling the large, constrained (re-)configuration spaces of complex self-adaptive software systems. The reconfiguration behavior of a DSPL is specified via so-called feature models. However, the derivation of a reconfiguration based on a feature model (i) induces computational costs and (ii) consumes the available memory. To tackle these drawbacks, I propose a model-based approach for designing DSPLs in a way that allows for a trade-off between pre-computation of reconfiguration scenarios at development time and on-demand evolution at runtime. In this regard, I intend to shift computational complexity from runtime to development time. Therefore, I propose the following three techniques for (1) enriching feature models with context information to reason about potential contextual changes, (2) reducing a DSPL specification w.r.t. the individual characteristics of a mobile device, and (3) specifying a context-aware reconfiguration process on the basis of a scalable transition system incorporating state-space abstractions and incremental refinements at runtime. In addition to these optimization steps executed prior to runtime, I introduce a concept for (4) reducing the operational costs incurred by a reconfiguration at runtime on a long-term basis w.r.t. the DSPL transition system deployed on the device. To realize this concept, the DSPL transition system is enriched with non-functional properties, e.g., the costs of a reconfiguration, and behavioral properties, e.g., the probability of a change within the contextual situation of a device. This makes it possible to determine reconfigurations with minimum costs w.r.t. estimated long-term changes in the context of a device. The concepts and techniques contributed in this thesis are illustrated by means of a mobile device case study. Further, implementation strategies are presented and evaluated considering different trade-off metrics to provide detailed insights into their benefits and drawbacks.
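
    An invented miniature of the cost- and probability-annotated transition system idea from the last paragraph: the chosen reconfiguration minimizes its immediate cost plus the expected cost of repairing the configuration after likely future context changes. Configurations, contexts, costs and probabilities below are all made up.

        RECONF_COST = {                        # (from_config, to_config) -> cost
            ("A", "B"): 2.0, ("A", "C"): 5.0,
            ("B", "C"): 1.0, ("C", "B"): 1.0,
        }
        CONTEXT_PROB = {"low_battery": 0.7, "offline": 0.3}   # P(next context)
        VALID = {                              # context -> configurations valid in it
            "low_battery": {"B", "C"},
            "offline": {"C"},
        }

        def expected_followup(cfg):
            """Expected cost of the cheapest repair after the next context change."""
            total = 0.0
            for ctx, prob in CONTEXT_PROB.items():
                if cfg in VALID[ctx]:
                    continue                   # already valid, no repair needed
                total += prob * min(RECONF_COST[(cfg, t)]
                                    for t in VALID[ctx] if (cfg, t) in RECONF_COST)
            return total

        def best_reconfiguration(current, ctx):
            """Valid target minimizing immediate plus expected long-term cost."""
            def immediate(t):
                if t == current:
                    return 0.0
                return RECONF_COST.get((current, t), float("inf"))
            return min(VALID[ctx], key=lambda t: immediate(t) + expected_followup(t))

        # "B" wins: 2.0 now + 0.3 expected repair beats the directly valid "C" at 5.0
        print(best_reconfiguration("A", "low_battery"))       # -> B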

    Eliciting Security Requirements and Tracing them to Design: An Integration of Common Criteria, Heuristics, and UMLsec

    Building secure systems is difficult for many reasons. This paper deals with two of the main challenges: (i) the lack of security expertise in development teams, and (ii) the inadequacy of existing methodologies to support developers who are not security experts. The security standard ISO/IEC 15408 (Common Criteria), together with secure design techniques such as UMLsec, can provide the security expertise, knowledge, and guidelines that are needed. However, security expertise and guidelines are not stated explicitly in the Common Criteria. They are rather phrased in security domain terminology and difficult to understand for developers. This means that some general security and secure design expertise is required to fully take advantage of the Common Criteria and UMLsec. In addition, there is the problem of tracing security requirements and objectives into the solution design, which is needed as proof of requirements fulfilment. This paper describes a security requirements engineering methodology called SecReq. SecReq combines three techniques: the Common Criteria, the heuristic requirements editor HeRA, and UMLsec. SecReq makes systematic use of the security engineering knowledge contained in the Common Criteria and UMLsec, as well as security-related heuristics in the HeRA tool. The integrated SecReq method supports early detection of security-related issues (HeRA), their systematic refinement guided by the Common Criteria, and the ability to trace security requirements into UML design models. A feedback loop helps reuse experience within SecReq and turns the approach into an iterative process for the secure system life-cycle, also in the presence of system evolution.
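
    A deliberately tiny, invented model of the trace links such an approach maintains from security requirements to UMLsec-annotated design elements; SecReq's actual tooling (HeRA, UMLsec) is not reproduced, and the names below are chosen for illustration only.

        from dataclasses import dataclass, field

        @dataclass
        class DesignElement:
            name: str
            stereotypes: list                  # e.g. ["<<secure links>>"]

        @dataclass
        class SecurityRequirement:
            rid: str
            text: str
            realized_by: list = field(default_factory=list)

            def untraced(self):
                # a requirement with no design element realizing it is a red flag
                return not self.realized_by

        req = SecurityRequirement("SR-1", "Session data must be integrity-protected.")
        req.realized_by.append(DesignElement("SessionChannel", ["<<secure links>>"]))
        print(req.untraced())                  # False: SR-1 is traced to the design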