
    Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis

    Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of underlying decision problems. This position paper proposes sciduction, an approach to tackle these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses, and actively combines inductive and deductive reasoning: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines. We illustrate this approach with three applications: (i) timing analysis of software; (ii) synthesis of loop-free programs; and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
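    As a rough illustration of the interaction the abstract describes, the sketch below shows a generic learner/verifier loop in the spirit of counterexample-guided synthesis; the `learner`, `verifier`, and `structure_hypothesis` objects are hypothetical placeholders, not the paper's implementation.

```python
# Hypothetical sketch of a sciduction-style loop (not the authors' implementation):
# an inductive learner proposes a candidate artifact (e.g. an invariant or program
# fragment) constrained by a structure hypothesis, and a deductive engine (verifier)
# either confirms it or returns a counterexample that becomes a new learning example.

def sciduction_loop(learner, verifier, structure_hypothesis, examples, max_iters=100):
    """Return a verified candidate artifact, or None if the budget is exhausted."""
    for _ in range(max_iters):
        # Inductive step: generalize from the current examples, restricted to
        # the artifact class fixed by the structure hypothesis.
        candidate = learner.synthesize(examples, structure_hypothesis)
        if candidate is None:
            return None  # no artifact in the hypothesized class fits the examples
        # Deductive step: check the candidate against the specification.
        counterexample = verifier.check(candidate)
        if counterexample is None:
            return candidate  # verified
        examples.append(counterexample)  # feed the counterexample back to the learner
    return None
```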

    Analysing Sanity of Requirements for Avionics Systems (Preliminary Version)

    In the last decade, it has become common practice to formalise software requirements to improve the clarity of users' expectations. In this work we build on the fact that functional requirements can be expressed in temporal logic, and we propose new sanity checking techniques that automatically detect flaws and suggest improvements to given requirements. Specifically, we describe and experimentally evaluate approaches to consistency and redundancy checking that identify all inconsistencies and pinpoint their exact source (the smallest inconsistent set). We further report on the experience obtained from employing consistency and redundancy checking in an industrial environment. To complete the sanity checking, we also describe a semi-automatic completeness evaluation that can assess the coverage of user requirements and suggest missing properties the user might have wanted to formulate. The usefulness of our completeness evaluation is demonstrated in a case study of an aeroplane control system.
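    To make the idea of pinpointing an inconsistent core concrete, here is a minimal deletion-based sketch, assuming a hypothetical `is_consistent` oracle (e.g. an external LTL satisfiability checker). It yields a minimal, not necessarily smallest, inconsistent subset and illustrates the general idea rather than the paper's algorithm.

```python
# Hypothetical sketch of deletion-based extraction of a minimal inconsistent set
# of requirements, assuming an external consistency oracle (e.g. an LTL
# satisfiability checker). This is an illustration, not the paper's algorithm.

def minimal_inconsistent_subset(requirements, is_consistent):
    """Shrink an inconsistent set of requirements to a minimal inconsistent core.

    `requirements` is a list of temporal-logic formulas; `is_consistent` maps a
    list of formulas to True/False. The input set is assumed to be inconsistent.
    The result is minimal (no formula can be dropped), not necessarily smallest.
    """
    core = list(requirements)
    for req in requirements:
        trial = [r for r in core if r is not req]
        if not is_consistent(trial):
            core = trial  # req is not needed to expose the inconsistency
    return core
```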

    AppLP: A Dialogue on Applications of Logic Programming

    This document describes the contributions of the 2016 Applications of Logic Programming Workshop (AppLP), which was held on October 17 and associated with the International Conference on Logic Programming (ICLP) in Flushing, New York City. Comment: David S. Warren and Yanhong A. Liu (Editors). 33 pages. Including summaries by Christopher Kane and abstracts or position papers by M. Aref, J. Rosenwald, I. Cervesato, E.S.L. Lam, M. Balduccini, J. Lobo, A. Russo, E. Lupu, N. Leone, F. Ricca, G. Gupta, K. Marple, E. Salazar, Z. Chen, A. Sobhi, S. Srirangapalli, C.R. Ramakrishnan, N. Bjørner, N.P. Lopes, A. Rybalchenko, and P. Tara

    Towards Verified Artificial Intelligence

    Verified artificial intelligence (AI) is the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically specified requirements. This paper considers Verified AI from a formal methods perspective. We describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges.

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and a survey of its applications to networking. Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials

    An Integrated Design and Verification Methodology for Reconfigurable Multimedia Systems

    Recently, many multimedia applications have been emerging on portable appliances. They require both the flexibility of upgradeable devices (traditionally software based) and a powerful computing engine (typically hardware). In this context, programmable HW and dynamic reconfiguration allow novel approaches to the migration of algorithms from SW to HW. Thus, within the framework of the Symbad project, we propose an industrial design flow for reconfigurable SoCs. The goal of Symbad consists of developing a system-level design platform for hardware and software SoC systems, including formal and semi-formal verification techniques. Comment: Submitted on behalf of EDAA (http://www.edaa.com/)

    Handling state space explosion in verification of component-based systems: A review

    Component-based design is a distinct way of constructing systems that offers numerous benefits, in particular a reduction in the complexity of system design. However, deploying components into a system is a challenging and error-prone task. Model checking is one of the reliable methods that automatically and systematically analyse the correctness of a given system. Its brute-force exploration of the state space significantly increases the level of confidence in the system. Nevertheless, model checking is limited by a critical problem, the so-called state space explosion (SSE). To benefit from model checking, appropriate methods to reduce SSE are required. In the last two decades, a great number of methods to mitigate the state space explosion have been proposed; they share many similarities and dissimilarities, and in some cases rest on unclear concepts. This research, first, presents a review and brief discussion of the methods for handling the SSE problem and classifies them based on their similarities, principles, and characteristics. Second, it investigates the methods for handling the SSE problem in verifying component-based systems (CBS) and provides insight into CBS verification limitations that have not yet been addressed. The analysis in this research has revealed the patterns, specific features, and gaps in the state-of-the-art methods. In addition, we identify and discuss suitable methods to mitigate the SSE problem in CBS and underline the key challenges for future research efforts.
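    As a toy illustration of why the state space explodes under composition (not taken from the paper): composing n independent components, each with k local states, yields up to k^n global states. The small sketch below makes that growth concrete; all names are illustrative.

```python
# Illustrative sketch: the explicit product of n components with k local states
# each has k**n global states, which is the root of the state space explosion.

from itertools import product

def global_states(component_state_counts):
    """Enumerate global states of a composition as tuples of local states."""
    local_state_sets = [range(k) for k in component_state_counts]
    return list(product(*local_state_sets))

if __name__ == "__main__":
    for n in (2, 4, 8):
        states = global_states([3] * n)  # n components, 3 local states each
        print(f"{n} components -> {len(states)} global states")  # 9, 81, 6561
```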

    Model Checking Software Programs with First Order Logic Specifications using AIG Solvers

    Static verification techniques leverage Boolean formula satisfiability solvers such as SAT and SMT solvers that operate on conjunctive normal form and first order logic formulae, respectively, to validate programs. They enforce bounds on variable ranges and execution time and translate the program and its specifications into a Boolean formula. They are limited to programs of relatively low complexity for the following reasons. (1) A small increase in the bounds can cause a large increase in the size of the translated formula. (2) Boolean satisfiability solvers are restricted to using optimizations that apply at the level of the formula. Finally, (3) the Boolean formulae often need to be regenerated with higher bounds to ensure the correctness of the translation. We present a method that uses sequential circuits, Boolean formulae with memory elements and hierarchical structure, and sequential circuit synthesis and verification frameworks to validate programs. (1) Sequential circuits are much more succinct than Boolean formulae with no memory elements and preserve the high-level structure of the program. (2) Encoding the problem as a sequential circuit enables the use of a number of powerful automated analysis techniques that have no counterparts for other Boolean formulae. Our method takes an imperative program with a first order logic specification consisting of a precondition and a postcondition pair, and a bound on the program variable ranges, and produces a sequential circuit with a designated output that is true when the program violates the specification.
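    The sketch below is a hypothetical, highly simplified view of the encoding described above: a bounded program with a precondition/postcondition pair exposed through a single "violation" output. The class name, the straight-line execution model, and the example program are illustrative assumptions, not the paper's tool.

```python
# Hypothetical illustration (not the paper's method): a bounded program with a
# precondition/postcondition pair viewed as a transition system whose designated
# output is true exactly when the specification is violated.

from dataclasses import dataclass

@dataclass
class BoundedProgramCheck:
    precondition: callable   # state -> bool
    postcondition: callable  # state -> bool
    step: callable           # state -> state (one program step)
    bound: int               # bound on execution length

    def violation_output(self, initial_state):
        """True iff the precondition holds but the postcondition fails within the bound."""
        if not self.precondition(initial_state):
            return False  # the specification only constrains runs from valid initial states
        state = initial_state
        for _ in range(self.bound):
            state = self.step(state)
        return not self.postcondition(state)

# Example: x := x + 1 repeated `bound` times; spec: pre x >= 0, post x > 0.
check = BoundedProgramCheck(
    precondition=lambda s: s["x"] >= 0,
    postcondition=lambda s: s["x"] > 0,
    step=lambda s: {"x": s["x"] + 1},
    bound=3,
)
print(check.violation_output({"x": 0}))  # False: no violation for this input
```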

    Exact Finite-State Machine Identification from Scenarios and Temporal Properties

    Finite-state models, such as finite-state machines (FSMs), aid software engineering in many ways. They are often used in formal verification and can also serve as visual software models. The latter application is associated with the problems of software synthesis and automatic derivation of software models from specifications. Smaller synthesized models are more general and easier to comprehend, yet the problem of minimum FSM identification has received little attention in previous research. This paper presents four exact methods to tackle the problem of minimum FSM identification from a set of test scenarios and a temporal specification represented in linear temporal logic. The methods are implemented as an open-source tool. Three of them are based on translations of the FSM identification problem to SAT or QSAT problem instances. Accounting for temporal properties is done via counterexample prohibition. Counterexamples are either obtained from previously identified FSMs or based on bounded model checking. The fourth method uses backtracking. The proposed methods are evaluated on several case studies and on a larger number of randomly generated instances of increasing complexity. The results show that the Iterative SAT-based method is the leader among the proposed methods. The methods are also compared with existing inexact approaches, i.e., those which do not necessarily identify the minimum FSM, and these comparisons show encouraging results. Comment: 21 pages, 9 figures, 7 tables, accepted to International Journal on Software Tools for Technology Transfer. Major changes: the description and results of the Iterative method were updated, the last sections were restructured, new figures were added
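    The counterexample-prohibition loop the abstract mentions could look roughly like the following sketch, assuming hypothetical `identify_fsm` (e.g. SAT-based) and `model_check` helpers; it shows only the loop structure, not the authors' tool.

```python
# Illustrative sketch of FSM identification with counterexample prohibition,
# under assumed helper functions (identify_fsm, model_check).

def identify_with_ltl(scenarios, ltl_spec, identify_fsm, model_check, max_iters=1000):
    """Find an FSM consistent with the scenarios and the LTL specification.

    identify_fsm(scenarios, prohibited) -> candidate FSM or None (e.g. via SAT).
    model_check(fsm, ltl_spec) -> None if the spec holds, else a counterexample trace.
    """
    prohibited = []  # counterexample traces the next candidate must not exhibit
    for _ in range(max_iters):
        fsm = identify_fsm(scenarios, prohibited)
        if fsm is None:
            return None  # no FSM satisfies the scenarios and prohibition constraints
        counterexample = model_check(fsm, ltl_spec)
        if counterexample is None:
            return fsm  # consistent with scenarios and the temporal specification
        prohibited.append(counterexample)
    return None
```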

    Variability Abstractions: Trading Precision for Speed in Family-Based Analyses (Extended Version)

    Family-based (lifted) data-flow analysis for Software Product Lines (SPLs) is capable of analyzing all valid products (variants) without generating any of them explicitly. It takes as input only the common code base, which encodes all variants of an SPL, and produces analysis results corresponding to all variants. However, the computational cost of the lifted analysis still depends inherently on the number of variants (which is exponential in the number of features, in the worst case). For a large number of features, the lifted analysis may be too costly or even infeasible. In this paper, we introduce variability abstractions defined as Galois connections and use abstract interpretation as a formal method for the calculational derivation of approximate (abstracted) lifted analyses of SPL programs, which are sound by construction. Moreover, given an abstraction, we define a syntactic transformation that translates any SPL program into an abstracted version of it, such that the analysis of the abstracted SPL coincides with the corresponding abstracted analysis of the original SPL. We implement the transformation in a tool, reconfigurator, that works on object-oriented Java program families, and evaluate the practicality of this approach on three Java SPL benchmarks. Comment: 50 pages, 10 figures
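    A toy sketch of the intuition behind variability abstractions (not the paper's Galois-connection framework): a lifted result maps configurations to analysis values, and abstracting away a feature joins the values of configurations that differ only in that feature. All names and the toy "analysis" below are illustrative assumptions.

```python
# Hypothetical illustration of abstracting a lifted analysis result: collapsing
# configurations along one feature, joining their analysis values.

from itertools import combinations

def lifted_result(features, analyze):
    """Map every configuration (frozenset of features) to its analysis value."""
    configs = [frozenset(c) for r in range(len(features) + 1)
               for c in combinations(features, r)]
    return {c: analyze(c) for c in configs}

def abstract_away(result, feature, join):
    """Collapse configurations differing only in `feature`, joining their values."""
    abstracted = {}
    for config, value in result.items():
        key = config - {feature}
        abstracted[key] = join(abstracted[key], value) if key in abstracted else value
    return abstracted

# Example: a toy "analysis" counting enabled features, joined by max.
res = lifted_result(["A", "B"], analyze=len)   # 4 configurations
abs_res = abstract_away(res, "B", join=max)    # 2 configurations remain
print(len(res), len(abs_res))                  # 4 2
```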