Modular Synthesis of Sketches Using Models
One problem with the constraint-based approaches to synthesis that have become popular over the last few years is that they only scale to relatively small routines, on the order of a few dozen lines of code. This paper presents a mechanism for modular reasoning that allows us to break larger synthesis problems into small manageable pieces. The approach builds on previous work in the verification community of using high-level specifications and partially interpreted functions (we call them models) in place of more complex pieces of code in order to make the analysis modular.
The main contribution of this paper is to show how to combine these techniques with the counterexample-guided synthesis approaches used to efficiently solve synthesis problems. Specifically, we present two new algorithms: one to efficiently synthesize functions that use models, and another to synthesize functions while ensuring that the behavior of the resulting function stays within the set of behaviors allowed by the model. We have implemented our approach on top of the open-source Sketch synthesis system, and we demonstrate its effectiveness on several Sketch benchmark problems.
National Science Foundation (U.S.) (Grant NSF-1116362); National Science Foundation (U.S.) (Grant NSF-1139056); United States. Dept. of Energy (Grant DE-SC0005372)
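A minimal sketch of the counterexample-guided (CEGIS) loop that this family of techniques builds on, with a model standing in for a callee. The specification, the model, and the single hole below are invented for illustration; this is not the paper's algorithm, only the general shape of the loop it extends.

# Toy counterexample-guided synthesis (CEGIS) loop with a "model":
# a simple stand-in used in place of a callee's full code.
# All names (spec, model_inc, the hole c) are illustrative.

def model_inc(x):
    # Model of a callee: one behavior permitted by its specification.
    return x + 1

def spec(x, out):
    # Top-level specification the synthesized function must meet.
    return out >= x + 2

def candidate(c, x):
    # Sketch with one hole c: f(x) = model_inc(x) + c
    return model_inc(x) + c

def cegis(hole_space, sample_domain):
    examples = [0]                      # seed input set
    for c in hole_space:                # "synthesize" against examples
        if all(spec(x, candidate(c, x)) for x in examples):
            # "verify" against the whole (finite) domain
            cex = next((x for x in sample_domain
                        if not spec(x, candidate(c, x))), None)
            if cex is None:
                return c                # no counterexample: done
            examples.append(cex)        # refine with the counterexample
    return None

print(cegis(range(-2, 3), range(-10, 10)))  # finds c = 1

Each round either verifies the current candidate on the full domain or learns a counterexample input that prunes later candidates; the model lets the loop reason about the callee through its promised behavior rather than its code.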
The verification of MDG algorithms in the HOL theorem prover
Formal verification of digital systems is achieved, today, using one of two main approaches: state exploration (mainly model checking and equivalence checking) or deductive reasoning (theorem proving). Combining the two approaches promises to overcome the limitations and enhance the capabilities of each, and our research is motivated by this goal. In this thesis, we provide the entire necessary infrastructure (data structures and algorithms) to define high-level state exploration in the HOL theorem prover, a platform we call MDG-HOL. While related work has tackled the same problem by representing primitive Binary Decision Diagram (BDD) operations as inference rules added to the core of the theorem prover, we base our approach on Multiway Decision Graphs (MDGs). MDGs generalize ROBDDs to represent and manipulate a subset of first-order logic formulae: a data value is represented by a single variable of an abstract type, and operations on data are represented by uninterpreted functions. Using MDGs instead of BDDs raises the abstraction level of what can be verified by state exploration within a theorem prover.
The MDG embedding is based on the logical formulation of an MDG as a Directed Formula (DF). The DF syntax is defined using HOL's built-in data types, and we formalize the basic MDG operations on this syntax within HOL following a deep embedding approach, which ensures the consistency of our embedding. We then derive a correctness proof for each basic MDG operator. On top of this platform, MDG reachability analysis is defined in HOL as a conversion that uses the MDG theory within HOL. We demonstrate the effectiveness of our platform on four case studies; the results show that this verification framework offers a considerable gain in automation without sacrificing CPU time or memory usage compared to automatic model checkers.
Finally, we propose a reduction technique to improve MDG model checking based on the MDG-HOL platform. The idea is to prune the transition relation of a circuit using pre-proved theorems and lemmas from the specification given at the system level; we also use the consistency of the specifications to verify that the reduced model is faithful to the original. We provide two case studies: the SAT-MDG reduction of an Island Tunnel Controller, and the MDG-HOL assume-guarantee reduction of the Look-Aside Interface. Compared to commercial model checking, our approach offers a considerable gain in terms of the correctness of its heuristics and reduction techniques; however, a small penalty is paid in CPU time and memory usage.
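A rough sketch of the idea that separates MDGs from ROBDDs: a node may test a variable of an abstract sort, and edges may be labelled with first-order terms built from uninterpreted function symbols rather than only 0/1. The classes below are illustrative only and do not reflect the thesis's HOL embedding.

# Illustrative-only sketch of an MDG-style node, contrasted with a BDD node.
# In a BDD, every variable is Boolean and edges are labelled 0/1; in an MDG,
# edge labels may be terms over abstract sorts and uninterpreted functions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    """A first-order term: an uninterpreted function applied to arguments."""
    fun: str                      # e.g. "inc" -- carries no interpretation
    args: tuple = ()

@dataclass(frozen=True)
class MDGNode:
    var: str                      # variable being tested (may be abstract-sorted)
    edges: dict                   # maps an edge label (bool or Term) to a child

TRUE, FALSE = "T", "F"

# BDD-like node: Boolean variable, edges labelled True/False.
b = MDGNode(var="reset", edges={False: FALSE, True: TRUE})

# MDG-style node: the next program counter is constrained by the symbolic,
# uninterpreted term inc(pc) instead of an enumerated bit pattern.
m = MDGNode(var="pc_next", edges={Term("inc", ("pc",)): TRUE})

Because the data operation stays symbolic, one such edge stands for every interpretation of inc, which is what lets state exploration scale past enumerating concrete data values.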
Model checking for a first-order temporal logic using multiway decision graphs
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
Lessons from Formally Verified Deployed Software Systems (Extended version)
The technology of formal software verification has made spectacular advances, but how much does it actually benefit the development of practical software? Considerable disagreement remains about the practicality of building systems with mechanically checked proofs of correctness. Is this prospect confined to a few expensive, life-critical projects, or can the idea be applied to a wide segment of the software industry?
To help answer this question, the present survey examines a range of projects, in various application areas, that have produced formally verified systems and deployed them for actual use. It considers the technologies used, the form of verification applied, the results obtained, and the lessons that can be drawn for the software industry at large and its ability to benefit from formal verification techniques and tools.
Note: a short version of this paper is also available, covering in detail only a subset of the considered systems. The present version is intended for full reference.
A Distributed Security Architecture for Large Scale Systems
This thesis describes the research leading from the conception, through development, to the practical implementation of a comprehensive security architecture for use within, and as a value-added enhancement to, the ISO Open Systems Interconnection (OSI) model.
The Comprehensive Security System (CSS) is arranged basically as an Application Layer service but can allow any of the ISO recommended security facilities to be provided at any layer of the model. It is suitable as an 'add-on' service to existing arrangements or can be fully integrated into new applications. For large-scale, distributed processing operations, a network of security management centres (SMCs) is suggested that can help to ensure that system misuse is minimised and that flexible operation is provided in an efficient manner.
The background to the OSI standards is covered in detail, followed by an introduction to security in open systems. A survey of existing techniques in formal analysis and verification is then presented. The architecture of the CSS is described in terms of a conceptual model using agents and protocols, followed by an extension of the CSS concept to a large-scale network controlled by SMCs.
A new approach to formal security analysis is described which is based on two main methodologies. Firstly, every function within the system is built from layers of provably secure sequences of finite state machines, using a recursive function to monitor and constrain the system to the desired state at all times. Secondly, the correctness of the protocols generated by the sequences to exchange security information and control data between agents in a distributed environment is analysed in terms of a modified temporal Hoare logic. This is based on ideas concerning the validity of beliefs about the global state of a system as a result of actions performed by entities within the system, including the notion of timeliness.
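One way to read the first methodology is as a runtime monitor: each operation advances a small state machine, and a recursive checking function refuses any transition outside the permitted sequence. The states, events, and transition table in this toy sketch are invented for illustration and are not taken from the thesis.

# Toy monitor in the spirit of the first methodology: a finite state
# machine whose recursive checker constrains the system to permitted
# sequences. States, events, and transitions are invented for illustration.

ALLOWED = {
    ("idle",          "authenticate"): "authenticated",
    ("authenticated", "request_key"):  "keyed",
    ("keyed",         "transfer"):     "keyed",
    ("keyed",         "logout"):       "idle",
}

def monitor(state, events):
    """Recursively consume events, rejecting any disallowed transition."""
    if not events:
        return state
    step = (state, events[0])
    if step not in ALLOWED:
        raise RuntimeError(f"insecure transition rejected: {step}")
    return monitor(ALLOWED[step], events[1:])

# A permitted sequence ends back in a safe state...
print(monitor("idle", ["authenticate", "request_key", "transfer", "logout"]))
# ...while an out-of-order request is stopped immediately:
# monitor("idle", ["request_key"])  -> RuntimeError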
The two fundamental problems in number theory upon which the assumptions about the security of the finite state machine model rest are described, together with a comprehensive survey of the very latest progress in this area. Having assumed that the two problems will remain computationally intractable in the foreseeable future, the method is then applied to the formal analysis of some of the components of the Comprehensive Security System.
A practical implementation of the CSS has been achieved as a demonstration system for a network of IBM Personal Computers connected via an Ethernet LAN, which fully meets the aims and objectives set out in Chapter 1. This implementation is described, and finally some comments are made on the possible future of research into security aspects of distributed systems.
IBM (United Kingdom) Laboratories, Hursley Park, Winchester, UK
Heat Dissipation Bounds for Nanocomputing: Methodology and Applications
Heat dissipation is a critical challenge facing the realization of emerging nanocomputing technologies. There are different components of this dissipation, and a part of it comes from the unavoidable cost of implementing logically irreversible operations. This stems from the fact that information is physical and manipulating it irreversibly requires energy. The unavoidable dissipative cost of losing information irreversibly fixes the fundamental limit on the minimum energy cost for computational strategies that utilize ubiquitous irreversible information processing.
A relation between the amount of irreversible information loss in a circuit and the associated energy dissipation was formulated by Landauer's Principle in a technology-independent form. In a computing circuit, in addition to the information-theoretic dissipation, other physical processes that take place in association with irreversible information loss may also have an unavoidable thermodynamic cost that originates from the structure and operation of the circuit. In conventional CMOS circuits such unavoidable costs constitute only a minute fraction of the total power budget; in nanocircuits, however, they may be of critical significance due to the high density and operation speeds required. The lower bounds on energy, when obtained by considering the irreversible information cost as well as unavoidable costs associated with the operation of the underlying computing paradigm, may provide insight into the fundamental limitations of emerging technologies. This motivates us to study the problem of determining the heat dissipation of computation in a way that reveals fundamental lower bounds on the energy cost for circuits realized in new computing paradigms.
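For a sense of scale, Landauer's Principle puts the minimum heat dissipated by irreversibly erasing one bit at k_B·T·ln 2, roughly 2.9 zeptojoules at room temperature. A quick check of the figure (the function name is ours, not the paper's):

import math

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_limit(temperature_k, bits=1):
    """Minimum heat dissipated by irreversibly erasing `bits` bits."""
    return bits * K_B * temperature_k * math.log(2)

# ~2.87e-21 J per bit at T = 300 K; a circuit that irreversibly loses
# n bits per cycle dissipates at least n times this, in any technology.
print(f"{landauer_limit(300):.3e} J")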
In this work, we propose a physical-information-theoretic methodology that enables us to obtain such bounds for the minimum energy requirements of computation for concrete circuits realized within specific paradigms, and illustrate its application via prominent nanocomputing proposals. We begin by introducing the unavoidable heat dissipation problem and emphasize the significance of the limitations it imposes on emerging technologies. We present the methodology developed to obtain the lower bounds on the unavoidable dissipation cost of computation for nanoelectronic circuits. We demonstrate our methodology via its application to various non-transistor-based (e.g. QCA) and transistor-based (e.g. NASIC) nanocomputing circuits. We also employ two CMOS circuits in order to provide further insight into the application of our methodology by using this well-known conventional paradigm. We expand our methodology to modularize the dissipation analysis for the QCA and NASIC paradigms, and discuss prospects for automation. We also revisit key concepts in the thermodynamics of computation by focusing on the criticisms raised against the validity of Landauer's Principle. We address these arguments and discuss their implications for our methodology. We conclude by elaborating possible directions in which this work can be expanded.
From Resilience-Building to Resilience-Scaling Technologies: Directions -- ReSIST NoE Deliverable D13
This document is the second product of workpackage WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence. The problem that ReSIST addresses is achieving sufficient resilience in the immense systems of ever-evolving networks of computers and mobile devices, tightly integrated with human organisations and other technology, that are increasingly becoming a critical part of the information infrastructure of our society. This second deliverable, D13, provides a detailed list of research gaps identified by experts from the four working groups related to assessability, evolvability, usability and diversity.
Workshop on Database Programming Languages
These are the revised proceedings of the Workshop on Database Programming Languages held at Roscoff, Finistère, France in September of 1987. The last few years have seen enormous activity in the development of new programming languages and new programming environments for databases. The purpose of the workshop was to bring together researchers from both databases and programming languages to discuss recent developments in the two areas, in the hope of overcoming some of the obstacles that appear to prevent the construction of a uniform database programming environment. The workshop, which follows a previous workshop held in Appin, Scotland in 1985, was extremely successful. The organizers were delighted with both the quality and volume of the submissions for this meeting, and it was regrettable that more papers could not be accepted. Both the stimulating discussions and the excellent food and scenery of the Brittany coast made the meeting thoroughly enjoyable.
There were three main foci for this workshop: the type systems suitable for databases (especially object-oriented and complex-object databases), the representation and manipulation of persistent structures, and extensions to deductive databases that allow for more general and flexible programming. Many of the papers describe recent results, or work in progress, and are indicative of the latest research trends in database programming languages.
The organizers are extremely grateful for the financial support given by CRAI (Italy), Altaïr (France) and AT&T (USA). We would also like to acknowledge the organizational help provided by Florence Deshors, Hélène Gans and Pauline Turcaud of Altaïr, and by Karen Carter of the University of Pennsylvania.