3,119 research outputs found

    Towards a generation-based semantic web authoring tool

    Widespread use of Semantic Web technologies requires interfaces through which knowledge can be viewed and edited without deep understanding of Description Logic and formalisms like OWL and RDF. Several groups are pursuing approaches based on Controlled Natural Languages (CNLs), so that editing can be performed by typing in sentences which are automatically interpreted as statements in OWL. We suggest here a variant of this approach which relies entirely on Natural Language Generation (NLG), and propose requirements for a system that can reliably generate transparent realisations of statements in Description Logic.
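    A minimal sketch of the generation direction described in this abstract, assuming hypothetical class and property names and purely illustrative lexicalisation rules (this is not the proposed system, only an illustration of rendering simple Description Logic axioms as English sentences):

    # Hedged sketch: verbalising two common Description Logic axiom shapes.
    # Axiom shapes, names, and wording rules are illustrative only.

    def realise_subclass(sub: str, sup: str) -> str:
        # C1 SubClassOf C2  ->  "Every c1 is a c2."
        return f"Every {sub.lower()} is a {sup.lower()}."

    def realise_existential(sub: str, prop: str, filler: str) -> str:
        # C1 SubClassOf (prop some C2)  ->  "Every c1 <prop> some c2."
        return f"Every {sub.lower()} {prop.replace('_', ' ')} some {filler.lower()}."

    print(realise_subclass("Dog", "Mammal"))                  # Every dog is a mammal.
    print(realise_existential("Dog", "has_owner", "Person"))  # Every dog has owner some person.

    A real NLG-based authoring tool would of course need a lexicon and grammar rather than string templates, but the direction of translation (formal statement to transparent sentence) is the same.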

    What is the method in applying formal methods to PLC applications?

    The question we investigate is how to obtain PLC applications with confidence in their proper functioning. In particular, we are interested in the contribution that formal methods can make to their development. Our maxim is that the place of a particular formal method in the total picture of system development should be made very clear. Developers and customers ought to understand very well what they can and cannot rely on, and we see our task as making this explicit. For us, therefore, the answer to the question above leads to the following questions: Which parts of the system can be treated formally? What formal methods and tools can be applied? What does their successful application tell us (or not tell us) about the proper functioning of the whole system?

    Automated verification of shape, size and bag properties.

    In recent years, separation logic has emerged as a contender for formal reasoning about heap-manipulating imperative programs. Recent works have focused on specialised provers that are mostly based on fixed sets of predicates. To improve expressivity, we have proposed a prover that can automatically handle user-defined predicates. These shape predicates allow programmers to describe a wide range of data structures together with their associated size properties. In the current work, we enhance this prover with support for a new type of constraint, namely bag (multi-set) constraints. With this extension, we can capture the reachable nodes (or values) inside a heap predicate as a bag constraint, and are consequently able to prove properties about the actual values stored inside a data structure.
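    As a rough illustration of such a user-defined predicate (in the general style of the approach, not its exact syntax), a singly linked list can be described together with its size n and the bag B of values it stores:

    \[
    \mathit{ll}(\mathit{root}, n, B) \;\equiv\;
      (\mathit{root} = \mathrm{null} \,\wedge\, n = 0 \,\wedge\, B = \emptyset)
      \;\vee\;
      \exists\, v, q, n_1, B_1 .\;
        \mathit{root} \mapsto \mathrm{node}(v, q) \,\ast\, \mathit{ll}(q, n_1, B_1)
        \,\wedge\, n = n_1 + 1 \,\wedge\, B = B_1 \sqcup \{v\}
    \]

    Here \(\ast\) is the separating conjunction and \(\sqcup\) is bag union, so B collects every value reachable from root and can be used to state properties about the stored values themselves.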

    Rodin: an open toolset for modelling and reasoning in Event-B

    Event-B is a formal method for system-level modelling and analysis. Key features of Event-B are the use of set theory as a modelling notation, the use of refinement to represent systems at different abstraction levels and the use of mathematical proof to verify consistency between refinement levels. In this article we present the Rodin modelling tool that seamlessly integrates modelling and proving. We outline how the Event-B language was designed to facilitate proof and how the tool has been designed to support changes to models while minimising the impact of changes on existing proofs. We outline the important features of the prover architecture and explain how well-definedness is treated. The tool is extensible and configurable so that it can be adapted more easily to different application domains and development methods.
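    As an example of the kind of proof obligation the prover has to discharge (stated here in the standard Event-B form rather than quoted from the article), invariant preservation for an event with parameters x, guard G and before-after predicate BA requires:

    \[
    A(s, c) \,\wedge\, I(s, c, v) \,\wedge\, G(x, s, c, v) \,\wedge\, \mathit{BA}(x, s, c, v, v')
    \;\Rightarrow\; I(s, c, v')
    \]

    where s and c are the carrier sets and constants, A the axioms, I the invariant, and v, v' the variable values before and after the event. Refinement adds analogous obligations (guard strengthening and simulation) between abstraction levels.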

    Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness

    In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to misclassify it. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization methods for counterexample search with abstraction-based proof search to obtain a sound and (δ-)complete decision procedure. Our method also employs a data-driven approach to learn a verification policy that guides abstract interpretation during proof search. We have implemented the proposed approach in a tool called Charon and experimentally evaluated it on hundreds of benchmarks. Our experiments show that the proposed approach significantly outperforms three state-of-the-art tools, namely AI², Reluplex, and ReluVal.
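    A minimal sketch of how the two ingredients fit together, assuming a toy two-layer ReLU network given by explicit weight matrices (this illustrates the general recipe of combining gradient-based counterexample search with an interval abstraction; it is not the Charon tool, and it omits the learned verification policy and abstraction refinement):

    import numpy as np

    # Toy two-layer ReLU network (weights are illustrative only).
    W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -0.5])
    W2 = np.array([[1.0,  1.0], [-1.0, 1.0]]); b2 = np.array([0.0,  0.0])

    def interval_bounds(lo, hi):
        """Abstraction: propagate an input box through the network with interval arithmetic."""
        def affine(W, b, lo, hi):
            Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
            return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        lo, hi = affine(W1, b1, lo, hi)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
        return affine(W2, b2, lo, hi)

    def search_counterexample(x, eps, label, steps=100, lr=0.05):
        """Optimization: projected gradient ascent on the margin of the best rival logit."""
        xa = x.copy()
        for _ in range(steps):
            pre = W1 @ xa + b1
            logits = W2 @ np.maximum(pre, 0.0) + b2
            rival = max((j for j in range(len(logits)) if j != label),
                        key=lambda j: logits[j])
            if logits[rival] > logits[label]:
                return xa                                   # concrete misclassification found
            grad = (W2[rival] - W2[label]) @ (np.diag((pre > 0).astype(float)) @ W1)
            xa = np.clip(xa + lr * np.sign(grad), x - eps, x + eps)
        return None

    def analyze(x, eps, label):
        lo, hi = interval_bounds(x - eps, x + eps)
        if lo[label] > max(hi[j] for j in range(len(hi)) if j != label):
            return "verified"                               # abstraction proves robustness
        cex = search_counterexample(x, eps, label)
        return "counterexample" if cex is not None else "unknown"

    print(analyze(np.array([1.0, 0.5]), eps=0.1, label=0))  # -> verified

    A full procedure would not stop at "unknown": it would split the input region or refine the abstraction and alternate between proof search and counterexample search until one side succeeds; that loop is omitted here for brevity.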