
    Efficient Model Checking of Hardware Using Conditioned Slicing

    In this work, we present an abstraction-based property verification technique for hardware using conditioned slicing. We handle safety property specifications of the form G(antecedent ⇒ consequent). We use the antecedent of each property to create our abstractions, Antecedent Conditioned Slices. We extend conditioned slicing to Hardware Description Languages (HDLs) and provide a theoretical foundation for our conditioned-slicing-based verification technique. We also present experimental results on a Verilog RTL implementation of USB 2.0. Using state-of-the-art model checkers, we demonstrate very high performance gains for our technique compared to static program slicing.
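
    A minimal sketch of the idea behind antecedent conditioned slicing, in Python; the statement representation, field names, and the one-pass backward dependency walk are illustrative assumptions, not the paper's algorithm (which operates on HDL descriptions):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Stmt:
        target: str               # variable assigned by this statement
        uses: set                 # variables read by this statement
        guard: Optional[str]      # branch condition guarding it, if any

    def antecedent_conditioned_slice(stmts, criterion_vars, antecedent_guards):
        """Backward-slice straight-line code for the consequent's variables,
        first discarding branches infeasible when the antecedent holds."""
        feasible = [s for s in stmts
                    if s.guard is None or s.guard in antecedent_guards]
        needed, keep = set(criterion_vars), []
        for s in reversed(feasible):          # single backward pass
            if s.target in needed:
                keep.append(s)
                needed |= s.uses
        return list(reversed(keep))

    # Toy usage: under the antecedent "mode == READ" the WRITE branch is
    # infeasible, so the slice handed to the model checker is smaller than
    # a purely static slice would be.
    prog = [Stmt("x", {"inp"}, "mode == READ"),
            Stmt("x", {"inp2"}, "mode == WRITE"),
            Stmt("out", {"x"}, None)]
    print(antecedent_conditioned_slice(prog, {"out"}, {"mode == READ"}))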

    Efficient Symmetry Reduction and the Use of State Symmetries for Symbolic Model Checking

    One technique to reduce the state-space explosion problem in temporal logic model checking is symmetry reduction. The combination of symmetry reduction and symbolic model checking with BDDs long suffered from the prohibitively large BDD for the orbit relation. Dynamic symmetry reduction calculates representatives of equivalence classes of states on the fly and thus avoids constructing the orbit relation. In this paper, we present a new, efficient model checking algorithm based on dynamic symmetry reduction. Our experiments show that the algorithm is very fast and allows the verification of larger systems. We additionally implemented the use of state symmetries for symbolic symmetry reduction. To our knowledge, we are the first to investigate state symmetries in combination with BDD-based symbolic model checking.
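
    A small explicit-state sketch of the dynamic symmetry reduction idea (the paper's version works on BDD-encoded state sets; the process model and successor function below are invented for illustration): each newly computed state is replaced by a canonical representative of its orbit, so only one state per equivalence class is ever stored.

    from itertools import permutations

    def representative(state):
        """Canonical orbit representative under full process symmetry:
        the lexicographically smallest permutation of the local states."""
        return min(permutations(state))

    def reachable_orbits(initial, successors):
        """Explore the quotient state space, storing representatives only."""
        init = representative(initial)
        seen, frontier = {init}, [init]
        while frontier:
            s = frontier.pop()
            for t in successors(s):
                r = representative(t)
                if r not in seen:
                    seen.add(r)
                    frontier.append(r)
        return seen

    # Toy: three identical processes, each may step from local state 0 to 1.
    succ = lambda s: [s[:i] + (1,) + s[i + 1:]
                      for i in range(len(s)) if s[i] == 0]
    print(len(reachable_orbits((0, 0, 0), succ)))   # 4 orbits, not 8 states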

    When images work faster than words: The integration of content-based image retrieval with the Northumbria Watermark Archive

    Information on the manufacture, history, provenance, identification, care and conservation of paper-based artwork/objects is disparate and not always readily available. The Northumbria Watermark Archive will incorporate such material into a database, which will be made freely available on the Internet, providing an invaluable resource for conservation, research and education. The efficiency of a database is highly dependent on its search mechanism. Text-based mechanisms are frequently ineffective when a range of descriptive terminologies might be used, e.g. when describing images or translating from foreign languages. In such cases a Content-Based Image Retrieval (CBIR) system can be more effective. Watermarks provide paper with unique visual identification characteristics and have been used to provide a point of entry to the archive that is more efficient and effective than a text-based search mechanism. The research carried out has the potential to be applied to any numerically large collection of images with distinctive features of colour, shape or texture, e.g. coins, architectural features, picture frame profiles, hallmarks, Japanese artists' stamps, etc. Although the establishment of an electronic archive incorporating a CBIR system can undoubtedly improve access to large collections of images and related data, the development is rarely trouble-free. This paper discusses some of the issues that must be considered, e.g. collaboration between disciplines; project management; copying and digitising objects; content-based image retrieval; the Northumbria Watermark Archive; the use of standardised terminology within a database; and copyright issues.
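
    A toy sketch of the CBIR mechanism described above, with an intentionally simple feature (row/column ink-density profiles) standing in for the shape descriptors a real watermark-retrieval system would use; all names are illustrative, and images are assumed to share one fixed size:

    def profile_features(img):
        """img: 2D list of 0/1 pixels; returns normalised row and column
        ink-density profiles concatenated into one feature vector."""
        h, w = len(img), len(img[0])
        ink = sum(map(sum, img)) or 1
        rows = [sum(r) / ink for r in img]
        cols = [sum(img[y][x] for y in range(h)) / ink for x in range(w)]
        return rows + cols

    def query(db, probe, k=3):
        """db: list of (name, feature_vector) pairs; returns the k entries
        closest to the probe image by squared Euclidean distance."""
        f = profile_features(probe)
        dist = lambda a: sum((x - y) ** 2 for x, y in zip(a, f))
        return sorted(db, key=lambda item: dist(item[1]))[:k]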

    Separation of Test-Free Propositional Dynamic Logics over Context-Free Languages

    For a class L of languages, let PDL[L] be an extension of Propositional Dynamic Logic which allows programs to be in a language of L rather than just to be regular. If L contains a non-regular language, PDL[L] can express non-regular properties, in contrast to pure PDL. For regular, visibly pushdown and deterministic context-free languages, the separation of the respective PDLs can be proven by automata-theoretic techniques. However, these techniques introduce non-determinism on the automata side. As non-determinism is also the difference between DCFL and CFL, these techniques seem inappropriate for separating PDL[DCFL] from PDL[CFL]. Nevertheless, this separation is shown, albeit only for programs without test operators. Comment: In Proceedings GandALF 2011, arXiv:1106.081
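
    As a concrete illustration of the expressiveness gap (a standard textbook-style example, not specific to this paper's proof): taking the context-free language {a^n b^n} as a program, PDL[CFL] can state a non-regular property,

    \[
      \varphi \;=\; \bigl\langle \{\, a^n b^n \mid n \ge 1 \,\} \bigr\rangle\, p ,
    \]

    which says that some execution consisting of n a-steps followed by exactly n b-steps reaches a state satisfying p. Since this language is not regular, no pure PDL formula over regular programs expresses the same property.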

    Automatic generation of local repairs for boolean programs

    Automatic techniques for software verification focus on obtaining witnesses of program failure. Such counterexamples often fail to localize the precise cause of an error and usually do not suggest a repair strategy. We present an efficient algorithm to automatically generate a repair for an incorrect sequential Boolean program, where program correctness is specified using a pre-condition and a post-condition. Our approach draws on standard techniques from predicate calculus to obtain annotations for the program statements. These annotations are then used to generate a synthesis query for each program statement which, if successful, yields a repair. Furthermore, we show that if a repair exists for a given program under the specified conditions, our technique is always able to find it.
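
    A brute-force sketch of the repair step, where a real tool would discharge the synthesis query with a solver; the program variables, candidate set, and specification below are made up for illustration:

    from itertools import product

    VARS = ["x", "y"]

    def holds(pred, env):
        """Evaluate a Boolean predicate over the given variable valuation."""
        return eval(pred, {}, dict(env))

    def repair(target_var, pre, post, candidates):
        """Return a replacement right-hand side 'cand' such that the Hoare
        triple {pre} target_var := cand {post} holds for every Boolean input."""
        for cand in candidates:
            ok = True
            for bits in product([False, True], repeat=len(VARS)):
                env = dict(zip(VARS, bits))
                if not holds(pre, env):
                    continue                          # outside the precondition
                env[target_var] = holds(cand, env)    # execute the assignment
                if not holds(post, env):
                    ok = False
                    break
            if ok:
                return cand
        return None

    # Toy: the buggy statement was 'x := y'; the annotations demand x != y
    # after the statement, so the synthesised repair is 'x := not y'.
    print(repair("x", "True", "x != y", ["y", "x and y", "not y"]))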

    Model checking: Algorithmic verification and debugging

    Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis, working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model (representing, for example, a hardware or software design) satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result, many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. (Moshe Y. Vardi, Editor-in-Chief)
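
    At its core, the algorithmic idea the lecture describes is small enough to sketch; the toy model and safety property below are invented, and real model checkers add temporal logics, symbolic state representations, and abstraction on top of this kind of search:

    from collections import deque

    def check_safety(initial, successors, bad):
        """Breadth-first search over the state graph: returns None if no bad
        state is reachable, otherwise a counterexample trace to one."""
        parent, queue = {initial: None}, deque([initial])
        while queue:
            s = queue.popleft()
            if bad(s):
                trace = []                   # walk parents back to the start
                while s is not None:
                    trace.append(s)
                    s = parent[s]
                return list(reversed(trace))
            for t in successors(s):
                if t not in parent:
                    parent[t] = s
                    queue.append(t)
        return None                          # the safety property holds

    # Toy model: a counter that may double or increment; "never reach 13"
    # fails, and the checker reports a shortest offending execution.
    print(check_safety(1, lambda n: [x for x in (2 * n, n + 1) if x <= 20],
                       lambda n: n == 13))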

    Automatic Verification of Parameterized Data Structures

    Verifying the correctness of programs operating on data structures has become an integral part of software verification. A method is a program that acts on an input data structure (modeled as a graph) and produces an output data structure. The parameterized correctness problem for such methods can be defined as follows: given a method and a property of the input graphs, we wish to verify that for all input graphs, parameterized by their size, the output graphs also satisfy the property. We present an automated approach to verify that a given method preserves a given property, for a large class of methods. Examples include reversal of linked lists and insertion, deletion and iterative modification of nodes in directed graphs. Our approach draws on machinery from automata theory and temporal logic. For a useful class of data structures and properties, our solution is polynomial in the size of the method and the size of the property specification.
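
    A sketch of the problem statement only (the paper decides it symbolically with automata; this bounded test is merely illustrative, and the pointer-map encoding is an assumption): does in-place reversal of a singly linked list preserve acyclicity for every input size?

    def reverse(nxt, head):
        """nxt: dict node -> successor (None-terminated list). Flips every
        edge, mimicking the usual three-pointer reversal loop; returns the
        new pointer map and the new head."""
        prev, cur, out = None, head, {}
        while cur is not None:
            out[cur] = prev                  # flip this edge
            prev, cur = cur, nxt[cur]
        return out, prev

    def acyclic(nxt, head):
        seen, cur = set(), head
        while cur is not None:
            if cur in seen:
                return False
            seen.add(cur)
            cur = nxt[cur]
        return True

    # Bounded parameterized check at n <= 12: the list 0 -> 1 -> ... -> n-1
    # stays acyclic (and keeps all n nodes) after reversal.
    for n in range(1, 13):
        nxt = {i: i + 1 for i in range(n - 1)}
        nxt[n - 1] = None
        rev, head = reverse(nxt, 0)
        assert acyclic(rev, head) and len(rev) == n
    print("acyclicity preserved for all tested sizes")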

    De novo sequences of Haloquadratum walsbyi from Lake Tyrrell, Australia, reveal a variable genomic landscape

    Hypersaline systems near salt saturation levels represent an extreme environment, in which organisms grow and survive near the limits of life. One of the abundant members of the microbial communities in hypersaline systems is the square archaeon, Haloquadratum walsbyi. Utilizing a short-read metagenome from Lake Tyrrell, a hypersaline ecosystem in Victoria, Australia, we performed a comparative genomic analysis of H. walsbyi to better understand the extent of variation between strains/subspecies. Results revealed that previously isolated strains/subspecies do not fully describe the complete repertoire of the genomic landscape present in H. walsbyi. Rearrangements, insertions, and deletions were observed for the Lake Tyrrell derived Haloquadratum genomes and were supported by environmental de novo sequences, including shifts in the dominant genomic landscape of the two most abundant strains. Analysis pertaining to halomucins indicated that homologs for this large protein are not a feature common to all species of Haloquadratum. Further, we analyzed ATP-binding cassette transporters (ABC-type transporters) for evidence of niche partitioning between different strains/subspecies. We were able to identify unique and variable transporter subunits from all five genomes analyzed and the de novo environmental sequences, suggesting that differences in nutrient and carbon source acquisition may play a role in maintaining distinct strains/subspecies. Funding for this work was provided by the National Science Foundation (NSF) MCB Award no. 0626526 to J. Banfield, E. Allen, and K. Heidelberg.

    Sparse Positional Strategies for Safety Games

    We consider the problem of obtaining sparse positional strategies for safety games. Such games are a commonly used model in many formal methods, as they make the interaction of a system with its environment explicit. Often, a winning strategy for one of the players is used as a certificate or as an artefact for further processing in the application. Small certificates, i.e., strategies that can be written down very compactly, are typically preferred. For safety games, we only need to consider positional strategies. These map game positions of a player onto a move that is to be taken by the player whenever the play enters that position. For representing positional strategies compactly, a common goal is to minimize the number of positions for which a winning player's move needs to be defined, such that the game is still won by the same player without visiting a position with an undefined next move. We call winning strategies in which the next move is defined for only a few of the player's positions sparse. Unfortunately, even roughly approximating the density of the sparsest strategy for a safety game has been shown to be NP-hard. Thus, to obtain sparse strategies in practice, one either has to apply heuristics or use an exhaustive search technique such as ILP (integer linear programming) solving. In this paper, we perform a comparative study of currently available methods for obtaining sparse winning strategies for the safety player in safety games. We consider well-known techniques, such as ILP or SAT (satisfiability) solving, as well as a novel technique based on iterative linear programming. The results of this paper indicate whether current techniques are already scalable enough for practical use. Comment: In Proceedings SYNT 2012, arXiv:1207.055
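
    A hedged sketch of the underlying search problem, not of the paper's methods: find a smallest set of player-0 positions with a defined move such that no play, whatever the adversary does, reaches an unsafe state or an undefined position. The subset enumeration below is exponential and only suitable for tiny games; the paper's point is precisely that ILP/SAT encodings and an iterative-LP heuristic scale further. The game encoding is an assumption for illustration.

    from itertools import combinations

    def closed(defined, init, moves0, moves1, unsafe):
        """Check that every play from init stays safe and never needs an
        undefined player-0 move. moves0 maps each player-0 position to the
        strategy's chosen successor; moves1 maps each adversary position to
        all of its possible successors."""
        seen, frontier = set(), [init]
        while frontier:
            s = frontier.pop()
            if s in seen:
                continue
            seen.add(s)
            if s in unsafe:
                return False
            if s in moves1:                  # adversary may branch anywhere
                frontier.extend(moves1[s])
            elif s in defined:               # our strategy fixes one move
                frontier.append(moves0[s])
            else:
                return False                 # undefined position reached
        return True

    def sparsest_strategy(init, moves0, moves1, unsafe):
        """Smallest-first exhaustive search over sets of defined positions."""
        positions = list(moves0)
        for k in range(len(positions) + 1):
            for subset in combinations(positions, k):
                if closed(set(subset), init, moves0, moves1, unsafe):
                    return subset
        return None

    # Toy game: from adversary position 'a' play may go to 'p' or 'q', so
    # moves must be defined there, while 'r' is unreachable and can stay
    # undefined -- two defined positions suffice instead of three.
    moves1 = {"a": ["p", "q"]}
    moves0 = {"p": "a", "q": "a", "r": "a"}
    print(sparsest_strategy("a", moves0, moves1, unsafe=set()))  # ('p', 'q')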