29 research outputs found

    Proceedings of the 11th International Conference, TPHOLs’98 Canberra, Australia, September–October 1998. Supplementary Proceedings

    No full text
    Mechanical theorem provers for higher order logics have been successfully applied in many areas, including hardware verification and synthesis; verification of security and communications protocols; software verification, transformation and refinement; compiler construction; and concurrency. The higher order logics used to reason about these problems and the underlying theorem prover technology that supports them are also active areas of research. The International Conference on Theorem Proving in Higher Order Logics (TPHOLs) brings together people working in these and related areas for the discussion and dissemination of new ideas in the field. TPHOLs'98 continues the conference tradition of having both a completed-work and a work-in-progress stream. Papers from the first stream were formally refereed and published as volume 1479 of LNCS. This supplementary proceedings records work accepted under the work-in-progress category and is intended to document emerging trends in higher-order logic research. Papers in the work-in-progress stream are vetted for relevance and contribution before acceptance. The work-in-progress stream is regarded as an important feature of the conference, as it provides a venue for the presentation of ongoing research projects, where researchers invite discussion of preliminary results. Although the TPHOLs conferences have their genesis in meetings of the users of the HOL theorem proving system, each successive year has seen a higher rate of contribution from other groups with similar goals, particularly the user communities of Coq, Isabelle, Lambda, Lego, NuPrl, and PVS. Since 1993 the proceedings have been published by Springer as volumes in the Lecture Notes in Computer Science series. Bibliographic details of these publications can be found at the back of this book; more history of TPHOLs, together with further information about the 1998 event, can be found at http://cs.anu.edu.au/TPHOLs98/.
    Conference papers:
    Integrating TPS with Omega, by Christoph Benzmuller and Volker Sorge
    Some Theorem Proving Aids, by Paul E. Black and Phillip J. Windley
    Verification of the MDG Components Library in HOL, by Paul Curzon, Sofiene Tahar, and Otmane Ait Mohamed
    Simulating Term-Rewriting in LPF and in Display Logic, by Jeremy E. Dawson
    A Prototype Generic Tool Supporting the Embedding of Formal Notations, by Andrew M. Gravell and Chris H. Pratten
    Embedding a Formal Notation: Experiences of Automating the Embedding of Z in the Higher Order Logics of PVS and HOL, by Andrew M. Gravell and Chris H. Pratten
    Building HOL90 Everywhere Easily (Well Almost), by Elsa L. Gunter
    Program Composition in COQ-UNITY, by Francois Marques
    Formally Analysed Dynamic Synthesis of Hardware, by Kong Woei Susanto and Tom Melham
    Requirements for a Simple Proof Checker, by Geoffrey Watson
    Integrating HOL and RAISE: A Practitioner's Approach, by Wai Wong and Karl R. P. H. Leung
    Effective Support for Mutually Recursive Types, by Peter V. Homeier

    Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen: MBMV 2015 - Tagungsband, Chemnitz, 03. - 04. März 2015

    Get PDF
    The workshop Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen (Methods and Description Languages for the Modelling and Verification of Circuits and Systems, MBMV 2015) is now taking place for the 18th time. This year it is organised by the Chair of Circuit and System Design at Technische Universität Chemnitz and the Steinbeis Research Centre for System Design and Test. The workshop aims to discuss the latest trends, results, and open problems in methods for modelling and verification as well as description languages for digital, analogue, and mixed-signal circuits, and is thus intended as a forum for the exchange of ideas. It also offers a platform for exchange between research and industry and for maintaining existing contacts and establishing new ones. It allows young researchers to present their ideas and approaches to a broad audience from academia and industry and to discuss them in depth during the event. Its long history has made it a fixture in many event calendars. Traditionally, the meetings of the ITG special interest groups are also attached to the workshop. This year, two projects funded by the Federal Ministry of Education and Research within the InnoProfile-Transfer initiative are using the workshop to present their research results to a broad audience in two dedicated tracks. Representatives of the projects Generische Plattform für Systemzuverlässigkeit und Verifikation (GPZV) and GINKO - Generische Infrastruktur zur nahtlosen energetischen Kopplung von Elektrofahrzeugen present parts of their current work. This enriches the workshop with additional topic areas and provides a valuable complement to the authors' contributions. [... from the preface

    Doctor of Philosophy

    Get PDF
    Dissertation. Over the last decade, cyber-physical systems (CPSs) have seen significant applications in many safety-critical areas, such as autonomous automotive systems, automatic pilot avionics, wireless sensor networks, etc. A CPS uses networked embedded computers to monitor and control physical processes. The motivating example for this dissertation is the use of a fault-tolerant routing protocol for a Network-on-Chip (NoC) architecture that connects electronic control units (ECUs) to regulate sensors and actuators in a vehicle. With a network allowing ECUs to communicate with each other, it is possible for them to share processing power to improve performance. In addition, networked ECUs enable flexible mapping to physical processes (e.g., sensors, actuators), which increases resilience to ECU failures by reassigning physical processes to spare ECUs. For the on-chip routing protocol, the ability to tolerate network faults is important for hardware reconfiguration to maintain the normal operation of a system. Adding a fault-tolerance feature to a routing protocol, however, increases its design complexity, making it prone to many functional problems. Formal verification techniques are therefore needed to verify its correctness. This dissertation proposes a link-fault-tolerant, multiflit wormhole routing algorithm, together with its formal modeling and verification using two different methodologies. An improvement upon previously published fault-tolerant routing algorithms, the proposed link-fault routing algorithm relaxes their unrealistic node-fault assumptions, while avoiding deadlock conservatively by appropriately dropping network packets. This routing algorithm, together with its routing architecture, is then modeled in the process-algebra language LNT, and compositional verification techniques are used to verify its key functional properties. As a comparison, it is also modeled in channel-level VHDL, which is compiled to labeled Petri nets (LPNs). Algorithms for a partial order reduction method on LPNs are given. An optimal result is obtained from heuristics that trace back on LPNs to find causally related enabled predecessor transitions. Key observations are made from the comparison between these two verification methodologies.
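    The "drop rather than deadlock" idea can be pictured with a small, hypothetical Python sketch (this is not the dissertation's LNT or VHDL model; the mesh coordinates, fault set, and function names are invented for illustration): dimension-ordered routing on a 2D mesh simply refuses to forward a packet when its only productive link is marked faulty.

    # Toy illustration only: XY routing on a 2D mesh that avoids known faulty links
    # and conservatively drops a packet when no fault-free productive hop remains.
    def route_step(src, dst, faulty_links):
        """Return the next hop from src towards dst, or None to drop the packet.

        src, dst: (x, y) mesh coordinates.
        faulty_links: set of frozenset({a, b}) pairs marking broken links.
        """
        x, y = src
        candidates = []
        if dst[0] != x:                      # route in the X dimension first
            candidates.append((x + (1 if dst[0] > x else -1), y))
        elif dst[1] != y:                    # then route in the Y dimension
            candidates.append((x, y + (1 if dst[1] > y else -1)))
        for nxt in candidates:
            if frozenset({src, nxt}) not in faulty_links:
                return nxt
        return None                          # conservative drop: no safe productive hop

    if __name__ == "__main__":
        faults = {frozenset({(0, 0), (1, 0)})}
        print(route_step((0, 0), (2, 0), faults))  # faulty link -> None (packet dropped)
        print(route_step((0, 0), (2, 0), set()))   # healthy mesh -> next hop (1, 0)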

    Natively probabilistic computation

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009. Includes bibliographical references (leaves 129-135). I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify probabilistic generative models, generalize and parallelize widely used sampling algorithms like rejection sampling and Markov chain Monte Carlo, and solve difficult Bayesian inference problems. I first introduce Church, a probabilistic programming language for describing probabilistic generative processes that induce distributions, which generalizes Lisp, a language for describing deterministic procedures that induce functions. I highlight the ways randomness meshes with the reflectiveness of Lisp to support the representation of structured, uncertain knowledge, including nonparametric Bayesian models from the current literature, programs for decision making under uncertainty, and programs that learn very simple programs from data. I then introduce systematic stochastic search, a recursive algorithm for exact and approximate sampling that generalizes a popular form of backtracking search to the broader setting of stochastic simulation and recovers widely used particle filters as a special case. I use it to solve probabilistic reasoning problems from statistical physics, causal reasoning and stereo vision. Finally, I introduce stochastic digital circuits that model the probability algebra just as traditional Boolean circuits model the Boolean algebra. I show how these circuits can be used to build massively parallel, fault-tolerant machines for sampling and allow one to efficiently run Markov chain Monte Carlo methods on models with hundreds of thousands of variables in real time. I emphasize the ways in which these ideas fit together into a coherent software and hardware stack for natively probabilistic computing, organized around distributions and samplers rather than deterministic functions. I argue that by building uncertainty and randomness into the foundations of our programming languages and computing machines, we may arrive at ones that are more powerful, flexible and efficient than deterministic designs, and are in better alignment with the needs of computational science, statistics and artificial intelligence. By Vikash Kumar Mansinghka. Ph.D.
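    A minimal sketch of the core idea, using only the Python standard library (this is not Church itself; the model and helper names are invented for illustration): a probabilistic program is an ordinary procedure that makes random choices, and conditioning can be implemented, inefficiently but correctly, by rejection sampling.

    # Illustrative sketch: a generative "program" plus conditioning by rejection sampling.
    import random

    def flip(p=0.5):
        """A single random choice: True with probability p."""
        return random.random() < p

    def generative_model():
        """Two independent fair coin flips; we later condition on their disjunction."""
        return flip(), flip()

    def rejection_query(model, condition, query, n=100_000):
        """Estimate P(query | condition) by rerunning the model and keeping accepted runs."""
        accepted = []
        for _ in range(n):
            sample = model()
            if condition(*sample):
                accepted.append(query(*sample))
        return sum(accepted) / len(accepted)

    if __name__ == "__main__":
        # P(a | a or b) = 2/3 for two fair coins.
        print(rejection_query(generative_model,
                              condition=lambda a, b: a or b,
                              query=lambda a, b: a))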

    Tools and Algorithms for the Construction and Analysis of Systems

    Get PDF
    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Analog Photonics Computing for Information Processing, Inference and Optimisation

    Full text link
    This review presents an overview of the current state-of-the-art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects and the field of optical quantum computing, providing insights into the potential applications of this technology. Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202

    CWI Self-evaluation 1999-2004

    Get PDF

    Formal Analysis of Artificial Collectives using Parametric Markov Models

    Get PDF
    There are many potential applications for the deployment of distributed systems composed of identical autonomous agents, such as swarm robotic systems or wireless sensor networks, including remote monitoring, space exploration, or environmental clean up. Such systems need to be robust, and the loss of a small number of agents should not compromise the effectiveness of the system, as they will often operate in hostile environments where individual members of that system may suffer failures, or communication may be hindered. To address this, these artificial systems are often designed to imitate the behaviour of self-organising systems found in nature, where simple reactive behaviours for individual members of a system can lead to complex global behaviours, and the collective remains robust to the loss of individuals. Despite much research being conducted into the development and evaluation of these systems, the industrial application of these technologies is still low. This issue could be addressed by further demonstrating that they can reliably, and predictably, achieve given objectives. Designing such systems is challenging, and often detailed simulations are developed for their analysis. Simulations give invaluable insight into the behaviour of such a system; however, there are often corner cases that might be overlooked. By developing a formal model of the system using some appropriate formalism, mathematical techniques can be applied during development to ensure that the system behaves correctly with respect to some given specification. These dynamic and inherently stochastic systems can be modelled as Markov processes: memoryless stochastic processes whose behaviour at any moment in time is determined solely by their current state. Model checking is an algorithmic technique to exhaustively check that a representation of a system as a Markov process exhibits some desirable property; furthermore, such an analysis can be extended to analyse systems whose parameters may not be known in advance. However, the analysis of formal models of large systems is limited by the resources that are required for their analysis: the size of the model may grow exponentially with the size of the system, and the subsequent analysis may prove to be impossible due to hardware or time constraints. This thesis investigates the suitability of parametric Markov models for the analysis of swarm robotic systems and wireless sensor networks. The analysis of such models is costly in terms of the size of the formal model representing a system and the computation time required for its subsequent analysis. Modelling techniques and abstractions are developed for the construction of macroscopic models that abstract away from the identities of individual swarm robots or sensor nodes, and instead focus on the desirable global behaviours of such a system, resulting in smaller formal models. New techniques are then introduced to facilitate the analysis of large families of such models, where similarities between models that share some parameter values are exploited to speed up their analysis. In addition, new representations for such models are developed that allow for larger models to be analysed, and also significantly reduce the time required for that analysis.
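    A minimal sketch of the underlying idea, assuming SymPy is available (the chain, its states, and the parameter are invented for illustration and are not the thesis's models): in a parametric Markov chain the probability of reaching a goal state is a rational function of the parameters, so it can be computed once symbolically and then instantiated cheaply for every member of a family of models.

    # Illustrative parametric reachability: from state "try" a message is delivered with
    # probability p, retried with probability (1 - p)/2, and lost with probability (1 - p)/2.
    import sympy as sp

    p = sp.symbols("p", positive=True)
    x_try = sp.symbols("x_try")             # P(eventually reach "delivered" | in "try")

    # Reachability equation for the single transient state, solved symbolically in p.
    eq = sp.Eq(x_try, p * 1 + (1 - p) / 2 * x_try + (1 - p) / 2 * 0)
    reach = sp.simplify(sp.solve(eq, x_try)[0])

    print(reach)                             # 2*p/(p + 1): one function for the whole family
    print(reach.subs(p, sp.Rational(9, 10))) # instantiate a single concrete model cheaply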

    Hardware Acceleration of Electronic Design Automation Algorithms

    Get PDF
    With the advances in very large scale integration (VLSI) technology, hardware is going parallel. Software, which was traditionally designed to execute on single core microprocessors, now faces the tough challenge of taking advantage of this parallelism, made available by the scaling of hardware. The work presented in this dissertation studies the acceleration of electronic design automation (EDA) software on several hardware platforms such as custom integrated circuits (ICs), field programmable gate arrays (FPGAs) and graphics processors. This dissertation concentrates on a subset of EDA algorithms which are heavily used in the VLSI design flow, and also have varying degrees of inherent parallelism in them. In particular, Boolean satisfiability, Monte Carlo based statistical static timing analysis, circuit simulation, fault simulation and fault table generation are explored. The architectural and performance tradeoffs of implementing the above applications on these alternative platforms (in comparison to their implementation on a single core microprocessor) are studied. In addition, this dissertation also presents an automated approach to accelerate uniprocessor code using a graphics processing unit (GPU). The key idea is to partition the software application into kernels in an automated fashion, such that multiple instances of these kernels, when executed in parallel on the GPU, can maximally benefit from the GPU's hardware resources. The work presented in this dissertation demonstrates that several EDA algorithms can be successfully rearchitected to maximally harness their performance on alternative platforms such as custom designed ICs, FPGAs and graphics processors, and obtain speedups of up to 800X. The approaches in this dissertation collectively aim to contribute towards enabling the computer aided design (CAD) community to accelerate EDA algorithms on arbitrary hardware platforms.
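    As a rough illustration of why Monte Carlo based statistical static timing analysis maps well onto data-parallel hardware (a sketch only, using NumPy vectorization as a stand-in for GPU kernels; the circuit, delay distributions, and sample count are invented for the example): every Monte Carlo sample is independent, so delay propagation can be evaluated for all samples at once.

    # Illustrative vectorized Monte Carlo SSTA for a tiny two-path circuit.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1_000_000                                  # one delay sample per "thread"

    # Hypothetical gate fed by two paths; gate and wire delays are normally distributed.
    d_path_a = rng.normal(1.0, 0.10, N) + rng.normal(0.8, 0.05, N)   # sum along path A
    d_path_b = rng.normal(1.5, 0.20, N)                              # path B
    arrival = np.maximum(d_path_a, d_path_b) + rng.normal(0.3, 0.02, N)  # output arrival

    print(f"mean arrival = {arrival.mean():.3f}, "
          f"99th percentile = {np.percentile(arrival, 99):.3f}")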