64 research outputs found

    Weighted p-bits for FPGA implementation of probabilistic circuits

    Probabilistic spin logic (PSL) is a recently proposed computing paradigm based on unstable stochastic units called probabilistic bits (p-bits) that can be correlated to form probabilistic circuits (p-circuits). These p-circuits can be used to solve optimization and inference problems, and also to implement precise Boolean functions in an "inverted" mode, where a given Boolean circuit operates in reverse to find the input combinations that are consistent with a given output. In this paper we present a scalable FPGA implementation of such invertible p-circuits. We implement a "weighted" p-bit that combines stochastic units with localized memory structures. We also present a generalized tile of weighted p-bits onto which a large class of problems beyond invertible Boolean logic can be mapped, and show how invertibility can be applied to interesting problems such as the NP-complete Subset Sum Problem by solving a small instance of it in hardware.
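
    The minimal Python sketch below illustrates the p-bit update rule commonly used in the probabilistic spin logic literature, m_i = sgn(tanh(I_0(h_i + sum_j J_ij m_j)) - r) with r uniform in (-1, 1), and how clamping the output of a small AND-gate p-circuit makes it run "in reverse". The AND-gate weights, the gain I0, and the sweep count are illustrative assumptions; this is a software toy, not the paper's weighted-p-bit FPGA design.

        import numpy as np

        # Toy p-circuit for an AND gate encoded as a Boltzmann-machine energy.
        # J, h and I0 below are one common illustrative parametrization, not the
        # paper's hardware values.
        rng = np.random.default_rng(0)

        J = np.array([[0, -1, 2],
                      [-1, 0, 2],
                      [2,  2, 0]], dtype=float)   # couplings for (A, B, C = A AND B)
        h = np.array([1, 1, -2], dtype=float)      # biases
        I0 = 2.0                                   # gain ("inverse temperature")

        def sample(clamp, n_sweeps=2000):
            """Gibbs-style p-bit updates; `clamp` maps index -> fixed +/-1 value."""
            m = rng.choice([-1, 1], size=3).astype(float)
            for i, v in clamp.items():
                m[i] = v
            counts = {}
            for _ in range(n_sweeps):
                for i in range(3):
                    if i in clamp:
                        continue
                    Ii = I0 * (h[i] + J[i] @ m)        # synapse: local input
                    # p-bit: m_i = +1 with probability (1 + tanh(Ii)) / 2
                    m[i] = 1.0 if rng.uniform(-1, 1) < np.tanh(Ii) else -1.0
                key = tuple(int(x > 0) for x in m)
                counts[key] = counts.get(key, 0) + 1
            return counts

        # Inverted operation: clamp the output C to 0 (bipolar -1) and observe the inputs.
        print(sample(clamp={2: -1}))

    With the output clamped to 0, the sampled states should concentrate on (0,0,0), (0,1,0) and (1,0,0), i.e. the three input combinations consistent with A AND B = 0.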

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work on clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
    Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
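
    As a toy illustration of the kind of data-plane property such formal approaches check (not a sketch of any specific tool surveyed in the paper), the following Python snippet exhaustively verifies loop freedom and reachability over a made-up set of static per-destination forwarding tables.

        # Toy data-plane check: loop freedom and reachability by exhaustive path
        # exploration over made-up static forwarding tables.
        forwarding = {                 # node -> {destination: next hop}
            "A": {"D": "B"},
            "B": {"D": "C"},
            "C": {"D": "D"},
            "D": {"D": None},          # None = deliver locally
        }

        def check(dst):
            """Return (loop_free, reachable_nodes) for destination `dst`."""
            reachable, loop_free = set(), True
            for src in forwarding:
                seen, node = set(), src
                while node is not None:
                    if node in seen:               # revisiting a node => forwarding loop
                        loop_free = False
                        break
                    seen.add(node)
                    if node == dst:
                        reachable.add(src)
                        break
                    node = forwarding[node].get(dst)   # missing entry => packet dropped
            return loop_free, reachable

        print(check("D"))   # expected: (True, {'A', 'B', 'C', 'D'})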

    Simulation and statistical model-checking of logic-based multi-agent system models

    This thesis presents SALMA (Simulation and Analysis of Logic-Based Multi-Agent Models), a new approach for simulation and statistical model checking of multi-agent system models. Statistical model checking is a relatively new branch of model-based approximative verification methods that help to overcome the well-known scalability problems of exact model checking. In contrast to existing solutions, SALMA specifies the mechanisms of the simulated system by means of logical axioms based upon the well-established situation calculus. Leveraging the resulting first-order logic structure of the system model, the simulation is coupled with a statistical model checker that uses a first-order variant of time-bounded linear temporal logic (LTL) for describing properties. This is combined with a procedural and process-based language for describing agent behavior. Together, these parts create a very expressive framework for modeling and verification that allows direct, fine-grained reasoning about the agents' interaction with each other and with their (physical) environment. SALMA extends the classical situation calculus and linear temporal logic (LTL) with means to address the specific requirements of multi-agent simulation models. In particular, cyber-physical domains are considered in which the agents interact with their physical environment. Among other things, the thesis describes a generic situation calculus axiomatization that encompasses sensing and information transfer in multi-agent systems, for instance sensor measurements or inter-agent messages. The proposed model explicitly accounts for the real-time constraints and stochastic effects that are inevitable in cyber-physical systems. In order to make SALMA's statistical model checking facilities usable for more complex problems as well, a mechanism for the efficient on-the-fly evaluation of first-order LTL properties was developed. In particular, the presented algorithm uses an interval-based representation of the formula evaluation state together with several other optimization techniques to avoid unnecessary computation. Altogether, the goal of this thesis was to create an approach for simulation and statistical model checking of multi-agent systems that builds upon well-proven logical and statistical foundations, but at the same time takes a pragmatic software engineering perspective that considers factors like usability, scalability, and extensibility. Experience gained during the several small to mid-sized experiments presented in this thesis suggests that the SALMA approach lives up to these expectations.
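
    The core statistical model checking loop can be pictured with the following generic Python sketch, which is not SALMA's actual API: it estimates the probability that a time-bounded property holds by sampling many independent simulation runs and attaching a normal-approximation confidence bound. The toy battery model, the property, and the run count are made-up assumptions.

        import random, math

        def simulate(horizon=50):
            """Toy stochastic model: a robot's battery level under random drain."""
            battery = 100.0
            trace = []
            for _ in range(horizon):
                battery -= random.uniform(0.5, 3.0)   # stochastic physical effect
                trace.append(battery)
            return trace

        def prop_holds(trace):
            """Bounded-LTL-style property: G_{<=horizon} (battery > 0)."""
            return all(level > 0 for level in trace)

        def estimate(n_runs=2000):
            """Monte Carlo estimate with a 95% normal-approximation confidence bound."""
            successes = sum(prop_holds(simulate()) for _ in range(n_runs))
            p_hat = successes / n_runs
            half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_runs)
            return p_hat, half_width

        p, err = estimate()
        print(f"P(G battery > 0) ~ {p:.3f} +/- {err:.3f}")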

    Interpretation of Natural-language Robot Instructions: Probabilistic Knowledge Representation, Learning, and Reasoning

    A robot that can simply be told in natural language what to do -- this has been one of the ultimate long-standing goals in both Artificial Intelligence and Robotics research. In near-future applications, robotic assistants and companions will have to understand and perform commands such as "set the table for dinner", "make pancakes for breakfast", or "cut the pizza into 8 pieces". Although such instructions are only vaguely formulated, complex sequences of sophisticated and accurate manipulation activities need to be carried out in order to accomplish the respective tasks. The acquisition of knowledge about how to perform these activities from huge collections of natural-language instructions from the Internet has garnered a lot of attention within the last decade. However, natural language is typically massively underspecified, incomplete, ambiguous and vague, and thus requires powerful means for interpretation. This work presents PRAC -- Probabilistic Action Cores -- an interpreter for natural-language instructions which is able to resolve vagueness and ambiguity in natural language and infer missing pieces of information that are required to render an instruction executable by a robot. To this end, PRAC formulates the problem of instruction interpretation as a reasoning problem in first-order probabilistic knowledge bases. In particular, the system uses Markov logic networks as a carrier formalism for encoding uncertain knowledge. A novel framework for reasoning about unmodeled symbolic concepts is introduced, which incorporates ontological knowledge from taxonomies and exploits semantically similar relational structures in a domain of discourse. The resulting reasoning framework thus enables more compact representations of knowledge and exhibits strong generalization performance when learned from very sparse data. Furthermore, a novel approach for completing directives is presented, which applies semantic analogical reasoning to transfer knowledge collected from thousands of natural-language instruction sheets to new situations. In addition, a cohesive processing pipeline is described that transforms vague and incomplete task formulations into sequences of formally specified robot plans. The system is connected to a plan executive that is able to execute the computed plans in a simulator. Experiments conducted in a publicly accessible, browser-based web interface showcase that PRAC is capable of closing the loop from natural-language instructions to their execution by a robot.
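
    The flavour of this weighted-formula reasoning can be conveyed by the small log-linear Python sketch below; it is an illustration only, not PRAC's actual Markov logic models, and the candidate concepts, formulas, and weights are invented for the example. It fills the unspecified instrument role of an instruction like "cut the pizza" by MAP inference over a handful of weighted formulas.

        import math

        # Invented candidate fillers for the missing 'instrument' role.
        CANDIDATES = ["knife.n.01", "spoon.n.01", "saw.n.01"]

        # Weighted formulas over (action, object, instrument); weights are made up.
        FORMULAS = [
            (lambda a, o, i: a == "cut" and "knife" in i, 2.5),    # knives cut things
            (lambda a, o, i: a == "cut" and "saw" in i,   1.0),    # saws cut, rarely food
            (lambda a, o, i: o == "pizza" and "spoon" in i, -2.0), # spoons don't cut pizza
        ]

        def score(action, obj, instrument):
            """Unnormalised log-probability: sum of weights of satisfied formulas."""
            return sum(w for f, w in FORMULAS if f(action, obj, instrument))

        def interpret(action, obj):
            """Return the most probable filler for the missing instrument role."""
            scores = {c: score(action, obj, c) for c in CANDIDATES}
            z = sum(math.exp(s) for s in scores.values())
            best = max(scores, key=scores.get)
            return best, {c: math.exp(s) / z for c, s in scores.items()}

        best, dist = interpret("cut", "pizza")
        print(best, dist)   # expected: knife.n.01 with the highest probability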

    Roadmap on all-optical processing

    The ability to process optical signals without passing into the electrical domain has always attracted the attention of the research community. Processing photons by photons unfolds new scenarios, in principle allowing for unprecedented signal processing and computing capabilities. Optical computation can be seen as a large scientific field in which researchers operate, trying to find solutions to their specific needs by different approaches; although the challenges can be substantially different, they are typically addressed using knowledge and technological platforms that are shared across the whole field. This significant know-how can also benefit other scientific communities, providing lateral solutions to their problems, as well as leading to novel applications. The aim of this Roadmap is to provide a broad view of the state of the art in this lively scientific research field and to discuss the advances required to tackle emerging challenges, thanks to contributions authored by experts affiliated with both academic institutions and high-tech industries. The Roadmap is organized so as to put side by side contributions on different aspects of optical processing, aiming to enhance the cross-contamination of ideas between scientists working in three different fields of photonics: optical gates and logical units, high bit-rate signal processing, and optical quantum computing. The ultimate intent of this paper is to provide guidance for young scientists, as well as to provide research-funding institutions and stakeholders with a comprehensive overview of the perspectives and opportunities offered by this research field.

    Multicoloured Random Graphs: Constructions and Symmetry

    This is a research monograph on constructions of and group actions on countable homogeneous graphs, concentrating particularly on the simple random graph and its edge-coloured variants. We study various aspects of the graphs, but the emphasis is on understanding those groups that are supported by these graphs together with links with other structures such as lattices, topologies and filters, rings and algebras, metric spaces, sets and models, Moufang loops and monoids. The large amount of background material included serves as an introduction to the theories that are used to produce the new results. The large number of references should help in making this a resource for anyone interested in beginning research in this or allied fields.
    Comment: Index added in v2. This is the first of 3 documents; the other 2 will appear in physic
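
    A finite toy version of the constructions discussed here (illustrative only, not taken from the monograph) can be written in a few lines of Python: colour each edge of a complete graph independently and uniformly at random, then check the one-point extension property for pairs, the property whose countable limit characterizes the homogeneous coloured random graph. With two colours, read as "edge" and "non-edge", this is just the simple random graph G(n, 1/2).

        import random
        from itertools import product

        def random_coloured_graph(n, m, seed=0):
            """Colour each edge of K_n independently and uniformly with one of m colours."""
            rng = random.Random(seed)
            return {frozenset((u, v)): rng.randrange(m)
                    for u in range(n) for v in range(u + 1, n)}

        def has_extension_property(edges, n, m, k=2):
            """For every k-tuple of distinct vertices and every prescription of colours,
            is there a witness vertex joined to the tuple in exactly those colours?"""
            vertices = range(n)
            for tup in product(vertices, repeat=k):
                if len(set(tup)) < k:
                    continue
                for colours in product(range(m), repeat=k):
                    if not any(z not in tup and
                               all(edges[frozenset((z, v))] == c
                                   for v, c in zip(tup, colours))
                               for z in vertices):
                        return False
            return True

        edges = random_coloured_graph(n=60, m=2)
        # True with overwhelming probability for these parameters.
        print(has_extension_property(edges, n=60, m=2, k=2))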

    Systematic Model-based Design Assurance and Property-based Fault Injection for Safety Critical Digital Systems

    With advances in sensing, wireless communications, computing, control, and automation technologies, we are witnessing the rapid uptake of cyber-physical systems (CPSs) across many applications, including connected vehicles, healthcare, energy, manufacturing, and smart homes. Many of these applications are safety-critical in nature, and they depend on the correct and safe execution of software and hardware that are intrinsically subject to faults. These faults can be design faults (software faults, specification faults, etc.) or physically occurring faults (hardware failures, single-event upsets, etc.). Both types of faults must be addressed during the design and development of these critical systems. Several safety-critical industries have widely adopted model-based engineering paradigms to manage the design assurance processes of these complex CPSs. This thesis studies the application of an IEC 61508-compliant model-based design assurance methodology to a representative safety-critical digital architecture targeted for nuclear power generation facilities. The study presents detailed experiences and results to demonstrate the benefits of model testing in finding design flaws and its relevance to subsequent verification steps in the workflow. Additionally, to study the impact of physical faults on the digital architecture, we develop a novel property-based fault injection method that overcomes several deficiencies of traditional fault injection methods. The model-based fault injection approach presented here guarantees high efficiency and near-exhaustive input/state/fault space coverage by utilizing formal model checking principles to identify fault activation conditions and prove the fault tolerance features. The fault injection framework facilitates automated integration of fault saboteurs throughout the model to enable exhaustive fault location coverage in the model.
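
    A toy Python rendering of property-based fault injection (illustrative only, not the thesis's framework or tools) is given below: a saboteur flips a single bit of a triple-modular-redundant channel, and all inputs and single-fault locations are enumerated exhaustively, the job a model checker performs symbolically on real designs, to establish the property that the voter output equals the fault-free output.

        from itertools import product

        def voter(a, b, c):
            """Majority voter over three replicated 1-bit channels."""
            return (a & b) | (a & c) | (b & c)

        def saboteur(channels, location):
            """Inject a single-event upset: flip the bit at `location` (or no fault)."""
            if location is None:
                return channels
            flipped = list(channels)
            flipped[location] ^= 1
            return tuple(flipped)

        violations = []
        for x in (0, 1):                                # exhaustive input space
            golden = voter(x, x, x)                     # fault-free reference output
            for loc in (None, 0, 1, 2):                 # exhaustive single-fault locations
                out = voter(*saboteur((x, x, x), loc))
                if out != golden:
                    violations.append((x, loc))
        print("property holds" if not violations else violations)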

    Computer Aided Verification

    The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers presented, together with 18 tool papers and 4 case studies, were carefully reviewed and selected from 240 submissions. The papers were organized in the following topical sections. Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.