
    A Fast Compiler for NetKAT

    High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle "local" programs that specify behavior in terms of hop-by-hop forwarding, or modest extensions such as simple paths. To encode richer "global" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, and yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement "global" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size.
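
    To make the decision-diagram idea concrete, here is a minimal Python sketch of a forwarding-decision-diagram-style structure: internal nodes test packet fields, leaves hold sets of actions, and evaluating a packet is a single root-to-leaf walk. The node layout, field names, and action encoding are illustrative assumptions, not the paper's actual intermediate representation.

        from dataclasses import dataclass
        from typing import Union

        @dataclass(frozen=True)
        class Leaf:
            actions: frozenset              # e.g. {("port", 1)} = forward out port 1

        @dataclass(frozen=True)
        class Test:
            field: str                      # packet field to test, e.g. "dst_ip"
            value: int                      # constant compared against
            hi: "Node"                      # branch when packet[field] == value
            lo: "Node"                      # branch otherwise

        Node = Union[Leaf, Test]

        def evaluate(node: Node, packet: dict) -> frozenset:
            """Walk the diagram and return the set of actions for this packet."""
            while isinstance(node, Test):
                node = node.hi if packet.get(node.field) == node.value else node.lo
            return node.actions

        # Toy policy: if dst_ip == 10, forward out port 1; otherwise drop.
        policy = Test("dst_ip", 10, Leaf(frozenset({("port", 1)})), Leaf(frozenset()))
        print(evaluate(policy, {"dst_ip": 10}))    # frozenset({('port', 1)})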

    AbsSynthe: abstract synthesis from succinct safety specifications

    In this paper, we describe a synthesis algorithm for safety specifications described as circuits. Our algorithm is based on fixpoint computations, abstraction, and refinement, and it uses binary decision diagrams as the underlying symbolic data structure. We evaluate our tool on the benchmarks provided by the organizers of the synthesis competition held within the SYNT'14 workshop. (Comment: In Proceedings SYNT 2014, arXiv:1407.493)
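
    The core fixpoint computation can be sketched explicitly. The snippet below computes the standard winning region of a safety game as the greatest fixpoint W = safe ∩ CPre(W); AbsSynthe itself performs this computation symbolically over BDDs, so the explicit state sets, move sets, and names here are illustrative assumptions.

        def winning_region(safe, ctrl_moves, env_moves, step):
            """Greatest fixpoint W = safe ∩ CPre(W): the controller wins from s
            if s is safe and some controller move keeps every environment
            response inside W. step(s, c, e) -> successor state."""
            w = set(safe)
            while True:
                cpre = {s for s in w
                        if any(all(step(s, c, e) in w for e in env_moves)
                               for c in ctrl_moves)}
                if cpre == w:
                    return w                    # fixpoint reached
                w = cpre

        # Toy game: a counter clamped to {0..3}; the environment may add 1,
        # the controller may subtract 1; state 3 is the error state.
        win = winning_region(safe={0, 1, 2}, ctrl_moves={0, 1}, env_moves={0, 1},
                             step=lambda s, c, e: max(0, min(3, s - c + e)))
        print(win)                              # {0, 1, 2}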

    Deriving Petri nets from finite transition systems

    This paper presents a novel method to derive a Petri net from any specification model that can be mapped into a state-based representation with arcs labeled with symbols from an alphabet of events (a Transition System, TS). The method is based on the theory of regions for Elementary Transition Systems (ETS). Previous work has shown that, for any ETS, there exists a Petri net with minimum transition count (one transition for each label) whose reachability graph is isomorphic to the original transition system. Our method extends and implements that theory by using the following three mechanisms, which provide a framework for synthesis of safe Petri nets from arbitrary TSs. First, the requirement of isomorphism is relaxed to bisimulation of TSs, thus extending the class of synthesizable TSs to a new class called Excitation-Closed Transition Systems (ECTS). Second, for the first time, we propose a method of Petri net synthesis for an arbitrary TS based on mapping a TS event into a set of transition labels in the net. Third, the notion of an irredundant region set is exploited to minimize the number of places in the net without affecting its behavior. The synthesis method can derive different classes of place-irredundant Petri nets (e.g., pure, free-choice, unique-choice) from the same TS, depending on the constraints imposed on the synthesis algorithm. This method has been implemented and applied in different frameworks. The results obtained from the experiments have demonstrated the wide applicability of the method.
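
    The central notion here, a region, has a simple operational reading: a set of states r is a region if every event crosses its boundary uniformly, i.e. all of its transitions enter r, or all exit r, or none crosses it. The sketch below checks this property; the triple-based transition-system encoding is an illustrative assumption.

        def is_region(r, transitions):
            """r: set of states; transitions: list of (source, event, target).
            Returns True iff r is a region of the transition system."""
            for event in {e for (_, e, _) in transitions}:
                arcs = [(s, t) for (s, e, t) in transitions if e == event]
                enters = all(s not in r and t in r for (s, t) in arcs)
                exits = all(s in r and t not in r for (s, t) in arcs)
                stays = all((s in r) == (t in r) for (s, t) in arcs)
                if not (enters or exits or stays):
                    return False
            return True

        # Toy TS: a two-state cycle under events "x" and "y".
        ts = [("s0", "x", "s1"), ("s1", "y", "s0")]
        print(is_region({"s1"}, ts))        # True: "x" always enters, "y" always exits
        print(is_region({"s0", "s1"}, ts))  # True: trivial region, nothing crosses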

    Pseudo-Boolean Constraint Encodings for Conjunctive Normal Form and their Applications

    In contrast to a single clause, a pseudo-Boolean (PB) constraint is much more expressive, and hence it is easier to define problems with the help of PB constraints. While PB constraints provide us with a high-level problem description, it has been shown that they can often be solved faster with the help of a SAT solver. To apply such a solver to a PB constraint, we have to encode it with clauses into conjunctive normal form (CNF). While we can find a basic encoding into CNF that is equivalent to a given PB constraint, the solving time of a SAT solver depends significantly on different properties of an encoding, e.g. the number of clauses, or whether generalized arc consistency (GAC) is maintained during the search for a solution. There are various PB encodings that try to optimize or balance these properties. This thesis is about such encodings. For a better understanding of the research field, an overview of the state-of-the-art encodings is given. The focus of the overview is a simple but complete description of each encoding, such that any reader can use, implement, and extend them in their own work. In addition, two novel encodings are presented: the Sequential Weight Counter (SWC) encoding and the Binary Merger encoding. While the SWC encoding has a very simple structure (it can be stated in four lines), empirical evaluation has shown its practical usefulness in various applications. The Binary Merger encoding reduces the number of clauses a PB encoding needs while maintaining the important GAC property. To the best of our knowledge, no other encoding with this property currently has a lower upper bound on the number of clauses it produces. This is an important improvement of the state of the art, since both GAC and a low number of clauses are vital for an improved solving time of the SAT solver. The thesis also contributes to the development of new applications for PB constraint encodings. The programming library PBLib provides researchers with an open-source implementation of almost all PB encodings, including the encodings for the special cases of at-most-one and cardinality constraints. The PBLib is also the foundation of the presented weighted MaxSAT solver optimax, the PBO solver pbsolver, and the WBO, PBO, and weighted MaxSAT solver npSolver.
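
    To illustrate how compact a sequential-weight-counter-style encoding is, the sketch below generates CNF clauses for a constraint w1*x1 + ... + wn*xn <= k from four clause schemas: monotone partial sums, the effect of setting x_i, propagation of the prefix sum, and an overflow clause. This is a reconstruction of the idea and may differ in detail from the thesis's exact formulation; the DIMACS-style integer literals and the assumption that every weight is at most k (a larger weight would simply force its variable false in preprocessing) are illustrative simplifications.

        def swc_encode(xs, ws, k):
            """xs: variables x_1..x_n as positive ints; ws: their positive
            weights (each assumed <= k); k: the bound. Returns (clauses,
            next_free_var). Auxiliary variable s[i][j-1] means: the weighted
            sum of the first i+1 variables that are set is at least j."""
            n = len(xs)
            top = max(xs) + 1                       # first free auxiliary variable
            s = [[top + i * k + (j - 1) for j in range(1, k + 1)] for i in range(n)]
            clauses = []
            for i in range(1, n + 1):
                w = ws[i - 1]
                for j in range(1, k + 1):
                    if i > 1:                       # (1) partial sums never decrease
                        clauses.append([-s[i - 2][j - 1], s[i - 1][j - 1]])
                    if j <= w:                      # (2) x_i alone pushes the sum to w
                        clauses.append([-xs[i - 1], s[i - 1][j - 1]])
                    if i > 1 and j + w <= k:        # (3) x_i adds w on top of the prefix
                        clauses.append([-s[i - 2][j - 1], -xs[i - 1], s[i - 1][j + w - 1]])
                if i > 1 and 1 <= k + 1 - w <= k:   # (4) forbid overflowing the bound k
                    clauses.append([-s[i - 2][k - w], -xs[i - 1]])
            return clauses, top + n * k

        # Example: 2*x1 + 3*x2 + 4*x3 <= 5 over variables 1, 2, 3.
        clauses, _ = swc_encode([1, 2, 3], [2, 3, 4], 5)
        print(len(clauses), "clauses")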

    A Model-based Approach for Designing Cyber-Physical Production Systems

    The most recent development trend in manufacturing is called "Industry 4.0". It proposes a transition from "blind" mechatronic systems to Cyber-Physical Production Systems (CPPSs). Such systems are capable of communicating with each other and of acquiring and transmitting real-time production data. Their management and control require a structured software architecture, which is typically referred to as the "Automation Pyramid". The design of both the software architecture and the components (i.e., the CPPSs) is a complex task, where the complexity is induced by the heterogeneity of the required functionalities. In this context, the target of this thesis is to propose a model-based framework for the analysis and design of production lines, compliant with the Industry 4.0 paradigm. In particular, this framework exploits the Systems Modeling Language (SysML) as a unified representation for the different viewpoints of a manufacturing system. At the component level, the structural and behavioral diagrams provided by SysML are used to produce a set of logical propositions about the system and components under design. This approach is specifically tailored towards constructing Assume-Guarantee contracts. By exploiting reactive synthesis techniques, contracts are used to prototype portions of the components' behaviors and to verify whether implementations are consistent with the requirements. At the software level, the framework proposes a particular architecture based on the concept of a "service". Such an architecture facilitates the reconfiguration of components and integrates an advanced scheduling technique that takes advantage of the production-recipe SysML model. The proposed framework was developed alongside the construction of the ICE Laboratory, a research facility consisting of a full-fledged production line. The approach has been used to construct models of the laboratory, to virtually prototype parts of the system, and to manage the physical system through the proposed software architecture.
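
    A minimal sketch of the Assume-Guarantee idea, over finite behavior sets rather than SysML-derived propositions, can illustrate the refinement check such frameworks rely on. Representing behaviors as plain sets, and the specific refinement rule below, are simplifying assumptions for this example.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Contract:
            universe: frozenset    # all possible behaviors of the component
            assume: frozenset      # environments the component is designed for
            guarantee: frozenset   # behaviors it promises under those assumptions

            def saturated_guarantee(self):
                # A contract implicitly guarantees anything when its
                # assumption fails: G' = G ∪ (universe \ A).
                return self.guarantee | (self.universe - self.assume)

            def refines(self, other):
                # C1 refines C2 iff C1 accepts at least C2's environments and
                # promises no more behaviors than C2 (after saturation).
                return (other.assume <= self.assume and
                        self.saturated_guarantee() <= other.saturated_guarantee())

        U = frozenset(range(8))
        spec = Contract(U, assume=frozenset({0, 1, 2, 3}), guarantee=frozenset({0, 1}))
        impl = Contract(U, assume=frozenset({0, 1, 2, 3, 4}), guarantee=frozenset({0}))
        print(impl.refines(spec))   # True: weaker assumption, stronger guarantee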

    Planning and Optimization During the Life-Cycle of Service Level Agreements for Cloud Computing

    A Service Level Agreement (SLA) is an electronic contract between the customer and the provider of a service. The participating partners clarify their expectations and obligations with respect to the service and its quality. SLAs are already being used to describe cloud computing services. The service provider ensures that the agreed service quality is met and remains consistent with the customer's requirements until the end of the agreed term. Carrying out SLAs requires considerable effort to achieve autonomy, economic viability, and efficiency. The current state of the art in SLA management faces challenges such as SLA representation for cloud services, business-driven SLA optimization, service outsourcing, and resource management. These areas constitute central and timely research topics. Managing SLAs in the different phases of their life-cycle requires a methodology developed for this purpose, which simplifies the realization of cloud SLA management. I present a broadly scoped model for SLA life-cycle management that addresses the challenges named above. This approach enables automatic service modeling as well as negotiation, provisioning, and monitoring of SLAs. For the creation phase, I outline how the modeling structures can be improved and simplified. A further goal of my approach is to minimize implementation and outsourcing costs in favor of competitiveness. For the SLA monitoring phase, I develop strategies for the selection and allocation of virtual cloud resources during migration, and I then use monitoring over a larger collection of SLAs to check whether the agreed fault tolerances are observed. This work contributes to a design by the GWDG and its scientific communities. The research leading to this doctoral thesis was carried out as part of the SLA@SOI EU/FP7 integrated project (contract No. 216556).
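
    The monitoring step described above, checking measured service metrics against the fault tolerances agreed in an SLA, can be sketched as follows. The metric names, bounds, and tolerance representation are illustrative assumptions, not the thesis's actual data model.

        def check_sla(agreed, measured):
            """agreed: metric -> (bound, tolerated_violation_rate);
            measured: metric -> list of samples. Reports metrics whose
            fraction of out-of-bound samples exceeds the agreed tolerance."""
            violations = {}
            for metric, (bound, tolerance) in agreed.items():
                samples = measured.get(metric, [])
                if not samples:
                    continue
                rate = sum(1 for v in samples if v > bound) / len(samples)
                if rate > tolerance:
                    violations[metric] = rate
            return violations

        agreed = {"response_ms": (200, 0.05), "cpu_load": (0.8, 0.10)}
        measured = {"response_ms": [120, 250, 180, 300, 150],  # 2/5 over bound
                    "cpu_load": [0.5, 0.6, 0.7]}
        print(check_sla(agreed, measured))   # {'response_ms': 0.4}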

    Logics for digital circuit verification: theory, algorithms, and applications


    Automated Synthesis of Unconventional Computing Systems

    Despite decades of advancements, modern computing systems based on the von Neumann architecture still carry its shortcomings. Moore's law, which had substantially masked the effects of the inherent memory-processor bottleneck of the von Neumann architecture, has slowed down as transistor dimensions near atomic sizes. At the same time, modern computational requirements, driven by machine learning, pattern recognition, artificial intelligence, data mining, and the IoT, are growing at the fastest pace ever. By their inherent nature, these applications are particularly affected by communication bottlenecks, because processing them requires a large number of simple operations involving data retrieval and storage. The need to address the problems of conventional computing systems at the fundamental level has given rise to several unconventional computing paradigms. In this dissertation, we have made advancements in the automated synthesis of two such paradigms: in-memory computing and stochastic computing. In-memory computing circumvents the problem of limited communication bandwidth by unifying processing and storage at the same physical locations. The advent of nanoelectronic devices in the last decade has made in-memory computing an energy-, area-, and cost-effective alternative to conventional computing. We have used Binary Decision Diagrams (BDDs) for in-memory computing on memristor crossbars. Specifically, we have used Free-BDDs, a special class of binary decision diagrams, for synthesizing crossbars for flow-based in-memory computing. Stochastic computing is a re-emerging discipline with several-times-smaller area and power requirements compared to conventional computing systems. It is especially suited for fault-tolerant applications like image processing, artificial intelligence, and pattern recognition. We have proposed a decision-procedure-based iterative algorithm to synthesize Linear Finite State Machines (LFSMs) for stochastically computing non-linear functions such as polynomials, exponentials, and hyperbolic functions.
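
    To illustrate the stochastic-computing side, the sketch below shows the classic bitstream representation underlying such systems: a value in [0, 1] is encoded as a random bitstream, and a single AND gate then multiplies two values. The stream length and inputs are illustrative; the dissertation's LFSM-based synthesis of non-linear functions builds on this same representation.

        import random

        def to_stream(p, n, rng):
            """Encode probability p as a length-n bitstream with bit-density p."""
            return [1 if rng.random() < p else 0 for _ in range(n)]

        def from_stream(bits):
            """Decode a bitstream back to a value: the fraction of ones."""
            return sum(bits) / len(bits)

        rng = random.Random(42)
        n = 100_000
        a, b = to_stream(0.5, n, rng), to_stream(0.4, n, rng)
        product = [x & y for x, y in zip(a, b)]    # one AND gate per bit pair
        print(round(from_stream(product), 2))      # ≈ 0.20 = 0.5 * 0.4

    Because each bit carries equally little information, a handful of flipped bits perturbs the decoded value only slightly, which is the source of the fault tolerance mentioned above.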