
    SMTBDD: New Form of BDD for Logic Synthesis

    The main purpose of the paper is to propose a new form of BDD, the SMTBDD diagram, together with methods of obtaining it and its basic features. The idea of using SMTBDD diagrams in logic synthesis dedicated to FPGA structures is presented. An SMTBDD diagram is created by cutting a BDD, which corresponds to a multiple decomposition. The essence of the proposed decomposition method rests on determining the number of necessary bound functions ‘g’ from the content of the root table connected with the appropriate SMTBDD diagram. The article presents methods of searching for non-disjoint decompositions using SMTBDD diagrams, and analyzes techniques for choosing cutting levels with a view to effective technology mapping. The paper also discusses the results of experiments which confirm the efficiency of the analyzed decomposition methods.
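    As a rough illustration of the decomposition step behind such methods (our own sketch, not code from the paper), the snippet below counts the distinct cofactor "columns" of a Boolean function for a chosen bound set; a cut through a BDD induces the same columns, and ceil(log2(mu)) bound functions ‘g’ suffice for mu distinct columns. All names are ours.

```python
from itertools import product
from math import ceil, log2

def column_multiplicity(f, bound_vars, free_vars):
    """Count the distinct cofactors of f over the bound set.

    f maps full input tuples to 0/1 (a truth table); each distinct
    'column' corresponds to a distinct sub-function below a BDD cut,
    and ceil(log2(mu)) bound functions g suffice for mu columns.
    """
    columns = set()
    for b in product((0, 1), repeat=bound_vars):
        # One column: f restricted to this bound-set assignment.
        columns.add(tuple(f[b + v] for v in product((0, 1), repeat=free_vars)))
    return len(columns)

# Toy function f(x0, x1, x2) = (x0 AND x1) OR x2 with bound set {x0, x1}.
f = {bits: (bits[0] & bits[1]) | bits[2] for bits in product((0, 1), repeat=3)}
mu = column_multiplicity(f, bound_vars=2, free_vars=1)
print(mu, "columns =>", ceil(log2(mu)), "bound function(s) g")  # 2 => 1
```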

    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
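    A minimal sketch of the fixpoint at the core of state-space generation, using explicit Python sets; a symbolic engine would instead keep `reached` and the transition relation as decision diagrams and compute each image with a relational product. The toy model below is our own, not from the paper.

```python
def reachable(initial, successors):
    """Explicit breadth-first state-space generation.

    initial: iterable of initial states; successors: state -> iterable.
    A symbolic engine keeps 'reached' and the transition relation as
    decision diagrams and computes the image with one relational
    product; Saturation instead fires events in a node-wise,
    inherently sequential order.
    """
    reached = set(initial)
    frontier = set(initial)
    while frontier:                       # fixpoint: stop when no new states
        frontier = {t for s in frontier for t in successors(s)} - reached
        reached |= frontier
    return reached

# Toy model: a counter modulo 8 that can increment or reset.
succ = lambda n: {(n + 1) % 8, 0}
print(sorted(reachable({0}, succ)))       # all 8 states are reachable
```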

    äșŒćˆ†æ±ș漚曳べç©șé–“èĄŒć‹•çȒćșŠă«ćŸșă„ăăƒ­ăƒŒă‚«ăƒ«ăƒ€ă‚€ăƒŠăƒŸăƒƒă‚Żăƒžăƒƒăƒ—ă‚’ćźŸèŁ…ćŻèƒœă«ă™ă‚‹æ‰‹æł•ă«é–ąă™ă‚‹ç ”ç©¶

    Autonomous vehicles (AVs) have been appearing on roads in rapidly increasing numbers in recent years. However, the safety of AVs is a significant concern that we must ensure. AVs use sensor information to achieve autonomy, but sensors such as cameras and lidar have limitations, and vehicles cannot rely on them entirely for safe navigation. To assist AVs with static information, high-definition maps (HD maps) can provide the complex static details of the surroundings needed for safe autonomy. However, while complex static information can be modeled using HD maps, detecting and maintaining dynamic information about traffic participants using the ego vehicle’s sensors alone remains a significant concern for safe navigation. Given these sensing limitations, Cooperative Intelligent Transport Systems (C-ITS) are one approach to facilitating vehicle navigation by sharing information between traffic participants. The C-ITS approach comprises various Intelligent Transportation System (ITS) station units, namely Personal, Vehicle, Road-side and Central ITS station units. A Local Dynamic Map (LDM) is a critical component in any ITS station’s facilities layer. The LDM is one way to maintain static and dynamic information about traffic participants in a consistent geometrical way, and it is a necessary facility in C-ITS for sharing sensor information between participating traffic agents. Moreover, it maintains information about objects that are either part of the traffic or influenced by it. The International Organization for Standardization (ISO) and the European Telecommunications Standards Institute (ETSI) have also made standardization efforts. Since its inception in the SAFESPOT project, implementations of the LDM have mostly been four-layer data organizations: Layers 1 and 2 maintain static and transient static information, while Layers 3 and 4 contain transient dynamic and highly dynamic data. Depending on the requirements, the LDM community has realized memory-based or database-based LDMs. We utilized decision diagrams to enhance the safety of traffic participants in the memory-/database-based LDM setup, using Shared Binary Decision Diagrams (SBDDs) and the granular properties of Geohash to detect near-miss situations, i.e. when vehicles come very close. Besides DynaMap, since the SAFESPOT project introduced the LDM there has been a common understanding that a database and a supported query language should be used to retrieve data from the LDM; hence, most implementations use different databases and query languages. Although the LDM community has explored LDM variants built on different databases, remarkably little emphasis has been given to the type of data stored in the LDM. This thesis attempts to fill this gap in the LDM to enhance the safety of moving vehicles. We propose a novel method of representing a vehicle’s future geographical occupancy information using a binary decision diagram (BDD). We show that sharing BDD-based information is consistent with the C-ITS nature of data sharing, since algebraic operations between the exchanged BDDs can confirm the possibility of future interaction. We calculated potential future occupancy using Kamm’s circle, demonstrated it in a ROS-based simulator, and modified the mid-point circle generation algorithm to find the BDD representing the set of Geohash cells enclosing Kamm’s circle.
    We also reported the data insertion and collision-avoidance check times of the linked-list-based BDD on the PostgreSQL database-based LDM. (Doctoral dissertation, Kyushu Institute of Technology; degree number ç”Ÿć·„ćšç”Č No. 449, conferred September 26, 2022.)
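    A hypothetical illustration of the near-miss check described above: each vehicle's predicted occupancy (e.g. the Geohash cells enclosing its Kamm's circle) is a set of cells, and with BDD-encoded sets the test is a single conjunction of the exchanged diagrams. The cell values and names below are invented.

```python
def occupancy_overlap(cells_a, cells_b):
    """Near-miss test over exchanged occupancy sets.

    Each set holds Geohash cell identifiers covering a vehicle's
    predicted future occupancy. With BDD-encoded sets, this
    intersection is a single AND of the two diagrams; a non-false
    result signals a possible future interaction.
    """
    return bool(cells_a & cells_b)

# Hypothetical Geohash cells around two vehicles' predictions.
ego   = {"wecnv250", "wecnv251", "wecnv254"}
other = {"wecnv254", "wecnv255"}
print(occupancy_overlap(ego, other))   # True: cell wecnv254 is shared
```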

    Multi-core Decision Diagrams

    Decision diagrams are fundamental data structures that revolutionized fields such as model checking, automated reasoning and decision processes. As performance gains in the current era mostly come from parallel processing, an ongoing challenge is to develop data structures and algorithms for modern multi-core architectures. This chapter describes the parallelization of decision diagram operations, as implemented in the parallel decision diagram package Sylvan, which allows sequential algorithms that use decision diagrams to exploit the power of multi-core machines.
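    A sequential Python sketch of the recursive APPLY operation that packages like Sylvan parallelize; the comments mark where a work-stealing implementation would spawn and sync tasks against a shared unique table and operation cache. The node encoding is our own.

```python
from functools import lru_cache

# A node is either a bool (terminal) or a tuple (level, low, high).
def mk(level, low, high, unique={}):
    """Hash-consing 'unique table'; shared by all workers in Sylvan."""
    if low == high:
        return low
    return unique.setdefault((level, low, high), (level, low, high))

@lru_cache(maxsize=None)          # stands in for the operation cache
def apply_op(op, u, v):
    """Recursive APPLY(op, u, v); a work-stealing engine forks one of
    the two recursive calls as a task and joins it before mk()."""
    if isinstance(u, bool) and isinstance(v, bool):
        return op(u, v)
    lu = u[0] if not isinstance(u, bool) else float("inf")
    lv = v[0] if not isinstance(v, bool) else float("inf")
    top = min(lu, lv)
    u0, u1 = (u[1], u[2]) if lu == top else (u, u)
    v0, v1 = (v[1], v[2]) if lv == top else (v, v)
    low = apply_op(op, u0, v0)    # in Sylvan: SPAWNed as a stealable task
    high = apply_op(op, u1, v1)   # computed by this worker meanwhile
    return mk(top, low, high)     # SYNC joins the spawned task here

x0 = mk(0, False, True)           # the function x0
x1 = mk(1, False, True)           # the function x1
conj = apply_op(lambda a, b: a and b, x0, x1)
print(conj)                       # (0, False, (1, False, True)) == x0 & x1
```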

    BDD Algorithms and Cache Misses

    Within the last few years, CPU speed has greatly overtaken memory speed. For this reason, implementations of symbolic algorithms - with their extensive use of pointers and hashing - must be reexamined. In this paper, we introduce the concept of cache miss complexity as an analytical tool for evaluating algorithms that depend on pointer chasing. Such algorithms are typical of the symbolic computation found in verification. We show how this measure suggests new data structures and algorithms for multi-terminal BDDs. Our ideas have been implemented in a BDD package, which is used in a decision procedure for the Monadic Second-order Logic on strings. Experimental results show that on large examples involving e.g. the verification of concurrent programs, our implementation runs 4 to 5 times faster than a widely used BDD implementation. We believe that the method of cache miss complexity is of general interest to any implementor of symbolic algorithms used in verification.
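    The sketch below illustrates the general kind of layout such an analysis suggests, not the paper's exact design: nodes live in flat arrays addressed by integer id, so following an edge is an indexed array read with predictable spatial locality rather than a heap-pointer dereference.

```python
from array import array

class NodeTable:
    """BDD nodes in flat arrays, addressed by integer id.

    Ids 0 and 1 are the terminals; each interior node i stores
    (var[i], low[i], high[i]) in three parallel arrays. This is a
    generic cache-conscious layout, not the paper's multi-terminal
    data structure.
    """
    def __init__(self):
        self.var = array("l", [-1, -1])    # terminals carry no variable
        self.low = array("l", [0, 1])
        self.high = array("l", [0, 1])

    def add(self, var, low, high):
        self.var.append(var)
        self.low.append(low)
        self.high.append(high)
        return len(self.var) - 1

    def eval(self, node, assignment):
        while node > 1:                    # walk down to a terminal
            node = (self.high if assignment[self.var[node]]
                    else self.low)[node]
        return bool(node)

t = NodeTable()
n1 = t.add(1, 0, 1)                        # x1
n0 = t.add(0, 0, n1)                       # x0 AND x1
print(t.eval(n0, {0: True, 1: True}))      # True
```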

    On graph algorithms for large-scale graphs

    The algorithmic challenges have changed in the last decade due to the rapid growth of the data set sizes that need to be processed.
New types of algorithms on large graphs like social graphs, computer networks, or state transition graphs have emerged to overcome the problem of ever-increasing data sets. In this thesis, we investigate two approaches to this problem. Implicit algorithms utilize lossless compression of data to reduce the size and directly work on this compressed representation to solve optimization problems. In the case of graphs, we are dealing with the characteristic function of the edge set, which can be represented by Ordered Binary Decision Diagrams (OBDDs), a well-known data structure for Boolean functions. We develop a new technique to prove upper and lower bounds on the size of OBDDs representing graphs and apply this technique to several graph classes to obtain (almost) optimal bounds. A small input OBDD size is absolutely essential for dealing with large graphs, but we also need algorithms that avoid large intermediate results during the computation. For this purpose, we design algorithms for a specific graph class that exploit the encoding of the nodes that we use for the results on the OBDD sizes. In addition, we lay the foundation for the theory of randomization in OBDD-based algorithms by investigating what kind of randomness is feasible and how to design algorithms with it. As a result, we present two randomized algorithms that outperform known deterministic algorithms on many input instances. Streaming algorithms are another approach to dealing with large graphs. In this model, the edges of the graph are presented one-by-one in a stream of insertions or deletions, and the algorithms are permitted to use only a limited amount of memory. Often, the solution to an optimization problem on graphs can require up to a linear amount of space with respect to the number of nodes, resulting in a trivial lower bound on the space requirement of any streaming algorithm for those problems. Computing a matching, i.e., a subset of edges where no two edges are incident to a common node, is an example which has recently attracted a lot of attention in the streaming setting. If we are interested only in the size (or weight, in the case of weighted graphs) of a matching, it is possible to break this linear bound. We focus on so-called dynamic graph streams where edges can be inserted and deleted, and reduce the problem of estimating the weight of a matching to the problem of estimating the size of a maximum matching, with a small loss in the approximation factor. In addition, we present the first dynamic graph stream algorithm for estimating the size of a matching in graphs which are locally sparse. On the negative side, we prove a space lower bound for streaming algorithms that estimate the size of a maximum matching with a small approximation factor.
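    As a small illustration of the implicit representation discussed above (our own toy, with the function kept as an explicit set rather than an OBDD): a graph is the characteristic function of its edge set over the interleaved binary encodings of its node numbers, and an OBDD over these variables is the object whose size the thesis bounds.

```python
def edge_characteristic(n_bits, edges):
    """Characteristic function of an edge set.

    Maps each edge (u, v) to the interleaved bit tuple
    (x_0, y_0, ..., x_{k-1}, y_{k-1}) of the endpoints' binary codes;
    an OBDD over this variable order is the implicit graph
    representation (kept here as an explicit set for illustration).
    """
    def bits(v):
        return [(v >> i) & 1 for i in reversed(range(n_bits))]

    table = set()
    for u, v in edges:
        interleaved = [b for pair in zip(bits(u), bits(v)) for b in pair]
        table.add(tuple(interleaved))
    return table

# A 4-node path 0-1-2-3, nodes encoded with 2 bits.
chi = edge_characteristic(2, [(0, 1), (1, 2), (2, 3)])
print((0, 0, 0, 1) in chi)   # True: (u=0, v=1) is an edge
```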

    JINC - A Multi-Threaded Library for Higher-Order Weighted Decision Diagram Manipulation

    Ordered Binary Decision Diagrams (OBDDs) have been proven to be an efficient data structure for symbolic algorithms. The efficiency of symbolic methods depends on the underlying OBDD library. Available OBDD libraries are based on the standard concepts and so far differ only in implementation details. This thesis introduces new techniques to increase the run-time and space-efficiency of an OBDD library. It introduces the framework of Higher-Order Weighted Decision Diagrams (HOWDDs) to combine the similarities of different OBDD variants. This framework pioneers the basis for the new variant Toggling Algebraic Decision Diagrams (TADDs), which has been shown to be a space-efficient HOWDD variant for symbolic matrix representation. The concept of HOWDDs has been used to implement the OBDD library JINC. This thesis also analyzes the usage of multi-threading techniques to speed up OBDD manipulations. A new reordering framework applies the advantages of multi-threading techniques to reordering algorithms. This approach uses an abstraction layer so that the original reordering algorithms are not touched. The challenge arising from a straightforward parallelization is that the computed-tables and the garbage collection are not as efficient as in a single-threaded environment. We resolve this problem by developing a new multi-operand APPLY algorithm that eliminates the creation of temporary nodes which could occur during computation and thus reduces the need for caching or garbage collection. The HOWDD framework leads to an efficient library design which has been shown to be more efficient than the established OBDD library CUDD. The HOWDD instance TADD reduces the needed number of nodes by a factor of two compared to ordinary ADDs. The new multi-threading approaches are more efficient than single-threaded approaches by several factors, and in the case of the new reordering framework the speed-up almost equals the theoretical optimum. The novel multi-operand APPLY algorithm reduces the memory usage for the n-queens problem by a factor of 50, which enables the calculation of bigger problem instances than the traditional APPLY approach allows. The new approaches improve performance and reduce the memory footprint, leading to the conclusion that applications should be reviewed as to whether they could benefit from the new multi-threading and multi-operand approaches introduced and discussed in this thesis.
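    A sketch of the multi-operand APPLY idea under our own node encoding: all operands are descended in one recursion, so the intermediate diagrams of a pairwise chain (f1 op f2, then (f1 op f2) op f3, ...) never materialize. Hash-consing of result nodes is omitted for brevity.

```python
from functools import reduce

# Nodes: True/False terminals or (level, low, high) tuples.
def cofactors(u, level):
    if isinstance(u, bool) or u[0] != level:
        return u, u
    return u[1], u[2]

def apply_many(op, ops, cache=None):
    """Multi-operand APPLY: one descent over all operands at once.

    Unlike a chain of binary APPLYs, no intermediate diagram is ever
    built for a partial result, which is the temporary-node saving
    the thesis describes (sketched without a unique table).
    """
    cache = {} if cache is None else cache
    if all(isinstance(u, bool) for u in ops):
        return reduce(op, ops)
    if ops not in cache:
        level = min(u[0] for u in ops if not isinstance(u, bool))
        pairs = [cofactors(u, level) for u in ops]
        low = apply_many(op, tuple(p[0] for p in pairs), cache)
        high = apply_many(op, tuple(p[1] for p in pairs), cache)
        cache[ops] = low if low == high else (level, low, high)
    return cache[ops]

x = [(i, False, True) for i in range(3)]          # x0, x1, x2
conj = apply_many(lambda a, b: a and b, tuple(x))
print(conj)   # (0, False, (1, False, (2, False, True)))
```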

    Extending the Finite Domain Solver of GNU Prolog

    This paper describes three significant extensions of the Finite Domain solver of GNU Prolog. First, the solver now supports negative integers. Second, the solver detects and prevents integer overflows from occurring. Third, the internal representation of sparse domains has been redesigned to overcome its previous limitations. A preliminary performance evaluation shows a limited slowdown factor with respect to the initial solver. This factor is widely counterbalanced by the new possibilities and the robustness of the solver. Furthermore, these results are preliminary and we propose some directions to limit this overhead.
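    A generic sketch of a sparse finite domain as an ordered list of disjoint intervals, which naturally accommodates negative integers and large gaps; this illustrates the concept only, not GNU Prolog's actual redesigned representation.

```python
import bisect

class SparseDomain:
    """A finite domain as an ordered list of disjoint [lo, hi] ranges.

    An interval list handles negative values and sparse domains
    compactly; a generic sketch, not GNU Prolog's data structure.
    """
    def __init__(self, ranges):
        self.ranges = sorted(ranges)            # e.g. [(-5, -2), (7, 9)]

    def __contains__(self, v):
        i = bisect.bisect_right(self.ranges, (v, float("inf"))) - 1
        return i >= 0 and self.ranges[i][0] <= v <= self.ranges[i][1]

    def remove(self, v):
        """Narrowing step: remove one value, splitting a range if needed."""
        out = []
        for lo, hi in self.ranges:
            if lo <= v <= hi:
                if lo < v: out.append((lo, v - 1))
                if v < hi: out.append((v + 1, hi))
            else:
                out.append((lo, hi))
        self.ranges = out

d = SparseDomain([(-5, -2), (7, 9)])
d.remove(8)
print(-3 in d, 8 in d, d.ranges)   # True False [(-5, -2), (7, 7), (9, 9)]
```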

    Report / Institut fĂŒr Physik

    The 2016 Report of the Physics Institutes of the UniversitÀt Leipzig presents a hopefully interesting overview of our research activities in the past year. It is also a testimony to our scientific interaction with colleagues and partners worldwide. We are grateful to our guests for enriching our academic year with their contributions in the colloquium and within our work groups.