
    Advances in Functional Decomposition: Theory and Applications

    Functional decomposition aims to find efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing. This dissertation presents novel heuristic algorithms for functional decomposition. These algorithms exploit suitable representations of the Boolean functions in order to be efficient. The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a Reduced Ordered Binary Decision Diagram (BDD). Unlike other BDD-based algorithms, the ones presented here can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, particularly BDD reordering. The third algorithm also finds disjoint-support decompositions, but is based on a technique that integrates circuit graph analysis and BDD-based decomposition. The combination of the two approaches yields an algorithm that is more robust than a purely BDD-based one and improves both the quality of the results and the running time. The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to that of computing multiple-vertex dominators, and we prove that multiple-vertex dominators can be found in polynomial time. This result is important because no polynomial-time algorithm is known for computing all non-disjoint decompositions of a Boolean function. The fifth algorithm provides an efficient means to decompose a function at the circuit graph level, using information derived from a BDD representation, without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches. Finally, we present two publications that resulted from the many detours we have taken along the winding path of our research.
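
    The reduction to multiple-vertex dominators can be pictured via the single-vertex special case. The Python sketch below is illustrative and not taken from the dissertation: for a toy circuit graph whose edges are directed from the output towards the inputs, it computes the vertices lying on every output-to-input path, using the classic iterative dominator fixpoint. A multiple-vertex dominator generalizes this to a small set of vertices that jointly cut all such paths.

        def dominators(succ, root):
            """Single-vertex dominators in a rooted DAG (illustrative).

            succ maps each node to its successors. Returns dom[v], the set
            of nodes lying on every path from root to v (including v).
            """
            nodes = set(succ) | {w for ws in succ.values() for w in ws}
            pred = {v: set() for v in nodes}
            for v in succ:
                for w in succ[v]:
                    pred[w].add(v)
            dom = {v: set(nodes) for v in nodes}   # start from "everything"
            dom[root] = {root}
            changed = True
            while changed:                         # iterate to a fixpoint
                changed = False
                for v in nodes - {root}:
                    ps = [dom[p] for p in pred[v]]
                    new = {v} | (set.intersection(*ps) if ps else set())
                    if new != dom[v]:
                        dom[v], changed = new, True
            return dom

        # Toy circuit, edges directed from the output towards the inputs:
        # every path from the output to input b passes through g1 and g2.
        g = {"out": ["g1"], "g1": ["a", "g2"], "g2": ["b", "c"]}
        print(sorted(dominators(g, "out")["b"]))   # ['b', 'g1', 'g2', 'out']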

    High-level synthesis of fine-grained weakly consistent C concurrency

    High-level synthesis (HLS) is the process of automatically compiling high-level programs into a netlist (a collection of gates). Given an input program, HLS tools exploit its inherent parallelism and pipelining opportunities to generate efficient customised hardware. C-based programs are the most popular input for HLS tools, but these tools historically only synthesise sequential C programs. As the appeal of software concurrency grows, HLS tools are beginning to synthesise concurrent C programs, such as C/C++ pthreads and OpenCL. Although supporting software concurrency leads to better hardware parallelism, shared-memory synchronisation is typically serialised via locks to ensure correct memory behaviour. Locks are safety mechanisms that ensure exclusive access to shared memory, eliminating data races and providing synchronisation guarantees for programmers. As an alternative to lock-based synchronisation, the C memory model also defines the possibility of lock-free synchronisation via fine-grained atomic operations (`atomics'). However, most HLS tools either do not support atomics at all or implement atomics using locks. Instead, we treat the synthesis of atomics as a scheduling problem. We show that we can augment the intra-thread memory constraints during memory scheduling of concurrent programs to support atomics. On average, hardware generated by our method is 7.5x faster than the state of the art, for our set of experiments. Our method of synthesising atomics enables several unique possibilities. Chiefly, we can support weakly consistent (`weak') atomics, which require fewer ordering constraints than sequentially consistent (SC) atomics. However, implementing weak atomics is complex and error-prone, and hence we formally verify our methods via automated model checking to ensure our generated hardware is correct. Furthermore, since the C memory model defines memory behaviour globally, we can analyse the entire program globally to generate its memory constraints. Additionally, we can support loop pipelining by extending our methods to generate inter-iteration memory constraints. On average, weak atomics, global analysis and loop pipelining improve performance by 1.6x, 3.4x and 1.4x respectively, for our set of experiments. Finally, we present a case study of a real-world example, an HLS-based Google PageRank algorithm, whose performance improves by 4.4x via lock-free streaming and work-stealing.
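
    To make the scheduling view concrete, here is a minimal Python sketch, not drawn from the thesis, of how intra-thread ordering edges might be derived from C11 memory orders; the MemOp type and the deliberately simplified rules are illustrative assumptions. Relaxed atomics contribute no edges, which is exactly why weak atomics schedule more freely than SC atomics.

        from dataclasses import dataclass

        @dataclass
        class MemOp:
            kind: str    # 'load' or 'store'
            order: str   # 'plain', 'relaxed', 'acquire', 'release', 'seq_cst'

        def intra_thread_edges(ops):
            """Ordering edges for memory scheduling (simplified C11 reading).

            Returns pairs (i, j), i < j in program order, meaning op i must
            be scheduled before op j: SC atomics keep program order among
            themselves, an acquire load fences everything after it, and a
            release store fences everything before it.
            """
            edges = set()
            for i, a in enumerate(ops):
                for j in range(i + 1, len(ops)):
                    b = ops[j]
                    if a.order == 'seq_cst' and b.order == 'seq_cst':
                        edges.add((i, j))
                    if a.kind == 'load' and a.order in ('acquire', 'seq_cst'):
                        edges.add((i, j))
                    if b.kind == 'store' and b.order in ('release', 'seq_cst'):
                        edges.add((i, j))
            return edges

        # A relaxed store and a plain load may reorder freely with each
        # other, but nothing may move above the acquire load at index 0.
        ops = [MemOp('load', 'acquire'), MemOp('store', 'relaxed'),
               MemOp('load', 'plain')]
        print(sorted(intra_thread_edges(ops)))   # [(0, 1), (0, 2)]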

    Timing Aware Partitioning for Multi-FPGA based Logic Simulation using Top-down Selective Flattening

    In order to accelerate logic simulation, it is highly beneficial to simulate the circuit design on FPGA hardware. However, limited hardware resources prevent large designs from being implemented on a single FPGA, so the design must be partitioned and simulated on a multi-FPGA platform. In contrast to existing FPGA-based post-synthesis partitioning approaches, which first completely flatten the circuit and then possibly perform bottom-up clustering, we perform a selective top-down flattening and thereby avoid the potential netlist blowup. This also allows us to preserve the design hierarchy to guide the partitioning and to make subsequent debugging easier. Our approach analyzes the hierarchical design and selectively flattens instances using two slack-based metrics. The resulting partially flattened netlist is converted to a hypergraph, partitioned using a public-domain partitioner (hMetis), and reconverted into a plurality of FPGA netlists, one for each FPGA of the accelerated logic simulation platform. We compare our approach with a partitioning approach that operates on a completely flattened netlist. Static timing analysis was performed for both approaches; over 15 examples from the OpenCores project, our approach yields a 52% logic simulation speedup and about 0.74x runtime for the entire flow, compared to the completely flat approach. The entire tool chain is automated in an end-to-end flow spanning hierarchy extraction, selective flattening, partitioning, and netlist reconstruction. Compared to an existing method that also performs slack-based partitioning of a hierarchical netlist, we obtain a 35% simulation speedup.
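
    As a rough illustration of the top-down policy, here is a hedged Python sketch, not taken from the paper: it collapses the paper's two slack-based metrics into a single slack threshold, and the Instance type is a hypothetical stand-in for the hierarchical netlist. Timing-critical instances are flattened so the partitioner sees their internals; non-critical instances stay opaque, avoiding netlist blowup.

        from dataclasses import dataclass, field

        @dataclass
        class Instance:
            name: str
            slack: float                    # worst-case timing slack (ns)
            children: list = field(default_factory=list)

        def selective_flatten(inst, slack_threshold):
            """Top-down selective flattening (illustrative).

            Returns the list of blocks handed to the hypergraph
            partitioner: sub-hierarchies with comfortable slack are kept
            as single opaque blocks, critical ones are opened up.
            """
            if inst.slack >= slack_threshold or not inst.children:
                return [inst]               # keep as one hierarchy block
            blocks = []
            for child in inst.children:     # flatten one level, recurse
                blocks.extend(selective_flatten(child, slack_threshold))
            return blocks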

    SAT-based optimal hypergraph partitioning with replication

    We propose a methodology for optimal k-way partitioning with replication of directed hypergraphs via Boolean satisfiability. We begin by leveraging the power of existing and emerging SAT solvers to attack traditional logic bipartitioning, and show good scaling behavior. We then present the first optimal partitioning results that admit generation and assignment of replicated nodes concurrently. Our framework is general enough that we also give the first published optimal results for partitioning with respect to the maximum subdomain degree metric and the sum of external degrees metric. We show that for the bipartitioning case we can feasibly solve problems of up to 150 nodes with simultaneous replication in hundreds of seconds. For the other partitioning metrics, we can solve problems of up to 40 nodes in hundreds of seconds.
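
    One plausible shape for such an encoding, sketched in Python below as an assumption rather than the paper's actual formulation: each node gets two variables (present in part A, present in part B; both true means the node is replicated), each directed hyperedge gets a cut indicator, and a cardinality bound over the cut indicators (omitted here) drives the search for the optimum.

        def bipartition_with_replication_cnf(n_nodes, hyperedges):
            """CNF sketch for bipartitioning with replication.

            Nodes are 1-indexed. Variables: a(v) = v in part A,
            b(v) = v in part B, c_e = hyperedge e is cut. Clauses are
            DIMACS-style lists of signed ints for any off-the-shelf
            SAT solver.
            """
            a = lambda v: v                     # vars 1..n
            b = lambda v: n_nodes + v           # vars n+1..2n
            cut0 = 2 * n_nodes                  # cut vars follow
            clauses = []
            for v in range(1, n_nodes + 1):
                clauses.append([a(v), b(v)])    # every node placed somewhere
            for i, (src, sinks) in enumerate(hyperedges):
                c = cut0 + i + 1
                for t in sinks:
                    # a sink in a part where the source is absent cuts e;
                    # replicating src into both parts satisfies both rows
                    clauses.append([-a(t), a(src), c])
                    clauses.append([-b(t), b(src), c])
            return clauses

        # Two nodes, one hyperedge from node 1 to node 2.
        print(bipartition_with_replication_cnf(2, [(1, [2])]))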

    A survey of DA techniques for PLD and FPGA based systems

    Programmable logic devices (PLDs) have lately been gaining acceptance for designing systems of all complexities, ranging from glue logic to special-purpose parallel machines. Higher densities and integration levels are made possible by the new breed of complex PLDs and FPGAs. The added complexity of these devices makes automated computer-aided design tools indispensable for achieving good performance and a high usable gate count. In this article, we attempt to present in a unified manner the different tools and their underlying algorithms, using a vending machine controller as an illustrative example. Topics covered include logic synthesis for PLDs and FPGAs, along with an in-depth survey of important technology mapping, partitioning, and place-and-route algorithms for different FPGA architectures.

    Partitioning a given circuit targeting multiple FPGAs

    Our approach to partitioning a design (represented as a hypergraph) across multiple FPGAs is bi-level: we first cluster the design and then apply a bipartitioning technique iteratively. Each partition generated by the iterative bipartitioning must meet the constraints imposed by each FPGA's I/O pin count and number of CLBs. Traditional Fiduccia-Mattheyses (FM) partitioning can be applied to partition the circuit into multiple FPGAs. FM partitioning aims to minimize the number of interconnections, but it fails to group the most strongly interconnected nodes into one partition: the FM algorithm takes a global view of the partitioning problem and neglects local structure. The proposed algorithm adds another level of optimization to the partitioning heuristic. By clustering the nodes that are most closely connected in the netlist before partitioning, a local optimization property is added to the FM algorithm, as sketched below. The clustered circuit is then bipartitioned using the Fiduccia-Mattheyses algorithm to implement the design in multiple FPGAs. (Abstract shortened by UMI.)
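
    A minimal Python sketch of one such pre-clustering pass, assuming a netlist given as a list of nets (each a list of node names); greedy merging by shared-net count is our illustrative stand-in for the clustering heuristic, not the thesis's exact method. The resulting clusters would then be fed to an FM bipartitioner.

        from collections import defaultdict

        def cluster_netlist(nets, max_cluster_size=4):
            """Greedily merge the most strongly connected cluster pair
            until no merge fits within the size bound (illustrative)."""
            cluster_of, members = {}, {}
            for net in nets:
                for v in net:
                    if v not in cluster_of:
                        cluster_of[v] = v
                        members[v] = {v}
            while True:
                weight = defaultdict(int)   # shared nets per cluster pair
                for net in nets:
                    cs = sorted({cluster_of[v] for v in net})
                    for i in range(len(cs)):
                        for j in range(i + 1, len(cs)):
                            weight[(cs[i], cs[j])] += 1
                candidates = [(w, p) for p, w in weight.items()
                              if len(members[p[0]]) + len(members[p[1]])
                              <= max_cluster_size]
                if not candidates:
                    return members          # clusters handed to FM
                _, (ca, cb) = max(candidates)
                members[ca] |= members.pop(cb)   # merge closest pair
                for v in members[ca]:
                    cluster_of[v] = ca

        nets = [["u1", "u2"], ["u1", "u2"], ["u2", "u3"], ["u4", "u3"]]
        print(cluster_netlist(nets, max_cluster_size=2))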

    A configurable decoder for pin-limited applications

    Pin limitation is the restriction imposed on an IC chip by the unavailability of a sufficient number of I/O pins. It impacts the design and performance of the chip, as the amount of information that can pass through the chip boundary becomes limited. One area that would benefit from reducing the effect of pin limitation is reconfigurable architectures. In this work, we consider reconfigurable devices called Field Programmable Gate Arrays (FPGAs). Due to pin limitation, current FPGAs use a form of 1-hot decoder to select elements (one frame at a time) during partial reconfiguration. This results in a slow and coarse selection of elements for reconfiguration. We propose a module that performs a focused selection of only those elements that require reconfiguration; this reduces reconfiguration overheads and enables the speeds needed for dynamic reconfiguration. The problem is that of selecting subsets of an n-element set in a fast, focused, and inexpensive manner. This thesis proposes such a configurable decoder, bridging the gap between the inexpensive but inflexible fixed 1-hot decoder and the expensive but flexible pure LUT-based decoder. Our configurable decoder uses a low-cost LUT with a narrow output in tandem with a special fixed decoder, called a mapping unit, that expands the output of the LUT to the desired n-bit output. We demonstrate several implementations of the mapping unit, each with different capabilities and trade-offs. A key result of this work is that for any gate cost G = O(n log^k n) (where k is a constant), if a pure LUT-based solution produces λ independent subsets, then our method produces Ω(λ log n / log log n) independent subsets for the same cost. Our decoder also produces many more dependent subsets (subsets that depend on the choice of the Ω(λ log n / log log n) independent subsets). We provide simulation results for the configurable decoder and predict future trends from the simulation data; these confirm the theoretical advantages of the proposed decoder. We illustrate the implementation of important subset classes on our configurable decoder and make key observations on a generalized variant.
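
    The LUT-plus-mapping-unit split can be modeled in a few lines. The Python sketch below is a toy assumption, not one of the thesis's actual mapping-unit implementations: the narrow LUT word selects which fixed n-bit base masks get OR-ed together, so a w-bit LUT output can address unions of w base subsets; base_masks and the LUT contents are hypothetical.

        def make_decoder(n, base_masks, lut):
            """Toy LUT + mapping-unit decoder (illustrative).

            lut maps a short address to a narrow word; the fixed
            "mapping unit" expands that word to an n-bit selection by
            OR-ing the base masks picked out by its set bits.
            """
            width = len(base_masks)          # narrow LUT width << n
            def decode(address):
                word = lut[address]          # narrow LUT lookup
                out = 0
                for bit in range(width):     # fixed expansion to n bits
                    if word & (1 << bit):
                        out |= base_masks[bit]
                return out & ((1 << n) - 1)
            return decode

        # Select subsets of 8 elements through a 3-bit-wide LUT.
        decode = make_decoder(8, [0b00001111, 0b11110000, 0b10101010],
                              {0: 0b001, 1: 0b010, 2: 0b011, 3: 0b101})
        print(f"{decode(2):08b}")   # 11111111: union of the first two masks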