    Mapping constrained optimization problems to quantum annealing with application to fault diagnosis

    Current quantum annealing (QA) hardware suffers from practical limitations such as finite temperature, sparse connectivity, small qubit numbers, and control error. We propose new algorithms for mapping Boolean constraint satisfaction problems (CSPs) onto QA hardware that mitigate these limitations. In particular, we develop a new embedding algorithm for mapping a CSP onto a hardware Ising model with a fixed sparse set of interactions, and propose two new decomposition algorithms for solving problems too large to map directly into hardware. The mapping technique is locally structured: hardware-compatible Ising models are generated for each problem constraint, and variables appearing in different constraints are chained together using ferromagnetic couplings. In contrast, global embedding techniques generate a hardware-independent Ising model for all the constraints and then use a minor-embedding algorithm to generate a hardware-compatible Ising model. We give an example of a class of CSPs for which the scaling performance of D-Wave's QA hardware using the local mapping technique is significantly better than with global embedding. We validate the approach by applying D-Wave's hardware to circuit-based fault diagnosis. For circuits that embed directly, we find that the hardware is typically able to find all solutions from a min-fault diagnosis set of size N using 1000N samples, at an annealing rate 25 times faster than a leading SAT-based sampling method. Further, we apply the decomposition algorithms to find min-cardinality faults for circuits that are up to 5 times larger than can be solved directly on current hardware. Comment: 22 pages, 4 figures.
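
    As a rough illustration of the locally-structured mapping described above, the sketch below builds a hardware Ising model by summing small per-constraint models and adding ferromagnetic couplings between copies of any CSP variable shared across constraints. The data layout, function name, and chain strength are illustrative assumptions, not the paper's actual construction.

# Minimal sketch of the locally-structured mapping idea: each constraint gets
# its own small Ising model on a patch of hardware qubits, and copies of a CSP
# variable appearing in different patches are "chained" with ferromagnetic
# couplings so they agree in low-energy states. The chain strength below is a
# hypothetical value, not taken from the paper.

FERRO = -2.0  # ferromagnetic coupling strength (assumed)

def chain_constraints(constraint_models):
    """constraint_models: list of (h, J, var_to_qubit) triples, one per constraint.
    h: {qubit: field}, J: {(q1, q2): coupling}, var_to_qubit: {csp_var: qubit}."""
    h, J = {}, {}
    copies = {}  # csp_var -> list of hardware qubits representing it
    for h_c, J_c, var_map in constraint_models:
        for q, field in h_c.items():
            h[q] = h.get(q, 0.0) + field
        for pair, coup in J_c.items():
            J[pair] = J.get(pair, 0.0) + coup
        for var, q in var_map.items():
            copies.setdefault(var, []).append(q)
    # Chain all copies of the same CSP variable with ferromagnetic couplings.
    for var, qubits in copies.items():
        for q1, q2 in zip(qubits, qubits[1:]):
            J[(q1, q2)] = J.get((q1, q2), 0.0) + FERRO
    return h, J

# Two toy constraints sharing CSP variable "x":
m1 = ({0: 0.5, 1: -0.5}, {(0, 1): 1.0}, {"x": 0, "y": 1})
m2 = ({2: -0.5, 3: 0.5}, {(2, 3): 1.0}, {"x": 2, "z": 3})
print(chain_constraints([m1, m2]))

    In low-energy states the negative coupling forces the chained copies to take the same spin value, so each chain behaves as a single logical variable.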

    Two-dimensional placement compaction using an evolutionary approach: a study

    The problem of placing two-dimensional objects on planar surfaces while optimizing given utility functions is a combinatorial optimization problem. Our main aim is to survey genetic algorithms and hybrid metaheuristics in terms of the area compaction of the final placement. Furthermore, a new hybrid evolutionary approach, combining a genetic algorithm with a non-linear compaction method, is introduced and compared with heuristics from the literature using both randomly generated instances and benchmark problems. A wide variety of experiments is conducted, and the respective results and discussions are presented. Finally, conclusions are drawn and future research directions are outlined.
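
    A minimal sketch of the hybrid idea discussed above, assuming a permutation encoding: a genetic algorithm searches over packing orders, and every candidate is passed through a compaction/placement step before its area is evaluated. The shelf packing used here is only a stand-in for the paper's non-linear compaction method, and all parameters (population size, mutation, strip width) are assumptions.

# Toy GA-plus-compaction loop: candidates are packing orders; fitness is the
# bounding-box area after a simple shelf placement (a stand-in compactor).

import random

def shelf_pack(order, rects, strip_width):
    """Place rects (w, h) in the given order on shelves of a fixed-width strip;
    return the bounding-box area (smaller is better)."""
    x = y = shelf_h = 0
    for i in order:
        w, h = rects[i]
        if x + w > strip_width:          # start a new shelf
            y += shelf_h
            x, shelf_h = 0, 0
        x += w
        shelf_h = max(shelf_h, h)
    return strip_width * (y + shelf_h)

def evolve(rects, strip_width, pop=30, gens=200):
    n = len(rects)
    population = [random.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: shelf_pack(o, rects, strip_width))
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(n), 2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda o: shelf_pack(o, rects, strip_width))
    return best, shelf_pack(best, rects, strip_width)

rects = [(random.randint(1, 5), random.randint(1, 5)) for _ in range(20)]
print(evolve(rects, strip_width=10))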

    FFTPL: An Analytic Placement Algorithm Using Fast Fourier Transform for Density Equalization

    We propose FFTPL, a flat nonlinear placement algorithm that uses the fast Fourier transform for density equalization. The placement instance is modeled as an electrostatic system, with the density cost analogous to the potential energy. A well-defined Poisson's equation is proposed for gradient and cost computation. Our placer outperforms state-of-the-art placers in both solution quality and efficiency.
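
    The following is a minimal sketch of the electrostatic analogy, assuming a periodic unit square and a uniform grid: cell positions are binned into a density (the "charge"), Poisson's equation is solved in the frequency domain with NumPy's FFT, and the negative gradient of the resulting potential gives a spreading force. Grid size and binning are illustrative choices; this is not the actual FFTPL implementation.

# FFT-based Poisson solve for density equalization: solve grad^2(psi) = -rho
# on a periodic grid, then read the force as -grad(psi).

import numpy as np

def density_potential(cell_xy, grid=64):
    """cell_xy: (N, 2) positions in [0, 1)^2. Returns (rho, psi, gx, gy)."""
    rho, _, _ = np.histogram2d(cell_xy[:, 0], cell_xy[:, 1],
                               bins=grid, range=[[0, 1], [0, 1]])
    rho = rho - rho.mean()                      # zero-mean source term
    kx = 2 * np.pi * np.fft.fftfreq(grid)
    ky = 2 * np.pi * np.fft.fftfreq(grid)
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    k2[0, 0] = 1.0                              # avoid division by zero at DC
    psi_hat = np.fft.fft2(rho) / k2             # -k^2 psi_hat = -rho_hat
    psi_hat[0, 0] = 0.0
    psi = np.real(np.fft.ifft2(psi_hat))
    gx, gy = np.gradient(-psi)                  # force field ~ -grad(psi)
    return rho, psi, gx, gy

cells = np.random.rand(1000, 2)
rho, psi, gx, gy = density_potential(cells)
print(psi.shape, float(np.abs(gx).max()))

    Analytic placers typically scale and precondition this field before taking gradient steps; the sketch only shows the FFT-based Poisson solve itself.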

    An integrated placement and routing approach

    As feature sizes continue to scale down, interconnects have become the major contributor to signal delay. Since interconnects are largely determined by placement and routing, these two stages play key roles in achieving high performance. Historically, they have been divided into two separate stages to make the problem tractable, so routing information is not available during the placement process. Net models, such as half-perimeter wirelength (HPWL), are employed to approximate routing and thereby simplify the placement problem. However, a placement that is good in terms of these objectives may not be routable at all, because placement and routing optimize different objectives. This inconsistency makes the results obtained by the two-step optimization method far from optimal.

    In order to achieve high-quality placement solutions and ensure routability in the subsequent routing stage, we propose an integrated placement and routing approach. In this approach, we integrate placement and routing into the same framework so that the objective optimized in placement is the same as that in routing. Since both placement and routing are very hard problems (NP-hard), we need very efficient algorithms so that integrating them does not lead to intractable complexity.

    In this dissertation, we first develop a highly efficient placer, FastPlace 3.0, for the large-scale mixed-size placement problem. Then, an efficient and effective detailed placer, FastDP, is proposed to improve the global placement by moving standard cells. For high-degree nets, we propose a novel performance-driven topology design algorithm that generates good topologies to meet very strict timing requirements. In the routing phase, we develop two global routers, FastRoute and FastRoute 2.0. Compared to traditional global routers, they generate better solutions and are two orders of magnitude faster. Finally, based on these efficient and high-quality placement and routing algorithms, we propose a new flow that closely integrates placement and routing. In this flow, global routing is applied extensively to obtain interconnect information and to guide the placement process. In this way, we obtain very good placement solutions with guaranteed routability.
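
    As a small aside on the HPWL net model mentioned above, here is a minimal sketch of how it is typically computed: the routed length of a net is approximated by the half-perimeter of the bounding box of its pins. The function names are illustrative.

# Half-perimeter wirelength (HPWL) of a net and of a whole netlist.

def hpwl(pins):
    """pins: list of (x, y) pin coordinates of one net."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_hpwl(nets):
    return sum(hpwl(pins) for pins in nets)

print(total_hpwl([[(0, 0), (3, 4), (1, 2)],
                  [(5, 5), (6, 1)]]))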

    Floorplan-guided placement for large-scale mixed-size designs

    In the nanometer-scale era, placement has become an extremely challenging stage in modern Very-Large-Scale Integration (VLSI) design. Millions of objects need to be placed legally within a chip region, while both interconnection and object distribution have to be optimized simultaneously. Due to the extensive use of Intellectual Property (IP) and embedded memory blocks, a design usually contains tens or even hundreds of big macros. A design with big movable macros and numerous standard cells is known as a mixed-size design. Because of the large size difference between macros and standard cells, placing mixed-size designs is much more difficult than standard-cell placement. This work presents an efficient, high-quality placement tool for modern large-scale mixed-size designs. The tool is built on a new placement flow whose main idea is to use a fixed-outline floorplanning algorithm to guide a state-of-the-art analytical placer. The flow consists of four steps: 1) the objects in the original netlist are clustered into blocks; 2) floorplanning is performed on the blocks; 3) the blocks are shifted within the chip region to further optimize the wirelength; 4) with the big-macro locations fixed, incremental placement is applied to place the remaining objects. Several key techniques are proposed for the first two steps, focused on two aspects: 1) a hypergraph clustering algorithm that can reduce the original problem size without loss of placement Quality of Results (QoR); and 2) a fixed-outline floorplanning algorithm that can provide good guidance to the analytical placer at the global level. The effectiveness of each key technique is demonstrated by promising experimental results compared with state-of-the-art algorithms. Moreover, on industrial mixed-size designs, the new placement tool shows better performance than other existing approaches.
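
    A hedged sketch of the kind of connectivity-based clustering used in step 1, assuming a clique net model and greedy heavy-edge matching; this is a generic stand-in for the dissertation's hypergraph clustering algorithm, and all names are illustrative.

# One clustering pass: weight cell pairs by shared-net connectivity (clique
# model), then greedily merge the most strongly connected unmatched pairs.

from collections import defaultdict

def cluster_once(num_cells, nets):
    """nets: list of cell-index lists. Returns a cell -> cluster mapping."""
    weight = defaultdict(float)
    for net in nets:
        if len(net) < 2:
            continue
        w = 1.0 / (len(net) - 1)           # clique-model edge weight
        for i in range(len(net)):
            for j in range(i + 1, len(net)):
                a, b = sorted((net[i], net[j]))
                weight[(a, b)] += w
    cluster = list(range(num_cells))
    matched = set()
    # Greedily match the heaviest edges first (heavy-edge matching).
    for (a, b), _ in sorted(weight.items(), key=lambda kv: -kv[1]):
        if a not in matched and b not in matched:
            cluster[b] = a
            matched.update((a, b))
    return cluster

print(cluster_once(6, [[0, 1], [1, 2, 3], [3, 4], [4, 5], [0, 5]]))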

    Generic Connectivity-Based CGRA Mapping via Integer Linear Programming

    Coarse-grained reconfigurable architectures (CGRAs) are programmable logic devices with large coarse-grained ALU-like logic blocks and multi-bit datapath-style routing. CGRAs often have relatively restricted data routing networks, which makes them attractive targets for CAD mapping tools that use exact methods, such as Integer Linear Programming (ILP). However, tools that target general architectures must use large constraint systems to fully describe an architecture's flexibility, resulting in lengthy run-times. In this paper, we propose to derive connectivity information from an otherwise generic device model and to use it to create simpler ILPs, which we combine in an iterative schedule that retains most of the exactness of a fully generic ILP approach. The new approach achieves a geometric-mean speed-up of 5.88x on benchmarks that do not hit a 7.5-hour time limit with the fully generic ILP, and 37.6x otherwise. This was measured over three different CGRA architectures, using the benchmark set originally used to evaluate the fully generic approach plus several additional benchmarks representing computation tasks. All run-times of the new approach are under 20 minutes, with a 90th-percentile time of 410 seconds. The proposed mapping techniques are integrated into, and evaluated with, the open-source CGRA-ME architecture modelling and exploration framework. Comment: 8 pages of content; 8 figures; 3 tables; to appear in FCCM 2019; uses the CGRA-ME framework at http://cgra-me.ece.utoronto.ca.
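
    To make the flavour of such ILP formulations concrete, here is a minimal sketch (using the PuLP library) that assigns dataflow operations to processing elements (PEs) of a tiny 2x2 mesh CGRA, forbidding co-assignment of communicating operations to PEs the network does not connect. The architecture, dataflow graph, and variable names are made-up examples, not the paper's generic device model or its connectivity-derived ILPs.

# Tiny ILP: binary variables x[op, pe] place operations on PEs; connectivity
# constraints rule out placements the routing network cannot support.

import pulp

ops = ["a", "b", "c"]                 # dataflow operations
edges = [("a", "b"), ("b", "c")]      # a feeds b, b feeds c
pes = [(0, 0), (0, 1), (1, 0), (1, 1)]

def connected(p, q):                  # mesh: a PE reaches itself and 4-neighbours
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) <= 1

prob = pulp.LpProblem("cgra_map", pulp.LpMinimize)
x = {(o, p): pulp.LpVariable(f"x_{o}_{p[0]}_{p[1]}", cat="Binary")
     for o in ops for p in pes}
prob += pulp.lpSum(x.values())                        # dummy objective (feasibility)
for o in ops:                                         # each op on exactly one PE
    prob += pulp.lpSum(x[o, p] for p in pes) == 1
for p in pes:                                         # each PE holds at most one op
    prob += pulp.lpSum(x[o, p] for o in ops) <= 1
for a, b in edges:                                    # forbid unconnected placements
    for p in pes:
        for q in pes:
            if not connected(p, q):
                prob += x[a, p] + x[b, q] <= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({o: p for o in ops for p in pes if x[o, p].value() == 1})

    Real CGRA mappers also model routing resources, multiplexers, and scheduling; the point here is only how restricting the constraints to connected PE pairs keeps the constraint system small.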

    Congestion Reduction in Traditional and New Routing Architectures

    In dense integrated circuit designs, management of routing congestion is essential; an over-congested design may be unroutable. Many factors influence congestion: placement, routing, and the routing architecture all contribute. Previous work has shown that different placement tools can have substantially different demands for each routing layer; our objective is to develop methods that allow “tuning” of interconnect topologies to match the available routing resources. We focus on congestion minimization for both Manhattan and non-Manhattan routing architectures, and make two main contributions. First, we combine prior heuristics for non-Manhattan Steiner trees and Preferred Direction Steiner trees into a hybrid approach that simultaneously handles arbitrary routing directions, via minimization, and layer assignment of edges. Second, we present an effective method to adjust Steiner tree topologies to match routing demand to available resources, resulting in lower congestion and better routability.
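
    As a hedged illustration of matching routing demand to resources, the sketch below estimates a per-bin congestion map by spreading each net's half-perimeter wirelength uniformly over its bounding box and flagging bins whose demand exceeds a capacity. The bin count and capacity are made-up values, and this is a generic estimator rather than the paper's method.

# Probabilistic bounding-box congestion estimate over a uniform grid of bins.

import numpy as np

def congestion_map(nets, bins=8, capacity=4.0):
    """nets: list of pin lists [(x, y), ...] with coordinates in [0, 1)."""
    demand = np.zeros((bins, bins))
    for pins in nets:
        xs = [int(x * bins) for x, _ in pins]
        ys = [int(y * bins) for _, y in pins]
        x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
        wl = (x1 - x0) + (y1 - y0)                  # half-perimeter length in bins
        area = (x1 - x0 + 1) * (y1 - y0 + 1)
        demand[x0:x1 + 1, y0:y1 + 1] += wl / area   # spread demand over the box
    overflow = np.maximum(demand - capacity, 0.0)
    return demand, overflow

rng = np.random.default_rng(0)
nets = [[tuple(p) for p in rng.random((4, 2))] for _ in range(50)]
demand, overflow = congestion_map(nets)
print(float(demand.max()), int((overflow > 0).sum()))

    A topology-adjustment pass in the spirit of the paper would then reshape Steiner trees whose edges cross overflowed bins.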